If I have an img tag like
<img src="example.png" />
and I set it via
myImg.src = "example.png";
to the same value again, will this be a no-op, or will browsers unnecessarily redraw the image? (I'm mainly interested in the behaviour of IE6-8, FF3.x, Safari 4-5 and Chrome.)
I need to change many (hundreds of) images at once, and manually comparing the src attribute might be a bit superfluous, as I assume the browser already does this check for me?
Don't assume the browser will do it for you. I am working on a project of similar scale which requires hundreds of (dynamic-loading) images, with speed as the top priority.
Caching the 'src' property of every element is highly recommended. It is expensive to read and set hundreds of DOM element properties, and even setting src to the same value may cause reflow or paint events.
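For illustration, a minimal sketch of that caching idea, assuming the icons sit in a collection; the names icons, srcCache and setIconSrc are mine, not from the original project:

var icons = document.getElementsByTagName('img'); // hypothetical icon list
var srcCache = [];

function setIconSrc(i, url) {
    // Compare against our own cache instead of reading img.src from the DOM;
    // reading a DOM property hundreds of times is itself expensive.
    if (srcCache[i] !== url) {
        srcCache[i] = url;
        icons[i].src = url;
    }
}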
[Edit] The majority of sluggishness in the interface was due to all my loops and processes. Once those were optimized, the UI was very snappy, even when continuously loading hundreds of images.
[Edit 2] In light of your additional information (the images are all small status icons), perhaps you should consider simply declaring a class for each status in your CSS. Also, you might want to look into using cloneNode and replaceChild for a very quick and efficient swap.
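A sketch of that class-per-status idea; the class names and the setStatus helper are hypothetical:

// CSS (one rule per status):
//   .status-ok    { background-image: url(ok.png); }
//   .status-error { background-image: url(error.png); }

function setStatus(el, status) {
    var cls = 'status-' + status;
    if (el.className !== cls) { // skip the DOM write when nothing changed
        el.className = cls;
    }
}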
[Edit 3] Try absolutely-positioning your image elements. It will limit the amount of reflow that needs to happen, since absolutely-positioned elements are outside of the flow.
When you change a bunch of elements at once, you're usually blocking the UI thread anyway, so only one redraw happens after the JavaScript completes, meaning the per-image redraw really isn't a factor.
I wouldn't double-check anything here; let the browser take care of it. Newer browsers are smart enough to do this efficiently (and it's never really been much of a problem anyway).
The expensive case you'll see here is new images loading and reflowing the page as they load; redrawing existing images is very minor compared to that cost.
I recommend using the CSS sprite technique. More info at: http://www.alistapart.com/articles/
You can use an image that contains all the icons. Then instead of changing the src attribute, you update the background property.
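For example, with all icons stacked vertically in one sprite image, switching icons becomes a background-position update; the 16px icon height here is an assumption:

// Assumes a sprite sheet with 16px-tall icons stacked vertically,
// already set as the element's background-image (e.g. via CSS).
function showIcon(el, index) {
    // Shift the sprite so the wanted icon shows through the element's box.
    el.style.backgroundPosition = '0 -' + (index * 16) + 'px';
}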
Having discovered requestAnimationFrame just a moment ago, I have dived into all the information I could find about it. To name just a few of the resources I came across in case there are others looking for more info about it:
http://creativejs.com/resources/requestanimationframe/ - explains the basics about it.
http://www.html5rocks.com/en/tutorials/speed/animations/ - explains how to use it.
Anyway, all of these resources tell me something about how requestAnimationFrame works or how it could/should be used, but none of them tell me when it is right to use it.
Should I use it for animations (repeated changes to the style of an element, much like CSS animations)?
Should I use it when an automated event wants to change the css/classes of one or multiple elements?
Should I use it when an automated event wants to change the text value of one or multiple elements? (e.g. updating the value of a clock once every second)
Should I use it when an automated event wants to modify the DOM?
Should I use it when an automated event needs values like .offsetTop, .offsetLeft and then wants to change styles such as top and left a few lines further?
Should I use it when a user generated event causes any of the above changes?
TL;DR: When is it right to use requestAnimationFrame?
You shouldn't yet. Not really, at least. This is still experimental and may or may not reach full recommendation (it is still a working draft at this point). That said, if you don't care about older browsers, or are willing to work with the available polyfill, the best time to use it is when you are drawing things to the screen that will require the browser to repaint (most animations).
For many simple modifications of the DOM, this method is overkill. It only becomes useful for animations in which you are drawing or moving items quickly and need to make sure the browser's repainting keeps up enough to stay smooth. It allows you to ensure that every frame you calculate will actually be drawn to the screen, and it also provides more accurate time measurements for your animations: the first argument is the time at which the paint will occur, so you can ensure that you are where you should be at that moment.
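A minimal sketch of such a loop, using the timestamp argument to drive the animation's progress; the element, the 200px distance and the one-second duration are placeholders:

var box = document.getElementById('box'); // assumed absolutely-positioned element
var start = null;

function step(timestamp) {
    if (start === null) start = timestamp;
    var progress = Math.min((timestamp - start) / 1000, 1); // 0..1 over one second
    box.style.left = (progress * 200) + 'px'; // move 200px over that second
    if (progress < 1) {
        requestAnimationFrame(step); // schedule the next frame until done
    }
}
requestAnimationFrame(step);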
You should not use it when you are doing many simple modifications to the DOM, or for things that don't need to be smoothly transitioned. It is more expensive on your users' computers, so you want to limit it to smoothing transitions, movements and animations. Forcing a frame redraw is not needed every time you make a change on the page; the response will be fast enough that most of the time you don't need to worry about the extra couple of milliseconds between draws.
As the previous answer says, you should not use it for discontinuous changes, which don't need to be smoothly transitioned. In most cases it's used for properties that vary continuously with time.
I am using GWT to build an HTML application where the performance is correct in general.
Sometimes it can load many objects into the DOM, and the application becomes slow. I used the Chrome Developer Tools profiler to see where that time was spent (under Chrome, once the app is compiled, i.e. no GWT overhead), and it is clear that the methods getAbsoluteLeft()/getBoundingClientRect() consume the major part of this time.
Here is the implementation used under Chrome (com.google.gwt.dom.client.DOMImplStandardBase):

private static native ClientRect getBoundingClientRect(Element element) /*-{
    return element.getBoundingClientRect && element.getBoundingClientRect();
}-*/;

@Override
public int getAbsoluteLeft(Element elem) {
    ClientRect rect = getBoundingClientRect(elem);
    return rect != null ? rect.getLeft()
            + elem.getOwnerDocument().getBody().getScrollLeft()
            : getAbsoluteLeftUsingOffsets(elem);
}
This makes sense to me: the more elements in the DOM, the more time it may take to calculate absolute positions. But it is frustrating, because sometimes you know only a subpart of your application has changed, yet those methods still take time to calculate absolute positioning, probably because the browser unnecessarily rechecks a whole bunch of DOM elements. My question is not so much GWT-oriented as it is a browser/JavaScript problem:
Is there any known solution to improve the GWT getAbsoluteLeft / JavaScript getBoundingClientRect problem for applications with a large DOM?
I did not find any clues on the internet, but I thought about solutions like:
(reducing the number of calls to those methods :-) ...
isolating part of the DOM in an iframe, in order to reduce the number of elements the browser has to evaluate to get an absolute position (although it would make it difficult for components to communicate ...)
in the same vein, there might be some CSS property (overflow, position?) or some HTML element (like an iframe) that tells the browser to skip a whole part of the DOM, or simply helps the browser compute absolute positions faster
EDIT :
Using the Chrome Timeline debugger, and performing a specific action while there are a lot of elements in the DOM, I get these average timings:
Recalculate style: nearly zero
Paint: nearly 1 ms
Layout: nearly 900 ms
Layout takes 900 ms, through the getBoundingClientRect method. This page lists all the methods that trigger layout in WebKit, including getBoundingClientRect ...
As many elements in the DOM are not impacted by my action, I assume layout is recalculating the whole DOM, whereas paint is able to narrow its scope through CSS properties/the DOM tree (I can see this through the MozAfterPaintEvent in Firebug, for example).
Apart from grouping and reducing the calls to the methods that trigger layout, any clues on how to reduce the time spent in layout?
Some related articles :
Minimizing browser reflow
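One general clue, independent of GWT, in the spirit of the article above: a read like getBoundingClientRect forces a layout flush whenever style writes are pending, so interleaving reads and writes triggers a full layout per iteration. Batching all reads before all writes avoids that; a sketch with illustrative names:

// "elements" stands in for whatever nodes you measure and move.
var elements = [].slice.call(document.querySelectorAll('.widget'));

// Anti-pattern: each iteration writes then reads, forcing a layout every time.
// elements.forEach(function (el) {
//     el.style.width = '100px';                   // write (invalidates layout)
//     var left = el.getBoundingClientRect().left; // read (forces layout now)
// });

// Better: batch all reads (a single layout flush), then do all writes.
var rects = elements.map(function (el) {
    return el.getBoundingClientRect();
});
elements.forEach(function (el, i) {
    el.style.left = rects[i].left + 'px';
});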
I finally solved my problem: getBoundingClientRect was triggering a full layout pass in the application, which was taking a long time because of heavy CSS rules.
In fact, layout time is not directly proportional to the number of elements in the DOM. You could draw hundreds of thousands of them with light styles and layout would take only 2 ms.
In my case, I had two CSS selectors and a background image which matched hundreds of thousands of DOM elements, and that consumed a huge amount of time during layout. By simply removing those CSS rules, I reduced the layout time from 900 ms to 2 ms.
The most basic answer to your question is to use lazy evaluation, also called delayed evaluation. The principle is that you only evaluate a new position when something it depends upon has changed. It generally requires a fair amount of code to set up but is much cleaner to use once that's done. You'd make one assignment to something (such as a window size) and then all the new values propagate automatically, and only the values that need to propagate.
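A minimal dirty-flag sketch of that principle, with illustrative names; the position is only recomputed when something it depends on has been marked as changed:

function LazyPosition(el) {
    this.el = el;
    this.dirty = true; // force the first computation
    this.cached = null;
}
// Call this from whatever can move things: window resize, your own layout code, etc.
LazyPosition.prototype.invalidate = function () {
    this.dirty = true;
};
LazyPosition.prototype.get = function () {
    if (this.dirty) {
        this.cached = this.el.getBoundingClientRect(); // expensive, done rarely
        this.dirty = false;
    }
    return this.cached;
};

You would wire invalidate() to the events that can change positions (such as a window resize) and call get() everywhere else, so the expensive measurement happens only when something actually changed.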
I am in the process of making a game where the health bar (animated) and some other info is represented visually, like icons showing the number of bombs the player has, etc. Now, this can be done in canvas (by making another canvas for the info that sits over the main canvas), or it can be done using many divs and spans with absolute positioning. This is my first time making a browser-based game, so if any experienced people read this, tell me what you recommend. I would like to know which method would be faster.
The game will also be running on mobile devices. Thanks!
There is no straightforward answer, and I suggest you do FPS testing with different browsers to see how it plays out for your use case. If you do not wish to go that in-depth, I suggest you simply draw the elements inside the canvas, and if you need to hide them, leave the drawHUD() call out of your rendering loop.
For an HTML HUD overlaid on <canvas>, the following factors should be considered:
Can the web browser compositor still hardware-accelerate the <canvas> properly if there are DOM elements on top of it?
HTML / DOM manipulation will always be slower than <canvas> operations, due to the inherent complexity of dealing with DOM elements
<canvas> pixel space stays inside the <canvas>, and it might be difficult to get pixel-perfect alignment between content drawn on the canvas and HTML elements positioned outside it
HTML offers many more text-formatting options than canvas fillText() - is HTML formatting necessary?
Use the canvas. Use two canvases if you want, one overlaid over the other, but use the canvas.
Touching the DOM at all is slow. Making the document redo its layout because the size of DOM elements changed is very slow. Dealing with the canceling (or not) of even more events, because there are DOM items physically on top of the canvas, can be a pain, so why bother dealing with that?
If your HUD does not update very often then the fastest thing to do would be drawing it to an in-memory canvas when it changes, and then always drawing that canvas to the main canvas when you update the frame. In that way your drawHud method will look exactly like this:
function drawHUD() {
    // This is what gets called every frame;
    // one call to drawImage = simple and fast
    ctx.drawImage(inMemoryCanvas, 0, 0);
}
and of course updating the HUD information would be like:
function updateHUD() {
    // This is only called if information in the HUD changes
    inMemCtx.clearRect(0, 0, width, height);
    inMemCtx.fillRect(blah);
    inMemCtx.drawImage(SomeHudImage, x, y);
    var textToDraw = "Actually text is really slow and if there's " +
        "often repeated lines of text in your game you should be " +
        "caching them to images instead";
    inMemCtx.fillText(textToDraw, x, y);
}
Since HUDs often contain text I really do urge caching it if you're using any. More on text performance here.
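A hedged sketch of that text caching: render each distinct line once to an offscreen canvas, then blit it with drawImage on later frames; the 24px line height is an assumption:

var textCache = {}; // maps font + text to a pre-rendered canvas

function getTextCanvas(text, font) {
    var key = font + '|' + text;
    if (!textCache[key]) {
        var c = document.createElement('canvas');
        var cctx = c.getContext('2d');
        cctx.font = font;
        c.width = Math.ceil(cctx.measureText(text).width);
        c.height = 24; // assumed line height; adjust for your font
        cctx.font = font; // resizing the canvas resets context state
        cctx.textBaseline = 'top';
        cctx.fillText(text, 0, 0);
        textCache[key] = c;
    }
    return textCache[key];
}

// Per frame: one cheap drawImage instead of a slow fillText.
// ctx.drawImage(getTextCanvas('Bombs: 3', '16px sans-serif'), x, y);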
As others have said, there is no universally best approach, as it depends on the specifics of what you need to render, how often, and possibly what messaging needs to happen between graphical components.
While it is true that DOM reflows are expensive, this blanket warning is not always applicable. For instance, using position:fixed; elements avoids triggering reflows for the page (though not necessarily within the element, if it has non-fixed children). Repaint is (correct me if this is wrong) expensive because it is pixel pushing, and so it is not intrinsically slower than pushing the same number of pixels to a canvas; it can be faster for some things. What's more, each approach has certain operations that hold performance advantages over the other.
Here are some points to consider:
It's increasingly possible to use WebGL-accelerated canvas elements in many A-grade browsers. This works fine for 2D, with the advantage that drawing operations are sent to the GPU, which is MUCH faster than the 2D context. However, this may not be available on some target platforms (e.g., at the time of this writing, it is available in iOS Safari but not in the iOS UIWebView used if you target hybrid mobile applications). Using a library to wrap canvas can abstract this and use WebGL if it's available. Take a look at pixi.js.
Conversely, the DOM has CSS3 animations/transitions which are typically hardware-accelerated by the GPU automatically (with no reliance on WebGL). Depending on the type of animation, you can often get much faster results this way than with canvas, and often with simpler code.
Ultimately, as a rule in software performance, understanding the algorithms used is critical. That is, regardless of which approach used, how are you scheduling animation frames? Have you looked in a profiler to see what things take the most time? This practice is excellent for understanding what is impacting performance.
I've been working on an app with multiple animations, and have implemented each component both as DOM and as canvas. I was initially surprised that the DOM version performed better than the canvas (wrapped with KineticJS) version, though I now see that this was because all the animated elements were position:fixed and animated via CSS (under the hood, via jQuery UI), thereby getting GPU performance. However, the code to manage those elements felt clunky (in my case; ymmv). Using a canvas approach allows more pixel-perfect rendering, but then it loses the ability to style with CSS (which technically allows pixel-perfect rendering as well, but may be more or less complex to achieve).
I achieved a big speed-up by throttling the most complex animation to a lower framerate, which in my case is indistinguishable from the 60fps version but runs smooth as butter on an older iPad 2. Throttling required using requestAnimationFrame and clamping calls to be no more frequent than the desired framerate (a sketch follows below). This would be hard to do with CSS animations on the DOM (though again, those are intrinsically faster for many things). The next thing I'm looking at is syncing multiple canvas-based components to the same requestAnimationFrame loop (possibly independently throttled, or with a round-robin approach where each component gets a set fraction of the framerate, which may work okay for 2-3 elements). (Incidentally, I have some GUI controls, like sliders, that are not locked to any framerate, as they should be as close to 60fps as possible, and they are small/simple enough that I haven't seen performance issues with them.)
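The throttling described above amounts to something like this sketch; the 30fps target and the draw() function are placeholders:

var fps = 30; // target framerate for the heavy component
var frameInterval = 1000 / fps;
var lastDraw = 0;

function loop(timestamp) {
    requestAnimationFrame(loop);
    // Skip this frame if not enough time has passed since the last draw.
    if (timestamp - lastDraw < frameInterval) return;
    lastDraw = timestamp;
    draw(); // hypothetical render function for the throttled component
}
requestAnimationFrame(loop);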
I also achieved a huge speed boost by profiling and seeing that one class in my code that had nothing to do with the GUI was having a specific method called very often to calculate a property. The class in question was immutable, so I changed the method to memoize the value and saw the CPU usage drop in half. Thanks Chrome DevTools and the flame chart! Always profile.
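In code, that memoization is just computing the value once and reusing it, which is safe precisely because the object is immutable; a generic sketch (the Segment class is made up for illustration):

function Segment(x1, y1, x2, y2) { // immutable: fields never change after construction
    this.x1 = x1; this.y1 = y1;
    this.x2 = x2; this.y2 = y2;
    this._length = null; // memoized value, filled on first use
}
Segment.prototype.length = function () {
    if (this._length === null) {
        var dx = this.x2 - this.x1, dy = this.y2 - this.y1;
        this._length = Math.sqrt(dx * dx + dy * dy); // computed only once
    }
    return this._length;
};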
Most of the time, the number of pixels being updated will tend to be the biggest bottleneck, though if you can do it on the GPU you have effectively regained all the CPU for your code. DOM reflows should be avoided, but this does not mean avoid the DOM. Some elements are far simpler to render using the DOM (e.g. text!) and may be optimized by the browser's (or OS's) native code more than canvas. Finally, if you can get acceptable performance for a given component using either approach (DOM or canvas), use the one that makes the code simplest for managing that type of component.
Best advice is to try isolated portions in the different approaches, run with a profiler, use techniques to over-draw or otherwise push the limits to see which approach can run fastest, and do NOT optimize before you have to. The caveat to this rule is the question you are asking: how do I know in advance which technical approach is going to allow the best performance? If you pick one based on assuming the answer, you are basically prematurely optimizing and will live with the arbitrary pain this causes. If instead you are picking by rapid prototyping or (even better) controlled experiments that focus on the needs of your application, you are doing R&D :)
BrowserQuest displays its HUD using HTML elements, which has the benefit that you don't have to worry about redrawing, etc. (and the performance will be pretty good, given that the entire browser engine is optimized to render the DOM quickly).
They (BrowserQuest) also use several layered canvas elements for different game elements. I don't know the exact structure, but I would guess that which canvas an element is drawn on depends on how often it needs to be redrawn.
I'm working on an HTML5 game; it is already online, but it's currently small and everything is okay.
Thing is, as it grows, it's going to load many, many images, music tracks, sound effects and more. After 15 minutes of playing, at least 100 different resources might have been loaded already. Since it's an HTML5 app, the page never refreshes during the game, so they all pile up in the background.
I've noticed that every resource I load - on WebKit at least, using the Web Inspector - remains there once I remove the <img>, the <link> to the CSS and else. I'm guessing it's still in memory, just not being used, right?
This would eventually end up consuming a lot of RAM and lead to degraded performance, especially on iOS and Android mobiles (which I already notice slightly in the current version), whose resources are more limited than desktop computers'.
My question is: Is it possible to fully unload a Resource, freeing space in the RAM, through JavaScript? Without having to refresh the whole page to "clean it".
Worst scenario: Would using frames help, by deleting a frame to free that frame's resources?
Thank you!
Your description implies you have fully removed all references to the resources. The behavior you are seeing, then, is simply that the garbage collector has not been invoked to clean the space, which is common in JavaScript implementations until it becomes "necessary". Setting references to null or calling delete will usually do no better.
As a common case, you can typically call CollectGarbage() (an IE-specific function) during scene loads/unloads to force the collection process. This is typically the best approach when data is loaded per game "stage", as that is a moment which is not time-critical. You usually do not want the collector to run during gameplay unless the game is not very real-time.
Frames are usually a difficult solution if you want to keep certain resources around for common game controls. You need to consider whether you are refreshing entire resources or just certain resources.
All you can do is rely on JavaScript's built-in garbage collection mechanism.
This kicks in whenever there are no remaining references to your image.
So, assuming you hold a reference to each image, you can remove one from the document with:
img.destroy() // non-standard; provided by some libraries, not by the DOM
or
img.parentNode.removeChild(img)
and then drop your own reference so the memory can be reclaimed.
Worth checking out: http://www.ibm.com/developerworks/web/library/wa-memleak/
Also: Need help using this function to destroy an item on canvas using javascript
EDIT
Here is some code that allows you to load an image into a var.
<script language = "JavaScript">
var heavyImage = new Image();
heavyImage.src = "heavyimagefile.jpg";
......
heavyImage = null; // removes reference and frees up memory
</script>
This is better than using jQuery's .load() because it gives you more control over image references, and they will be removed from memory once the reference is gone (set to null).
Taken from: http://www.techrepublic.com/article/preloading-and-the-javascript-image-object/5214317
Hope it helps!
There are two better ways to load images besides a normal <img> tag, which Google brilliantly discusses here:
http://www.youtube.com/watch?v=7pCh62wr6m0&list=UU_x5XG1OV2P6uZZ5FSM9Ttw&index=74
Loading the images through an HTML5 <canvas>, which is way, way faster. I would really watch that video and implement these methods for more speed. I would imagine garbage collection with canvas works better because it breaks away from the DOM.
Embedded data URLs, where the src attribute of an image tag is the actual binary data of the image (yeah, it's a giant string). It starts like this: src="data:image/jpeg;base64,/9j/MASSIVE-STRING ... ". After using this, you would of course want to remove the node, as discussed in the other answers. (I don't know how to generate this base64 string; try Google or the video - and see one approach sketched below.)
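On generating that base64 string: one well-known approach (my addition, not from the video) is to draw a loaded image to a canvas and call toDataURL(); note the image must be same-origin, or the canvas is tainted and toDataURL() throws:

function imageToDataURL(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    return canvas.toDataURL('image/png'); // "data:image/png;base64,..."
}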
You said: "Worst scenario: Would using frames help, by deleting a frame, to free those frames' resources?"
Yes, using frames can help: deleting a frame frees up the resources it held.
All right, so I've run my tests by loading 3 different HTML pages into an <article> tag. Each page had many huge images - roughly 15 huge images per "page".
So I used the jQuery .load() function to insert each one into the tag. I also had an extra HTML page containing only an <h1>, to see what happened when a page with no images replaced the previous page.
Well, it turns out the RAM usage grows as you start scrolling, and shoots up when going through a particularly big image (big in dimensions as well as file size). But once you leave that behind and lighter images come on screen, the RAM consumption actually goes down. And whenever I replaced the content of the page via JS, the RAM consumption dropped sharply when it had grown too large. Virtual memory remained high throughout and rarely went down.
So I guess the browser is quite smart about handling resources. It does not seem to unload them if you leave the page sitting for a long while, but as soon as you start loading other pages or scrolling, it starts loading and freeing them up.
I guess I don't have anything to worry about after all...
Thanks everyone! =)
Recently we redesigned one of our pages, and the page size suddenly increased from 1MB to 1.98MB.
I compared the number of DOM elements, and it increased from 1600 to 2300. I counted the elements with the command below:
document.getElementsByTagName('*').length
We did a load test and found the load time also increased, from 1.1 to 2 seconds. Is this the reason for all these problems?
I think the above line won't count any inline CSS or JS, right, as they are not DOM elements?
Can you please advise?
Without knowing exactly what you redesigned, it's impossible to know what change caused the increase. But even a 1MB page is pretty large. JavaScript (and particularly jQuery) can change the number of DOM objects... consider this:
$('p').append('<span>Blah</span> <span>blah</span> <span>blah</span>');
That will add 3 DOM objects for each p tag on the page (which could be a lot!) and yet it adds only 71 bytes to your page. jQuery can similarly remove DOM objects. So I don't think the number of DOM objects is really much of a consideration.
The JavaScript that runs can manipulate the DOM and create new nodes, which would affect your count. However, it shouldn't make the page load any slower, as it's rendered on the client side.
I think you need to include more information if you expect to get a better answer.
You should also look into browser extensions (for Firefox) like YSlow, or Firebug (Net tab), that show you all the files being loaded and how long they take to load.
Any time you have more information crossing the wire, it will take longer. Therefore, with more DOM elements in the page, the loading time will be slower. I hope this answers your question, because I'm not really sure what you are actually asking.