Our library ApiNATOMY renders a tree-map to navigate a hierarchical space (like human anatomy, for instance). We used to use nested <div> elements for every tile, but the re-flow process was a constant source of slowdown. We've now switched to pure WebGL (Three.js), and got a remarkable performance boost.
However, this lost us the convenience of HTML-like markup. It now takes 10-20 lines of code to draw a box, and another 10-20 to render some text. Keeping things centered when the box is resized is also a manual job, and I don't even dare dream of automatic line breaks. Then there's the problem of THREE.FontUtils being an incredibly slow way to render text. I've found a way to do it with a separate canvas, but that also requires a lot of manual code, and is less flexible with respect to sizing.
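For reference, the separate-canvas approach I mention looks roughly like this (a minimal sketch with made-up helper names, not from any library; the rounding helper is there because WebGL textures historically want power-of-two dimensions):

```javascript
// Round up to the next power of two, since WebGL textures
// historically want power-of-two dimensions for mipmapping.
function nextPowerOfTwo(n) {
  var p = 1;
  while (p < n) p *= 2;
  return p;
}

// Sketch: render a string into a 2D canvas and wrap it in a
// Three.js texture. Assumes `document` and `THREE` are available;
// only nextPowerOfTwo runs outside a browser.
function makeTextTexture(text, font, fillStyle) {
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.font = font;
  var metrics = ctx.measureText(text);
  canvas.width = nextPowerOfTwo(Math.ceil(metrics.width));
  canvas.height = nextPowerOfTwo(32); // rough line height; tune to taste
  ctx.font = font; // resizing the canvas resets the context state
  ctx.fillStyle = fillStyle;
  ctx.textBaseline = 'top';
  ctx.fillText(text, 0, 0);
  var texture = new THREE.Texture(canvas);
  texture.needsUpdate = true; // tell Three.js to re-upload the canvas
  return texture;
}
```

The resulting texture can then be mapped onto a plane or sprite; the manual part that remains is sizing that plane so the text isn't stretched.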
So my question is: Is there some library of utility classes/functions to make these sorts of 2D jobs in Three.js easier and more robust? Alternatively, any general tips or references?
(Note: There will still be 3D aspects to ApiNATOMY, so a pure 2D renderer is not an option.)
My issue is uncommon and, I think, requires some background in order to adequately phrase the question, so here goes...
I have a 3d world engine similar to Keith Clark's, only without all the cool lighting effects. Instead I have some other immersive 3d perks that are also expensive, and chose to fake the lighting for now in order to be able to have the most complexity possible on mobile without noticeably dropping frames.
I have an approximate visual/spatial scale ratio (not talking transform scales) of css pixels to object size. For instance, 50px is about 5 feet. Note, I am not trying to model exact dimensions in the real world, just close enough to be fairly convincing. Anyway, this means large objects are generally built from tile elements of 100px^2. So a really simple 1-car garage, small hut, etc. is easily modeled as a 100px cube made of divs or whatever you want. Usually I put the wall and top divs inside the bottom div, positioned so it acts as a base, eliminating the need for an extra containing element.
Now, tiles can be any 2d px ratio, but I'm using 100px^2 for big tile css classes, 66px, 50px, 20px, 10px etc. for smaller tile classes to make the markup simpler and easier to change px "scales" in "the world" through the css sheet, or later during runtime style alterations.
It takes a lot of tiles to model complex objects. Obviously, the potential drain on framerate stacks up quickly as you add in objects and animations, especially large ones or complex shapes like cylinders or spheres (ouch), and have a populated simulation environment, especially outside or anywhere you can see a lot of things at one time.
I have completely re-written this thing three times to accommodate some major changes, using different "physical scale" ratios. Once enough complexity was achieved, it became apparent that there is a huge difference in rendering efficiency between different ratios of pixels to feet (or meters, 3 feet being close enough). Basically it works out to pixels per foot, but it's important to note that the basic tile size is just a little bigger than a 1-car garage door.
This difference is not as simple as it would seem - it's not just "more pixels, more GPU". Some experiments proved to be more efficient with 100px : 10ft (what I'm using now) as opposed to 50px : 10ft, which is what I used in the first version.
The second version of the engine is a lot more efficient than the first and third, and not just for this reason. There are different effects involved, and special features that are now central to the purpose of this thing, so it is really time-consuming to play with how many pixels everything is made of.
For the engine to work properly I have to heavily account for various aspects such as css perspective, perspective-origin, altitude and distance of the carousel, and a lot of physical perspective cues that don't have directly-correlating css or DOM attributes and don't easily translate into numbers I can just swap out with an equation. Also, changing one of these can wreak havoc on proper rendering if you're not careful: flashing, visual super-position, incorrect apparent z-order (not real z-order), or just making the visual perspective look "weird", even if it's more efficient and not dropping frames. So making this kind of change is a real hassle and takes a lot of time and page reloads to get right.
Now my question is this:
What is the ideal size ratio of pixels to feet for performance? I've even tried really small, like 1px is the size of a car and different transform scales for the stage to make it look right. I can glean a little more efficiency that way, but it throws my numbers for the physics way out of whack, even when I adjust them I'm not quite happy with how things move. Almost like if you miniaturized yourself to the size of a spider, you and things your relative size would behave a lot differently than they do now.
Should I go small and zoom in? Should I use larger image tiles and transform:scale3d the whole stage down? Or should I just use small images and tile elements? Should I use divs for the tiles, or is one of the newer semantic tags more efficient at rendering in css 3d space?
A combination of the above seems to be more efficient, but it's not a straight line. It's almost like harmonics - there's a periodicity to it. On the small-to-large pixels/foot ratio continuum, some smaller sizes seem to be less efficient than some larger ones.
There's a demo that tests how many cubes your browser can render efficiently, but I wonder if anyone's published results that describe different tile or cube pixel sizes with masses of cubes, or if anyone here has tried something like this.
Addendum: This is primarily geared for Chrome Android as an instant app or embedded in a webview, though hopefully Firefox catches up soon as it can't handle as many cubes. I will love that.
A couple of screenshots of a test space... Note the sky is round. This is the anaglyph view, which takes the most processing. The SBS views are a lot more efficient, as they use half or less of the viewport and thereby have less to render, since the view for each eye is half the width of the anaglyph. There's also the overhead from mixing the two anaglyph views.
I'm building a web app based on javascript/jQuery and php, and I need to render and manage (e.g. have the user drag/drop, resize, etc) a large number (20,000+) of drawn objects on a web page and am looking for a recommendation in terms of approach/libraries to use, mainly to get an idea of how to do this whilst keeping page performance acceptable.
The objects are simple geometric shapes (rectangles, circles, squares, etc) that I will need to attach event handlers to and be able to move/re-size. Shape attributes will be based on properties of javascript objects and I'll need to change the shapes based on the javascript object properties and vice versa.
The canvas area is likely to be quite large (not sure if this will affect performance?) although not all objects will be 'visible' on the page, but must be able to scroll within a div (using overflow, etc) around the full canvas. I have built something for test purposes using jQuery SVGDOM which works well when I have a couple of hundred objects, but the page grinds to a halt when I go over 1000 objects.
What I like about svgdom is the way it fits nicely with jQuery for referencing the DOM objects (for event handlers, etc), but am willing to (try to) develop more complex code if I need to in order to be able to address the larger number of objects that svgdom doesn't seem happy with from a performance perspective.
Any suggestions for how to do this?
I think you need to look into WebGL, which renders using the GPU. A good framework for that is three.js.
Still, to manage your expectations: making 20k objects interactive is really a big challenge and might need some smart caching system to fake it. If you target mobile devices too, I would say your goal is way too ambitious. I am sometimes already happy if 100 objects run/move smoothly.
I'm taking as the answer to my original question that it is not practical to display/manage the number of objects that I need on a single page whether SVG or directly to the canvas.
So my approach must be to reduce the number of objects displayed at any given time - now I just need to figure out what the best way to do this is...
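The most promising direction seems to be viewport culling: keep all 20,000 shapes as plain JavaScript objects, but only create SVG nodes for the ones that intersect the visible scroll area. A rough sketch of the idea (the helper names are mine, not from any library):

```javascript
// Axis-aligned rectangle intersection test: true if the shape's
// bounding box overlaps the visible scroll viewport.
function intersects(shape, viewport) {
  return shape.x < viewport.x + viewport.width &&
         shape.x + shape.width > viewport.x &&
         shape.y < viewport.y + viewport.height &&
         shape.y + shape.height > viewport.y;
}

// Given all shapes (plain JS objects) and the current viewport,
// return only the ones that actually need SVG nodes right now.
function visibleShapes(shapes, viewport) {
  return shapes.filter(function (s) { return intersects(s, viewport); });
}
```

On scroll, diff the visible set against the currently-mounted SVG nodes: remove the ones that left the viewport and append the ones that entered it. A spatial index (grid or quadtree) would make the filter cheap at 20k objects, but even the linear scan above is far cheaper than keeping 20k live DOM nodes.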
I am working on a javascript canvas game and I would like to improve the performance of the game. I am reading some articles about how to achieve better performance - one technique being pre-rendering.
Does it make sense to render every object, each of which has a texture, to its own separate canvas element? Here is an example of an entity I am rendering:
fruitless.ctx.save();
// position, rotate and scale the context to match the entity's physics body
fruitless.ctx.translate(this.body.GetPosition().x, this.body.GetPosition().y);
fruitless.ctx.rotate(this.body.GetAngle());
fruitless.ctx.scale(this.scale.x, this.scale.y);
fruitless.ctx.drawImage(this.texture, ... ); // remaining arguments elided
this.face.draw();
fruitless.ctx.restore();
So essentially I am running the drawImage() function each iteration... Pre-rendering suggests this drawImage() should be done in the initialisation (just once) - is that right?
Hard to give specific recommendations without knowing more...but here's a start:
Put any static background elements in an html image and lay that image down first. Scroll the background image if it is static but larger than your game viewport.
Sort animated elements into several groups by when they need to animate. So sun and cloud elements that animate on frame #5 will be one group. A grape-man and raisin-man that animate every frame will be in a different group. Create a canvas for each of these groups.
Put infrequently animated elements on a sprite-sheet.
Put frequently animated elements in their own image object.
Put frequently re-textured elements in their own offscreen canvas and re-texture there. Here's the trade-off: canvases perform poorly on mobile, so you don't want a lot of them there. But pre-rendering all variations of textures into image objects takes up a lot of memory.
Bottom line:
Pre-rendering will undoubtedly give you a performance boost.
But you need to test various pre-rendering strategies to see which works best on which device
To answer this question:
Does it make sense to render every object, each of which has a texture, to its own separate canvas element?
Well, that depends. How many are there? and what are they doing?
If they're all whizzing around all the time, then you might as well keep them all on the same canvas element, since it will have to be redrawn consistently regardless.
If some are static, group them.
Your goal is to do as few drawImage calls as possible as this is fairly expensive.
Also, talking broadly, it's a good idea to leave the micro optimisations till the end.
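To make the pre-render idea concrete, here's a minimal sketch (the createCanvas parameter is injected only so the helper can be exercised outside a browser; in a page the default is fine):

```javascript
// Pre-render: draw a static element once into an offscreen canvas,
// then blit that canvas with a single drawImage call per frame.
function prerender(width, height, draw,
                   createCanvas = function () { return document.createElement('canvas'); }) {
  var canvas = createCanvas();
  canvas.width = width;
  canvas.height = height;
  draw(canvas.getContext('2d')); // the expensive path/text work happens once, here
  return canvas;                 // per frame: ctx.drawImage(canvas, x, y)
}
```

So "pre-rendering in the initialisation" doesn't mean drawImage disappears from the loop; it means the expensive composition (paths, gradients, text) runs once, and the loop is left with one cheap blit per group.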
I am building a web application which relies on svg heavily. For the reference, I am using raphael js library to deal with all of it.
In this particular case I have implemented something that mimics a scrollbar and moves a bunch of SVG features (~500 elements) across the screen. Some of those features (~100) are <text> elements. Other elements include <rect>, <image> and <path> elements.
So, I noticed that my application is not really very snappy on my laptop, and is down right annoying to deal with on an ipad, due to speed. However, whenever text elements are removed or ignored during scrolling, it immediately gets up to decent speed.
I tried doing some speed tests (very crude ones, using new Date().getTime()) and discovered that it takes ~10 ms to move all the elements except for <text> elements, however it takes ~120 ms when <text> elements are included.
I believe this happens because each letter is rendered as a vector shape and it takes loads of processing power to calculate what exactly is obstructed by such a complex structure.
Is it possible to just embed the text, so the text is rendered as raster graphic, and not as shapes? Or improve performance of rendering text in any other way?
I do not need background transparency, and I do not use any fancy fonts.
You can prerender the text using Canvas and embed images into the SVG. I don't know how this compares to text element rendering in general, but for our demos this works quite well (see the drop shadow in the "hierarchy" example - they are rendered into canvas first and then replicated and referenced from within the SVG).
Note that these demos also make heavy use of virtualization, i.e. if you zoom into the image and only some of the elements are actually inside the viewport, the others are removed from the SVG, which gives a huge speedup.
The demos do a lot more than just moving the elements around, so it should be easy to get the same or even better performance.
I don't know how to do this with Raphael, but I believe you should be able to put the data URL from the canvas into the SVG with Raphael, too.
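For illustration, a sketch of what that could look like (textAsImage is a made-up name; it assumes a browser document and a Raphael paper, and paper.image() is standard Raphael):

```javascript
// Sketch: rasterize a text label to a PNG data URL via a scratch
// canvas, then place it in the Raphael paper as an <image> element.
// Browser-only: needs `document` and a Raphael paper instance.
function textAsImage(paper, text, x, y, font) {
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.font = font;
  canvas.width = Math.ceil(ctx.measureText(text).width);
  canvas.height = 24; // rough line height for a ~16px font; tune as needed
  ctx.font = font;    // resizing the canvas resets the context state
  ctx.textBaseline = 'top';
  ctx.fillText(text, 0, 0);
  // paper.image(src, x, y, width, height) is part of the Raphael API
  return paper.image(canvas.toDataURL('image/png'), x, y,
                     canvas.width, canvas.height);
}
```

Moving the resulting <image> during scrolling should then cost about the same as moving a <rect>, since the browser no longer has to rasterize glyph paths on every frame.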
Paper.print() according to the Raphael site
Creates path that represent given text written using given font at given position with given size
Essentially your text is converted to a path. Obviously this has performance issues.
Probably best to stick to using Paper.text()
UPDATE
So not content with just dishing out advice I have set up some tests on http://www.jsperf.com. They can be used to compare the differences in performance to animate and transform different types of Raphael objects.
If you run these on your iPad it should show if text elements are really much slower to move. One other thing to note is that, at least in the tests I ran, paper.print() and paper.text() were not that different in terms of performance.
Run the tests on jsperf
I am in the process of making a game where the health bar (animated) and some other info is represented visually, like icons showing the number of bombs the player has, etc. Now, this can be done either in canvas (by making another canvas for the info that sits over the main canvas) or with many divs and spans using absolute positioning. This is my first time making a browser-based game, so if any experienced people read this, tell me what you recommend. I would like to know which method would be faster.
The game will also be running on mobile devices. Thanks!
There is no straightforward answer, and I suggest you do FPS testing with different browsers to see how it plays out for your use case. If you do not wish to go that in-depth, I suggest you simply draw the elements inside the canvas, and if you need to hide them, leave the drawHUD() call out of your rendering loop.
For HTML HUD overlay on <canvas> the following factors should be considered
Can the web browser compositor hardware-accelerate the <canvas> properly if there are DOM elements on top of the canvas
HTML / DOM manipulation will always be slower than <canvas> operations, due to the inherent complexity of dealing with DOM elements
<canvas> pixel space stays inside the <canvas>, and it might be difficult to get pixel-perfect alignment if you try to line up DOM elements with pixels drawn inside the canvas
HTML offers many more formatting options for text than canvas fillText() - is HTML formatting necessary?
Use the canvas. Use two canvases if you want, one overlaid over the other, but use the canvas.
Touching the DOM at all is slow. Making the document redo its layout because the size of DOM elements moved is very slow. Dealing with the canceling (or not) of even more events because there are DOM items physically on top of the canvas can be a pain and why bother dealing with that?
If your HUD does not update very often then the fastest thing to do would be drawing it to an in-memory canvas when it changes, and then always drawing that canvas to the main canvas when you update the frame. In that way your drawHud method will look exactly like this:
function drawHUD() {
// This is what gets called every frame
// one call to drawImage = simple and fast
ctx.drawImage(inMemoryCanvas, 0, 0);
}
and of course updating the HUD information would be like:
function updateHUD() {
// This is only called if information in the HUD changes
inMemCtx.clearRect(0, 0, width, height);
inMemCtx.fillRect(blah);
inMemCtx.drawImage(SomeHudImage, x, y);
var textToDraw = "Actually text is really slow and if there's " +
"often repeated lines of text in your game you should be " +
"caching them to images instead";
inMemCtx.fillText(textToDraw, x, y);
}
Since HUDs often contain text I really do urge caching it if you're using any. More on text performance here.
As others have said, there is no universally best approach, as it depends on the specifics of what you need to render, how often, and possibly what messaging needs to happen between graphical components.
While it is true that DOM reflows are expensive, this blanket warning is not always applicable. For instance, using position:fixed; elements avoids triggering reflows for the page (though not necessarily within the element, if it has non-fixed children). Repaint is (correct me if this is wrong) expensive because it is pixel-pushing, and so it is not intrinsically slower than pushing the same number of pixels to a canvas. It can be faster for some things. What's more, each has certain operations that have performance advantages over the other.
Here are some points to consider:
It's increasingly possible to use WebGL-accelerated canvas elements on many A-grade browsers. This works fine for 2D, with the advantage that drawing operations are sent to the GPU, which is MUCH faster than the 2D context. However this may not be available on some target platforms (e.g., at the time of this writing, it is available in iOS Safari but not in the iOS UIWebView used if you target hybrid mobile applications). Using a library to wrap canvas can abstract this and use WebGL if it's available. Take a look at pixi.js.
Conversely, the DOM has CSS3 animations/transitions which are typically hardware-accelerated by the GPU automatically (with no reliance on WebGL). Depending on the type of animation, you can often get much faster results this way than with canvas, and often with simpler code.
Ultimately, as a rule in software performance, understanding the algorithms used is critical. That is, regardless of which approach used, how are you scheduling animation frames? Have you looked in a profiler to see what things take the most time? This practice is excellent for understanding what is impacting performance.
I've been working on an app with multiple animations, and have implemented each component both as DOM and as canvas. I was initially surprised that the DOM version performed better than the canvas (wrapped with KineticJS) version, though I now see that this was because all the animated elements were position:fixed and animated via CSS (under the hood via jQuery UI), thereby getting GPU performance. However, the code to manage these elements felt clunky (in my case; ymmv). Using a canvas approach allows more pixel-perfect rendering, but then it loses the ability to style with CSS (which technically allows pixel-perfect rendering as well, but may be more or less complex to achieve).
I achieved a big speed-up by throttling the most complex animation to a lower framerate, which in my case is indistinguishable from the 60fps version but runs smooth as butter on an older iPad 2. Throttling required using requestAnimationFrame and clamping calls to be no more often than the desired framerate. This would be hard to do with CSS animations on the DOM (though again, those are intrinsically faster for many things). The next thing I'm looking at is syncing multiple canvas-based components to the same requestAnimationFrame loop (possibly independently throttled, or a round-robin approach where each component gets a set fraction of the framerate), which may work okay for 2-3 elements. (Incidentally, I have some GUI controls like sliders that are not locked to any framerate, as they should be as close to 60fps as possible, and they are small/simple enough that I haven't seen performance issues with them.)
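The throttling I describe boils down to a little bookkeeping per requestAnimationFrame tick. A sketch (the shouldDraw split is my own, so the timing decision can be tested on its own):

```javascript
// Pure decision: has at least minInterval ms elapsed since the last draw?
function shouldDraw(now, last, minInterval) {
  return now - last >= minInterval;
}

// Throttle a draw callback to roughly `fps` frames per second while
// the requestAnimationFrame loop keeps running at the display rate.
function throttledLoop(draw, fps) {
  var minInterval = 1000 / fps;
  var last = -Infinity; // guarantee the first tick draws
  function tick(now) {
    if (shouldDraw(now, last, minInterval)) {
      last = now;
      draw(now);
    }
    requestAnimationFrame(tick); // browser-only, of course
  }
  requestAnimationFrame(tick);
}
```

Because rAF timestamps don't land exactly on your interval, the effective rate is approximate; for my purposes "roughly 30fps" was fine, and snapping `last` to a multiple of the interval can tighten it if needed.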
I also achieved a huge speed boost by profiling and seeing that one class in my code that had nothing to do with the GUI was having a specific method called very often to calculate a property. The class in question was immutable, so I changed the method to memoize the value and saw the CPU usage drop in half. Thanks Chrome DevTools and the flame chart! Always profile.
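The memoization is trivial when the object is immutable: compute once, cache, and return the cache thereafter. A sketch with made-up names (my class was not a Point, but the pattern is the same):

```javascript
// Immutable 2D point whose expensive derived property is computed
// lazily and cached; since the inputs never change, the cache is
// always valid.
function Point(x, y) {
  this.x = x;
  this.y = y;
  this._length = null; // cache slot for the memoized value
}

Point.prototype.length = function () {
  if (this._length === null) {
    // the expensive work happens at most once per instance
    this._length = Math.sqrt(this.x * this.x + this.y * this.y);
  }
  return this._length;
};
```

If the property is hit every frame by several components, this turns a hot per-frame computation into a one-time cost, which is exactly where the CPU drop showed up in the profiler.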
Most of the time, the number of pixels being updated will tend to be the biggest bottleneck, though if you can do it on the GPU you have effectively regained all the CPU for your code. DOM reflows should be avoided, but this does not mean avoid the DOM. Some elements are far simpler to render using the DOM (e.g. text!) and may be optimized by the browser's (or OS's) native code more than canvas. Finally, if you can get acceptable performance for a given component using either approach (DOM or canvas), use the one that makes the code simplest for managing that type of component.
Best advice is to try isolated portions in the different approaches, run with a profiler, use techniques to over-draw or otherwise push the limits to see which approach can run fastest, and do NOT optimize before you have to. The caveat to this rule is the question you are asking: how do I know in advance which technical approach is going to allow the best performance? If you pick one based on assuming the answer, you are basically prematurely optimizing and will live with the arbitrary pain this causes. If instead you are picking by rapid prototyping or (even better) controlled experiments that focus on the needs of your application, you are doing R&D :)
Browserquest displays their HUD using HTML elements, which has the benefit that you don't have to worry about redrawing, etc. (and the performance will be pretty good, given that the entire browser engine is optimized to render the DOM fast).
They (Browserquest) also use several layered canvas elements for different game elements. I don't know the exact structure, but I guess that which canvas an element is displayed on depends on how often it needs to be redrawn.