Optimising a graphic and animation intense layout - javascript

I'm working on a website that uses a lot of large images and a lot of javascript.
Check it out here – http://joehamilton.info/1/1/
I've been trying to improve the performance and have had little success. I would just like to keep the frame rate smooth because sometimes it gets bogged down.
I thought it might have been my code that was bogging it down, but after discovering profiling in Chrome it seems to be the "paint" process that is slowing things down.
I'm just wondering what I could do to improve things. I'm open to any suggestions, but I was thinking along the lines of these types of things:
• Will compressing the image files help?
• Would a 300px square repeating pattern image be faster to paint in a 900px square div than a 900px square image?
It's a large and complex site so I would rather not spend ages modifying things if it's not going to help.
Any expert raster image people out there?

For anything moving around you should use transitions and transforms rather than jQuery animate and background-position, as transforms will then be hardware accelerated in some browsers. It also avoids repainting so regularly. http://css3.bradshawenterprises.com/demos/speed.php is an example comparing the two techniques in an admittedly extreme case.
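For example, a minimal sketch of the transition/transform approach (class names made up for illustration):

    /* instead of $(el).animate({ backgroundPosition: ... }) in a JS loop */
    .scroller {
        transition: transform 2s linear;
    }
    .scroller.moved {
        transform: translate3d(-900px, 0, 0);  /* composited; no repaint per frame */
    }

Toggling the class from script starts the move, and the browser interpolates it without repainting the element on every frame (in browsers that composite transforms).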
If you can't do that, ensure your animation uses requestAnimationFrame rather than a setTimeout loop.
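If you do stay with a script-driven loop, it would look something like this (step() is a placeholder for your own per-frame update):

    function loop(time) {
        step(time);                    // update positions/styles for this frame
        requestAnimationFrame(loop);   // scheduled in sync with the display refresh
    }
    requestAnimationFrame(loop);

    // rather than something like:
    // setTimeout(loop, 1000 / 60);    // not tied to vsync, so it drifts and stutters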
That should help a lot.

Related

What pixel-per-foot ratio renders most efficiently in HTML5 3d-transform based sims?

My issue is uncommon and, I think, requires some background in order to adequately phrase the question, so here goes...
I have a 3d world engine similar to Keith Clark's, only without all the cool lighting effects. Instead I have some other immersive 3d perks that are also expensive, and I chose to fake the lighting for now in order to have the most complexity possible on mobile without noticeably dropping frames.
I have an approximate visual/spatial scale ratio (not talking transform scales) of css pixels to object size. For instance, 50px is about 5 feet. Note, I am not trying to model exact dimensions in the real world, just close enough to be fairly convincing. Anyway, this means large objects are generally built from tile elements 100px^2. So a really simple 1-car garage, small hut, etc. is easily modeled as a 100px cube made of divs or whatever you want. Usually I put the wall and top divs inside the bottom div, positioned so it acts as a base, eliminating the need for an extra containing element.
Now, tiles can be any 2d px ratio, but I'm using 100px^2 for big tile css classes, 66px, 50px, 20px, 10px etc. for smaller tile classes to make the markup simpler and easier to change px "scales" in "the world" through the css sheet, or later during runtime style alterations.
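As a rough illustration of what I mean (the class names here are invented for the example, not my real ones):

    /* base "world scale": 100px is roughly 10ft; smaller tile classes for detail */
    .tile-100 { width: 100px; height: 100px; }
    .tile-66  { width: 66px;  height: 66px;  }
    .tile-50  { width: 50px;  height: 50px;  }
    .tile-10  { width: 10px;  height: 10px;  }
    /* changing these values (or swapping the sheet) rescales the whole world */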
It takes a lot of tiles to model complex objects. Obviously, the potential drain on framerate stacks up quickly as you add objects and animations (especially large ones or complex shapes like cylinders or spheres, ouch) and populate the simulation environment, especially outdoors or anywhere you can see a lot of things at one time.
I have completely re-written this thing three times to accommodate some major changes, using different "physical scale" ratios each time. Once enough complexity was achieved, it became apparent there is a huge difference in rendering efficiency when using a different ratio of pixels to feet (or meters, 3 feet being close enough). Basically it works out to pixels per foot, but it's important to note the basic tile size is just a little bigger than a 1-car garage door.
This difference is not as obvious as "more pixels, more GPU" would suggest. Some experiments proved to be more efficient with 100px : 10ft (what I'm using now) as opposed to 50px : 10ft, which is what I used in the first version.
The second version of the engine is a lot more efficient than the first and third, and not just for this reason. There are different effects involved, and special features that are now central to the purpose of this thing, so it is really time-consuming to play with how many pixels everything is made of.
For the engine to work properly I have to heavily account for various aspects such as css perspective, perspective-origin, altitude and distance of the carousel, and a lot of physical perspective cues that don't have directly-correlating css or DOM attributes and don't easily translate into numbers I can just swap out with an equation. Also, changing one of these can wreak havoc on proper rendering if you're not careful: flashing, visual super-position, incorrect apparent z-order (not real z-order), etc., or just making the visual perspective look "weird", even if it's then more efficient and not dropping frames. So making this kind of change is a real hassle and takes a lot of time and page reloads to get right.
Now my question is this:
What is the ideal size ratio of pixels to feet for performance? I've even tried really small, like 1px being the size of a car, with different transform scales for the stage to make it look right. I can glean a little more efficiency that way, but it throws my numbers for the physics way out of whack; even when I adjust them I'm not quite happy with how things move. It's almost like if you miniaturized yourself to the size of a spider, you and things your relative size would behave a lot differently than they do now.
Should I go small and zoom in? Should I use larger image tiles and transform:scale3d the whole stage down? Or should I just use small images and tile elements? Should I use divs for the tiles, or is one of the newer semantic tags more efficient at rendering in css 3d space?
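For instance, the "build big, scale the stage" option would look roughly like this (selector and numbers invented, just to show what I mean):

    /* tiles built at a large px-per-foot ratio, then one scale on the container */
    #stage {
        transform-style: preserve-3d;
        transform: scale3d(0.25, 0.25, 0.25);
    }

versus building everything at the small ratio and leaving the stage transform alone.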
A combination of the above seems to be more efficient, but it's not a straight line. It's almost like harmonics; there's a periodicity to it. On the small-to-large pixels/foot ratio continuum, some smaller sizes seem to be less efficient than some larger ones.
There's a demo that tests how many cubes your browser can render efficiently, but I wonder if anyone's published results that describe different tile or cube pixel sizes with masses of cubes, or if anyone here has tried something like this.
Addendum: This is primarily geared for Chrome Android as an instant app or embedded in a webview, though hopefully Firefox catches up soon as it can't handle as many cubes. I will love that.
A couple of screenshots of a test space... Note the sky is round. This is the anaglyph view, which takes the most processing. The SBS views are a lot more efficient as they use half or less of the viewport, and thereby have less to render, since the view for each eye is half the width of the anaglyph.
There is also the overhead from mixing the anaglyph views.

Slow rasterization in Dev Tools

I'm optimising a site with some fairly simple parallax scrolling. The animated elements are on separate layers (backface-visibility:hidden) and the scripting and rendering steps seem fairly quick. However I'm seeing a lot of time spent on painting:
The actual drawing is fine but those huge hollow green bars represent rasterization in the separate compositor thread.
Here's the link
What am I doing to cause that and how can I improve it?
Okay, I can repro the hollow bars.
They are happening on the compositor thread; that's why we draw them hollow. You can see it more clearly by flipping to the flame chart:
Then, if you recorded the timeline with the Paint checkbox checked, you can see exactly what was inside each paint.
And we can then use the slider to narrow down which draw calls are the most expensive part of these big paints:
(looks like a big cliprect and then the bitmap draw)
But looking at it in aggregate... it appears that you're repainting the world in every frame.
You might want to look at what's happening in each frame... especially to your layers:
HOWEVER.
After all that, it appears you can solve your issues by animating transform: translate() instead of left/top. I would also recommend adding will-change: transform to those items. This will allow the browser to move items around without repainting, and you shouldn't have to re-raster on each frame.
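A rough sketch of what I mean (the selector is a placeholder for your animated elements):

    // move the layer with a transform instead of left/top, and hint the browser
    // to keep it on its own layer so its raster survives between frames
    var layer = document.querySelector('.parallax-layer');
    layer.style.willChange = 'transform';
    window.addEventListener('scroll', function () {
        requestAnimationFrame(function () {
            layer.style.transform = 'translate3d(0, ' + (window.scrollY * -0.4) + 'px, 0)';
        });
    });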
must reads:
Animations and Performance - web fundamentals
High Performance Animations - html5rocks
Cheers

Animating opacity of 1400 Raphael.js objects hurts animation performance

First off, thank you for any help. :)
JSFiddle code.
JSFiddle full screen
As you can see by the fiddle link above, I am animating 1400 objects trying to create a 'twinkling effect'. As the user moves the mouse faster, more hexagon shapes pop into full opacity and have varying fade-out rates. The version in the fiddle fills the space with enough color but feels jerky and clumpy. If I lessen the fade_time variable amount, it is smoother but does not have enough hexagons at full opacity. The end goal is to spell words with the hexagons.
The performance in Chrome is best, less so in Firefox and IE. I tested (using Raphael's element.touchmove) in mobile Safari on an iPad and it was even worse.
I'm looking for any advice on what pieces of the code could be done differently for performance gains.
I saw this answer somebody else gave that was supposed to help with performance, but I'm trying to base the amount of animating hexagons on cursor movement and I'm not sure I could do that with a timer.
This answer mentioned using canvas:
A good alternative would be using Canvas to draw the elements. From my experiments it will be faster than SVG at drawing this many although if you are using some animations, they will be harder to implement than with the RaphaelJS library.
Does that seem like a better route to people, even with the animations the code is using?
This is my first use of Raphael.js. I'm not very experienced in JS in general, so any help is wunderbar!
Edit: Also, seeing this answer about .resize being called more times than the questioner might have thought got me wondering if the .mousemove function may be called more times than I would expect (more than I would need).
I think it chokes on "overlapped" animations, for example:
• hexagon #6 starts its fade
• halfway through its fade, another fade is started
I added a stop() instruction to avoid unexpected results.
Besides, the for() loop doesn't check whether another animation is already in progress, nor whether some hexagon has been randomly selected more than once inside the loop.
As a workaround for this, I added an array to cache the indexes of the hexagons being animated, although it does not seem to be of great help.
To see how many (useless) animations it saved, uncomment the last console.log().
Besides, your getRandomInt() function generated some undefined-index errors (since your array indexes go from 0 to 1399 and it returned integers between 0 and 1400), so I changed it.
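Roughly, the two changes look like this (the hexagons array name is assumed here, not copied from the fiddle):

    // exclusive upper bound, so indexes stay within 0..1399
    function getRandomInt(min, max) {
        return Math.floor(Math.random() * (max - min)) + min;
    }

    var animating = {};                       // indexes of hexagons currently fading
    function twinkle(i) {
        if (animating[i]) return;             // skip hexes already mid-fade
        animating[i] = true;
        hexagons[i].stop()                    // cancel any overlapping animation
            .attr({ opacity: 1 })
            .animate({ opacity: 0 }, fade_time, 'linear', function () {
                delete animating[i];
            });
    }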
See my add-ons here: http://jsfiddle.net/rz4yY/46/

Performance: CSS3 animations vs. HTML5 Canvas

I'm working on a webapp (it will only be running in Chrome 19+) that is currently built with CSS3 transitions. More specifically, I'm using jQuery Transit to fire the CSS3 animations from jQuery itself. The reasoning here was that some of the animations are drawn out over several seconds and jQuery animate wasn't smooth enough; Transit is a great fix for this. jQuery Transit is working quite well, but I'm curious whether HTML5 Canvas would render things even more smoothly. And if so, is it worth pursuing, given that I'm currently using AJAX and percentage-based locations for divs? If anyone here knows how CSS3 animations compare to HTML5 Canvas performance in Chrome and would be willing to give their input, I would greatly appreciate it!
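For reference, I'm firing the moves roughly like this (selector and values are just for illustration, not the real code):

    // jQuery Transit turns this into a CSS transition on transform,
    // so the browser can composite it instead of tweening left/top in JS
    $('#panel').transition({ x: '75%', opacity: 1 }, 3000, 'ease-in-out');

    // the plain jQuery equivalent I moved away from:
    // $('#panel').animate({ left: '75%', opacity: 1 }, 3000);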
CSS3 will give you fewer headaches, you can change it easily in the future, and it will degrade gracefully on systems that aren't canvas-enabled.
If you're using text, you should absolutely stick with CSS if you can get away with it. Canvas ruins the accessibility of your app and prevents users from placing a caret, highlighting text, or using text-to-speech.
If you're just making a funny sliding button or something then you should also just use CSS as it will probably be much easier to implement and maintain. Redoing CSS is easier than slogging over (what can be complex) JavaScript.
I can't honestly tell you if canvas renderings will be smoother. One plus of canvas is that you can animate things to a seemingly larger size (while keeping the canvas the same size) without forcing the DOM to re-layout. On most modern systems this really isn't an issue though.
Furthermore, if it's already done with CSS3, are you actually having performance problems? If nobody has complained about performance yet, why bother rewriting it for canvas? If you aren't encountering any real performance problems so far, why reinvent your app?
The problem I think you might run into with canvas is that it is bitmap based, so scaling up and down after the page is initially rendered will be a problem. Line breaks will also potentially be painful to deal with. The people who write your site's content might find it challenging to insert line breaks, because there is no such thing as a line break in canvas, SVG, or VML; you need to pre-compute them. "\n" using Raphael.js works, but it isn't great. Furthermore, you can't use selectors to target various portions of your SVG graphics; you may be able to with canvas, maybe... Canvas probably has a bunch of the same gotchas.
On the image front you will have blurry images if anything scales, and there are fewer libraries out there that deal with image resizing for canvas. This may change in the future, but it will still be an ordeal to deal with. I'd just stick with your divs/CSS3, with jQuery fallbacks for older browsers.
From a purely performance perspective, check out the first comment on your question. It has some nice benchmarks.

Browser render engines: which strategy would be best for a huge image background

I have a project I'm planning which is based on a kind of 'interactive world' style experience, where the browser's viewport moves around to show many different graphic environments. It must all be fluid, with no page-to-page breaks. The project is in js/html5/css3.
The problem this poses is that the entire 'world' will be perhaps 8,000-15,000 px squared (it also rotates, and has various png alpha overlays on top of it).
I was going to run some tests but there are so many ways to approach this and I'm looking for the most fluid one. My knowledge of the internal workings of browser render engines isn't great so I thought I'd ask around.
I can't use the 'tiling' approach which Google Maps uses, as it's not fluid enough (too blocky); also, when rotating, it's going to create headaches doing the math transforms to work out which tiles to load at which angles. So here are the 2 choices I have boiled it down to:
(1) The "Huge" image approach
The benefit of this is that once it's loaded everything is easy; the downside is that it's going to be huge, and I cannot show an incremental preloader as the image queue will essentially be 2 images (the overlay and the huge img).
(2) Image segments
The benefit is that I can show a preloader with an image queue at 10% increments (10x images)
Question:
Is the 2nd approach going to have a more painful overhead on the browser's rendering engine due to there being 9 separate sets of calculations, or do browser engines simply see them as one painted area once it's initially rendered and then update it as a whole? Or, each time the DOM is changed (rotated etc.), does the browser have to run the same transform/repaint process 9 times?
Thanks very much.
LOTS of tests later. Result: use a big image; it seems to be less for the browser to deal with.
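A minimal sketch of the "huge image" setup being compared (element name, file name, and numbers are illustrative only):

    /* one big world element moved/rotated as a whole; png overlays sit on top */
    #world {
        width: 12000px;
        height: 12000px;
        background: url(world.jpg) no-repeat;
        will-change: transform;
        transform: translate3d(-3000px, -2000px, 0) rotate(15deg);
    }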
