Slow rasterization in Dev Tools - javascript

I'm optimising a site with some fairly simple parallax scrolling. The animated elements are on separate layers (backface-visibility: hidden) and the scripting and rendering steps seem fairly quick. However, I'm seeing a lot of time spent on painting:
The actual drawing is fine but those huge hollow green bars represent rasterization in the separate compositor thread.
Here's the link
What am I doing to cause that and how can I improve it?

Okay, I can repro the hollow bars.
They are happening on the compositor thread; that's why we draw them hollow. You can see it more clearly by flipping to the flame chart:
Then, if you record the timeline with the Paint checkbox checked, you can see exactly what was inside each paint.
And we can then use the slider to narrow down which draw calls are the most expensive part of these big paints:
(looks like a big cliprect and then the bitmap draw)
But looking in aggregate... it appears that you're repainting the world in every frame.
You might want to look at what's happening in each frame... especially to your layers:
HOWEVER.
After all that, it appears you can solve your issues by animating transform: translate() instead of left/top. I would also recommend adding will-change: transform to those items. This will allow the browser to move items around without repainting, and you shouldn't have to re-raster on each frame.
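A minimal sketch of what that swap could look like (the .parallax-layer selector and the 0.4 speed factor are placeholders, not taken from the original site):

    // Sketch: move a layer with transform instead of left/top.
    var layer = document.querySelector('.parallax-layer');
    layer.style.willChange = 'transform';      // hint to keep it on its own layer

    function onScroll() {
      var offset = window.scrollY * 0.4;       // parallax speed factor
      // translate3d keeps the element composited: no repaint, no re-raster
      layer.style.transform = 'translate3d(0, ' + offset + 'px, 0)';
    }

    window.addEventListener('scroll', function () {
      requestAnimationFrame(onScroll);         // batch the update to the next frame
    });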
must reads:
Animations and Performance - web fundamentals
High Performance Animations - html5rocks
Cheers

Related

Questions around VivaGraph WebGL-based rendering

I have been using VivaGraphs for network analysis, but my knowledge is very rusty around JavaScript and concepts of SVG and WebGL in particular. I have been able to create nice networks using both SVG and WebGL and need a few pointers from you:
I feel WebGL is way faster than SVG when it comes to rendering large networks. I tried on a network with 80k edges and 20k nodes. Am I right in this assumption?
SVG makes it far easier to customize the appearance of nodes and edges; WebGL is far too restrictive (or maybe that's my lack of knowledge). Do you believe SVG gives me far more flexibility in customization?
One thing I noticed is that I need to pause my graph after some time, otherwise the clusters in my graph keep on drifting. Is there any way I can restrict my graph coordinates so that it never goes out of my screen size?
One major issue I faced with WebGL was that when I paused the renderer, none of my code worked (like events for node hover, click, etc.). But the moment I resumed it, it worked. This is not the case in SVG: my hover/click functions on nodes work even if the renderer is paused. This is a big showstopper in my case. Do you think there is a way to counter this?
Please open an issue on the GitHub repository or share a link with the broken WebGL inputs - I'll be happy to take a look and fix the problem.
In terms of your intuition: yes, WebGL is much faster, yet it requires more effort to work with.
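As a rough illustration of the pause-after-settling idea from the drifting question (a sketch only; renderer.pause() is part of VivaGraph's public API, but the five-second cutoff and the example data are assumptions):

    // Sketch: stop the force layout after a while so clusters stop drifting.
    var graph = Viva.Graph.graph();
    graph.addLink(1, 2);                          // ...add your real nodes/edges here

    var renderer = Viva.Graph.View.renderer(graph, {
      graphics: Viva.Graph.View.webglGraphics()   // or svgGraphics() for SVG
    });
    renderer.run();

    setTimeout(function () {
      renderer.pause();                           // layout stops; node positions freeze
    }, 5000);                                     // arbitrary settle time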

Optimising a graphic- and animation-intense layout

I'm working on a website that uses a lot of large images and a lot of javascript.
Check it out here – http://joehamilton.info/1/1/
I've been trying to improve the performance and have had little success. I would just like to keep the frame rate smooth because sometimes it gets bogged down.
I thought it might have been code that was bogging it down, but after discovering profiling in Chrome it seems to be the "paint" process that is slowing things down.
I'm just wondering what I could do to improve things. I'm open to any suggestions, but I was thinking along the lines of these types of things:
• Will compressing the image files help?
• Would a 300px square repeating pattern image be faster to paint in a 900px square div than a 900px square image?
It's a large and complex site so I would rather not spend ages modifying things if it's not going to help.
Any expert raster image people out there?
For anything moving around you should use transitions and transforms rather than jQuery animate and background-position, as they will then be hardware accelerated in some browsers. It also avoids repainting so regularly. http://css3.bradshawenterprises.com/demos/speed.php is an example comparing the two techniques in an admittedly extreme case.
If you can't do that, ensure your animation uses requestAnimationFrame rather than a setTimeout loop.
That should help a lot.
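For example, a bare-bones requestAnimationFrame loop that animates a transform rather than left/top or background-position (the element id and the drift distance are placeholders):

    // Sketch: animate with transform inside requestAnimationFrame,
    // instead of a setTimeout loop changing left/top or background-position.
    var el = document.getElementById('floating-image');   // placeholder id
    var start = null;

    function step(timestamp) {
      if (start === null) start = timestamp;
      var t = (timestamp - start) / 1000;                  // seconds elapsed
      var x = Math.sin(t) * 100;                           // drift 100px side to side
      el.style.transform = 'translate3d(' + x + 'px, 0, 0)';
      requestAnimationFrame(step);
    }

    requestAnimationFrame(step);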

Browser render engines: which strategy would be best for a huge image background

I have a project I'm planning which is based on a kind of 'interactive world' style experience, where the browser's viewport moves around to show many different graphic environments. It must all be fluid, with no page-to-page breaks. The project is in JS/HTML5/CSS3.
The problem this poses is that the entire 'world' will be perhaps 8-15,000px squared (it also rotates, and has various PNG alpha overlays on top of it).
I was going to run some tests but there are so many ways to approach this and I'm looking for the most fluid one. My knowledge of the internal workings of browser render engines isn't great so I thought I'd ask around.
I can't use the 'tiling' approach which Google Maps uses, as it's not fluid enough (too blocky); also, when rotating around, it's going to create headaches doing the math transforms to work out which tiles to load at what angles. So here are the 2 choices I have boiled it down to:
(1) The "Huge" image approach
The benefit of this is that once it's loaded everything is easy, the downside is that it's going to be huge and I cannot show an incremental preloader as the image queue will essentially be 2 images (overlay and huge img)
(2) Image segments
The benefit is that I can show a preloader with an image queue at 10% increments (10x images)
Question:
Is the 2nd approach going to have a more painful overhead on the browser's rendering engine due to there being 9 separate sets of calculations being done, or do browser engines simply see them as one painted area once it's initially rendered and then update it as a whole? Or, each time the DOM is changed (rotated etc.), does the browser have to run the same transform/repaint process 9 times?
Thanks very much.
LOTS of tests later, the result: use a big image; it seems to be less for the browser to deal with.
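If the only reason to prefer the segmented version was the incremental preloader, one way to keep a progress readout with the single big image (not from the original answers, just a sketch; the file name and the #loader element are placeholders) is to fetch it with XHR and watch the progress events:

    // Sketch: show loading progress for one big image via XHR progress events.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'world.jpg', true);            // placeholder URL
    xhr.responseType = 'blob';

    xhr.onprogress = function (e) {
      if (e.lengthComputable) {
        var percent = Math.round((e.loaded / e.total) * 100);
        document.getElementById('loader').textContent = percent + '%';
      }
    };

    xhr.onload = function () {
      var img = document.createElement('img');
      img.src = URL.createObjectURL(xhr.response); // decode from the downloaded blob
      document.body.appendChild(img);
    };

    xhr.send();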

Most efficient way to draw particles in HTML5 on iPad 2

I'm trying to create moving lights with trails for an HTML5 website/app targeted at iPad 2.
I wonder what the best way to do this is and whether using HTML5 is viable at all. I chose HTML5 because it's easier and cheaper to develop and deploy than native iOS apps with Objective C. Of course if it turns out that HTML5 simply doesn't offer enough performance I might have to swallow the bitter pill.
Anyway, to give you an impression of what I'm talking about, this is what I've got so far:
screenshot http://devdali.no-ip.org/mathias/test-lights/screenshots/1.jpg
Or you can see it in action here (only works in webkit based browsers).
At first I tried using HTML5 canvas and drawing radial gradients as particles, in a similar manner to what you see above. It worked, but the framerate was horrible even on my desktop computer!
So after a bit of reading I found out that CSS3 transforms may be hardware accelerated, so I built the version you see above. Every "particle" is a 64x64 PNG image. For each light there is the "head" light (one img) followed by a trail consisting of 115 img elements. Each img element is transformed using "translate3d" (as well as scale and rotation). Also, the opacity of each element is adjusted dynamically.
Doing it this way provided much better framerates on my computer, but I doubt the iPad 2 will handle it.
I'd be grateful if anyone could give me some hints on how to improve the performance of this in general and considering the target platform.
Thanks for any help in advance!
If you accept small changes to the effect, another procedure may work faster:
Instead of drawing the lights' trails by means of many particles, just draw the lights at their current positions in a canvas element.
You can then darken the whole image at the beginning of a frame by filling a black rectangle with a very low opacity on top. This way the trails fade to dark, but won't alter their color like they do now.
The number of drawing operations, however, will be reduced vastly. The most costly operation would be filling the fading rectangle every frame.
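A rough sketch of that fade-to-black technique (the canvas id, the example light, and the 0.1 fade opacity are assumptions):

    // Sketch: draw lights each frame; fade old frames with a translucent black fill.
    var canvas = document.getElementById('lights');       // placeholder id
    var ctx = canvas.getContext('2d');
    var lights = [{ x: 100, y: 100, vx: 2, vy: 1.5 }];    // one example light

    function frame() {
      // Darken everything already on the canvas; old positions fade into black.
      ctx.fillStyle = 'rgba(0, 0, 0, 0.1)';
      ctx.fillRect(0, 0, canvas.width, canvas.height);

      // Draw each light at its current position only.
      lights.forEach(function (l) {
        l.x += l.vx;
        l.y += l.vy;
        ctx.fillStyle = 'rgb(255, 220, 120)';
        ctx.beginPath();
        ctx.arc(l.x, l.y, 8, 0, Math.PI * 2);
        ctx.fill();
      });

      requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);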
This should be built with canvas. Check out EaselJS and this demo:
http://easeljs.com/
http://easeljs.com/demos/MusicVisualizer/index.html
You could optimize performance a LOT by using WebGL, which is supported on the iPad 2... but, as Nison Maël stated, it is not supported for basic HTML pages in iOS Safari...
For the time being you only have canvas as a solution, which will still give you better performance...
(You can check this blog for more info:
http://learningwebgl.com/blog/
With a little faith and time you'll be amazed!)

HTML5 canvas performance on small vs. large files

I seem to be experiencing varying performance using an HTML5 canvas based on the memory size of the page... perhaps the number of images (off-screen canvases) that are loaded. How do I best locate the source of the performance problem? Or does anyone know if in fact there is a performance issue when there's a lot of data loaded, even if it isn't all being used at once?
Here's an example of good performance. I have a relatively simple map. It's between 700 and 800 KB. You can drag to scroll around this map relatively smoothly.
There's another file (which you may not want to look at due to its large size).
It's about 16 MB and contains dozens, maybe on the order of a hundred images/canvases. It draws a smaller view so it should go faster. But it doesn't. Many of the maps lag quite severely compared to the simpler demo.
I could try to instrument the code to start timing things, but I have not done this in JavaScript before, and could use some help. If there are easier ways to locate the source of performance problems, I'd be interested.
In Google Chrome and Chromium, you can open the developer tools (tools->developer tools) and then click on "Profiles". Press the circle at the bottom, let the canvas redraw and then click on the circle again. This gives you a profile that shows how much time was spent where.
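If you do want to instrument the code by hand, the simplest version is wrapping the redraw with console.time/console.timeEnd or performance.now() (drawMap below is a placeholder for whatever your draw routine is called):

    // Sketch: time a single redraw by hand.
    console.time('redraw');
    drawMap();                                   // placeholder for your draw routine
    console.timeEnd('redraw');                   // logs e.g. "redraw: 12.3ms"

    // Or collect the numbers yourself:
    var t0 = performance.now();
    drawMap();
    console.log('redraw took ' + (performance.now() - t0).toFixed(1) + ' ms');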
I've been working on some complex canvas stuff where rendering performance mattered to me.
I wrote some test cases in jsperf and came to the conclusion that a rule of thumb is that a source offscreen canvas should never be more than 65536 pixels.
I haven't yet come to a conclusion about why this is, but likely a data structure or data type has to be changed when dealing with large source canvases.
putImageData showed similar results.
destination canvas size didn't seem to matter.
Here are some tests I wrote that explore this performance limitation:
http://jsperf.com/magic-canvas/2
http://jsperf.com/pixel-count-matters/2
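To make the rule of thumb concrete (a sketch; 256x256 is just one size that sits exactly at the 65536-pixel limit the tests above suggest):

    // Sketch: keep source (offscreen) canvases at or under 65536 pixels.
    // 256 * 256 = 65536, so tiles of this size stay inside the fast path.
    var tile = document.createElement('canvas');
    tile.width = 256;
    tile.height = 256;

    var tctx = tile.getContext('2d');
    // ...draw one map tile into tctx here...

    // Later, blit the small source canvas onto the big visible one;
    // per the tests, the destination canvas size didn't seem to matter.
    var screen = document.getElementById('map');   // placeholder id
    var sctx = screen.getContext('2d');
    sctx.drawImage(tile, 0, 0);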
