Show loading indication when drawing large canvas - javascript

I'm rendering a canvas with a lot of elements. Because of the large number of elements and their complex shapes, they are not displayed instantly but appear with a delay (sometimes more than 10 seconds). During this time the whole application freezes and no loading indication is shown. Is it possible to somehow show a loading indication whenever the canvas is not completely rendered yet?

It really depends on the app itself, its drawings, and the general architecture of your app.
That task can be very complex in some situations. There are a few options:
1. Use a webworker
That is probably the best approach. You run a web worker and do all the drawing into an offscreen canvas; it is simple if you can use the native 2D API for all drawings (a minimal sketch follows the list of options below). If you are using Konva, take a look here: https://konvajs.org/docs/sandbox/Web_Worker.html#page-title
I don't know of any way to run a react-konva application inside a web worker, though.
2. Use incremental drawing
You need to check where the slowness comes from: from rendering, or from creating too many objects? If the JS thread spends too much time just initializing the nodes, you can create them in steps: create 100 nodes -> draw the canvas -> wait a bit -> create 100 more nodes -> and so on. That way the drawing appears on the screen gradually and the UI will probably not freeze (see the second sketch below).
3. Optimizing drawings
There are many tips here: https://konvajs.org/docs/performance/All_Performance_Tips.html#page-title
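
As a rough illustration of option 1 (not Konva-specific), here is a minimal sketch that hands a canvas over to a worker and does the heavy 2D drawing there, so the main thread stays free to animate a spinner. The file name draw-worker.js, the shape count, and hideLoadingIndicator are made up for the example, and browser support for OffscreenCanvas is assumed.

// main.js: hand the canvas over to a worker
const canvas = document.getElementById('stage');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('draw-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);
worker.onmessage = () => hideLoadingIndicator(); // your own spinner logic

// draw-worker.js: all heavy drawing happens off the main thread
self.onmessage = (e) => {
  const ctx = e.data.canvas.getContext('2d');
  for (let i = 0; i < 50000; i++) {
    ctx.beginPath();
    ctx.arc(Math.random() * 1000, Math.random() * 1000, 2, 0, Math.PI * 2);
    ctx.fill();
  }
  self.postMessage('done');
};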
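
And a sketch of option 2: create and draw the nodes in small batches, yielding to the browser between batches so a loading indicator can keep animating. createNode and the batch size are placeholders for your own setup.

// Incremental drawing: build the scene in chunks so the UI thread can breathe.
const BATCH = 100; // nodes per chunk, tune for your app
let created = 0;

function createBatch(total, createNode, onDone) {
  const end = Math.min(created + BATCH, total);
  for (; created < end; created++) {
    createNode(created); // your expensive node/shape creation
  }
  // redraw here (e.g. layer.batchDraw() in Konva) so progress is visible
  if (created < total) {
    requestAnimationFrame(() => createBatch(total, createNode, onDone));
  } else {
    onDone(); // hide the loading indicator
  }
}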

Related

Multiple cursors in a web app - how to display them?

I'm working on an app for scheduling projects. One of its main features should be displaying the cursors of currently logged-in users in real time, like in Figma:
On the backend I'm going to use Firebase Realtime Database, where I'll store the mouse cursor coordinates. But I've got a problem with the frontend part: I'm wondering what the best approach to displaying them would be.
The most common solution is to use an HTML canvas, but I'm afraid this will mean I'll have to completely rebuild my app's frontend ;)
So maybe just some small divs / SVG elements representing the other users' cursors? With this solution I'm afraid the cursors will cover interface elements, so it won't be possible to click on those elements. Maybe it will require playing with z-index?
Please let me know what, in your opinion, would be the best approach.
Canvas is the best option here. You can use it as a layer above the rest of the page and, as mentioned in the comments, set pointer-events: none; to make sure it doesn't interfere with the rest of the page's functionality.
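A minimal sketch of that overlay, assuming the cursor data arrives as an object of { x, y } positions keyed by user id (the styling and data shape are illustrative):

// Full-page canvas overlay that ignores mouse events, so the UI underneath stays clickable.
const overlay = document.createElement('canvas');
Object.assign(overlay.style, {
  position: 'fixed',
  inset: '0',
  pointerEvents: 'none', // clicks pass straight through to the app
  zIndex: '9999',
});
overlay.width = window.innerWidth;
overlay.height = window.innerHeight;
document.body.appendChild(overlay);
const ctx = overlay.getContext('2d');

// cursors: { userId: { x, y } }, e.g. kept in sync from Firebase
function drawCursors(cursors) {
  ctx.clearRect(0, 0, overlay.width, overlay.height);
  for (const { x, y } of Object.values(cursors)) {
    ctx.beginPath();
    ctx.arc(x, y, 5, 0, Math.PI * 2);
    ctx.fill();
  }
}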
Changing the positions of multiple SVGs at high frequency sounds inefficient performance-wise; it will push the render process back into the reflow stage.
Here is a great explanation of the browser render process, where you can find the following:
To ensure smooth scrolling and animation, everything occupying the main thread, including calculating styles, along with reflow and paint, must take the browser less than 16.67ms to accomplish. At 2048 X 1536, the iPad has over 3,145,000 pixels to be painted to the screen. That is a lot of pixels that have to be painted very quickly. To ensure repainting can be done even faster than the initial paint, the drawing to the screen is generally broken down into several layers. If this occurs, then compositing is necessary.
Painting can break the elements in the layout tree into layers. Promoting content into layers on the GPU (instead of the main thread on the CPU) improves paint and repaint performance. There are specific properties and elements that instantiate a layer, including <video> and <canvas>, and any element which has the CSS properties of opacity, a 3D transform, will-change, and a few others. These nodes will be painted onto their own layer, along with their descendants, unless a descendant necessitates its own layer for one (or more) of the above reasons.
Layers do improve performance, but are expensive when it comes to memory management, so should not be overused as part of web performance optimization strategies.
(Meaning you can create a layer for the SVGs, but you'll have to use transform to move them around instead of top/left; see the sketch below.)
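If you do go the DOM route, here is a small sketch of moving a cursor element with transform, which only needs compositing, instead of top/left, which triggers layout; the class name is made up:

// Move a cursor element with transform so the browser only re-composites its layer.
const cursorEl = document.querySelector('.remote-cursor'); // position: fixed; top: 0; left: 0;
cursorEl.style.willChange = 'transform'; // hint to promote it to its own layer

function moveCursor(x, y) {
  cursorEl.style.transform = `translate3d(${x}px, ${y}px, 0)`;
}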
Best of luck with your project:)

WebGL rendering a lot of objects - performance issue

I'm working with the ChemDoodle library for rendering complex chemical structures. Everything works fine, but with big molecules (about 20k atoms) it's quite slow. I don't have much experience in graphics, but I think it might be because each atom is rendered independently: every time the scene rerenders, it has to iterate over the array of atoms (it should be buffered).
My idea was to create some structure that is calculated at init time, and on render only the camera would change its position. I don't need to manipulate individual atoms, only use the mouse to rotate/move the whole molecule. Is something like this even possible, and would it improve performance?
I would appreciate knowing whether it's possible (or any other suggestions), ideally in pure WebGL, without ThreeJS.
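What the question describes (upload all atom positions once at init, then only change the camera each frame) is the standard approach in raw WebGL. A minimal sketch drawing the atoms as points follows; gl is assumed to be an existing WebGLRenderingContext, atomPositions a Float32Array of x,y,z triples, and the shaders and matrix handling are simplified placeholders:

const vsSource = `
  attribute vec3 aPosition;
  uniform mat4 uViewProjection;
  void main() {
    gl_Position = uViewProjection * vec4(aPosition, 1.0);
    gl_PointSize = 4.0;
  }`;
const fsSource = `
  precision mediump float;
  void main() { gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0); }`;

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// Init time: one buffer holding every atom, uploaded to the GPU once.
const vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, atomPositions, gl.STATIC_DRAW);

const aPosition = gl.getAttribLocation(program, 'aPosition');
gl.enableVertexAttribArray(aPosition);
gl.vertexAttribPointer(aPosition, 3, gl.FLOAT, false, 0, 0);
const uViewProjection = gl.getUniformLocation(program, 'uViewProjection');

// Per frame: only the camera (view-projection matrix) changes, one draw call total.
function render(viewProjectionMatrix) { // Float32Array(16) built from the mouse rotation
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.uniformMatrix4fv(uViewProjection, false, viewProjectionMatrix);
  gl.drawArrays(gl.POINTS, 0, atomPositions.length / 3);
}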

HTML5 canvas performance enhancements

I am working on a JavaScript canvas game and I would like to improve the performance of the game. I am reading some articles about how to achieve better performance, one technique being pre-rendering.
Does it make sense to render every object, each of which has a texture, to its own separate canvas element? Here is an example of an entity I am rendering:
fruitless.ctx.save();
fruitless.ctx.translate(this.body.GetPosition().x,this.body.GetPosition().y);
fruitless.ctx.rotate(this.body.GetAngle());
fruitless.ctx.scale(this.scale.x, this.scale.y);
fruitless.ctx.drawImage(this.texture, ... )
this.face.draw();
fruitless.ctx.restore();
So essentially I am running the drawImage() function each iteration... Pre-rendering suggests this drawImage() should be done in the initialisation (just once) - is that right?
Hard to give specific recommendations without knowing more...but here's a start:
Put any static background elements in an html image and lay that image down first. Scroll the background image if it is static but larger than your game viewport.
Sort animated elements into several groups by when they need to animate. So sun and cloud elements that animate on frame #5 will be one group. A grape-man and raisin-man that animate every frame will be in a different group. Create a canvas for each of these groups.
Put infrequently animated elements on a sprite-sheet.
Put frequently animated elements in their own image object.
Put frequently re-textured elements in their own offscreen canvas and re-texture there. Here's the trade-off: canvases perform poorly on mobile, so you don't want a lot of canvases on mobile, but pre-rendering all variations of textures into image objects takes up a lot of memory.
Bottom line:
Pre-rendering will undoubtedly give you a performance boost.
But you need to test various pre-rendering strategies to see which works best on which device.
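To make the pre-rendering idea concrete, a small sketch: draw an expensive, mostly static piece of art once into an offscreen canvas at init, then stamp that cached canvas each frame with a single drawImage. The 64x64 size and the entity fields are made up for the example.

// Init (once): pre-render the expensive shape into its own canvas.
const sprite = document.createElement('canvas');
sprite.width = 64;
sprite.height = 64;
const sctx = sprite.getContext('2d');
sctx.fillStyle = 'orange';
sctx.beginPath();
sctx.arc(32, 32, 30, 0, Math.PI * 2); // stand-in for complex paths/gradients/text
sctx.fill();

// Per frame: one cheap drawImage of the cached canvas, no re-rasterising of the shapes.
function drawEntity(ctx, entity) {
  ctx.save();
  ctx.translate(entity.x, entity.y);
  ctx.rotate(entity.angle);
  ctx.drawImage(sprite, -32, -32);
  ctx.restore();
}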
To answer this question:
Does it make sense to render every object, each of which has a texture, to its own separate canvas element? Here is an example of an entity I am rendering:
Well, that depends. How many are there? and what are they doing?
If they're all whizzing around all the time, then you might as well keep them all on the same canvas element, since it will have to be updated constantly regardless.
If some are static, group them.
Your goal is to make as few drawImage calls as possible, as each call is fairly expensive.
Also, broadly speaking, it's a good idea to leave the micro-optimisations till the end.

How much time does drawing out of the canvas cost?

I know that one of the most expensive operations in HTML5 gamedev is drawing on the canvas. But, what about drawing images outside of it? How expensive is that? What exactly happens when the canvas is 100 by 100 pixels and I try to draw an image at (1000, 1000)? Would checking sprite coordinates to make sure it is inside the canvas make rendering more efficient?
In these tests I used Google Chrome version 21.0.1180.57.
I've made a small fiddle that tests this situation... You can check it out here: http://jsfiddle.net/Yannbane/Tnahv/.
I ran the tests 1,000,000 times, and this is the data I got:
Rendering the image inside the canvas lasted 2399 milliseconds.
Rendering the image outside the canvas lasted 888 milliseconds.
This means that drawing outside the canvas still takes some time: roughly 37% of the time it takes to render it inside.
Conclusion: It's better to check if the image is inside the canvas before rendering it.
But, of course, I wanted to know how much better... So I did another test. This time I implemented boundary checking, and found that it took only 3 milliseconds to "render" the image outside the canvas 1,000,000 times. That's about 29,600% faster than simply calling drawImage outside the canvas.
You can see those tests here: http://jsfiddle.net/Yannbane/PVZnz/3/.
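A minimal version of that boundary check (the sprite fields x, y, width, height and image are assumed for the example):

// Skip drawImage entirely when the sprite can't intersect the visible canvas.
function drawIfVisible(ctx, sprite) {
  const onScreen =
    sprite.x + sprite.width > 0 &&
    sprite.y + sprite.height > 0 &&
    sprite.x < ctx.canvas.width &&
    sprite.y < ctx.canvas.height;
  if (onScreen) {
    ctx.drawImage(sprite.image, sprite.x, sprite.y, sprite.width, sprite.height);
  }
}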
You need to perform this check yourself and skip drawing if a figure is off screen.
That being said, some browsers do optimize this in some conditions. I found out while writing an article on the IE9 performance profiler a while back that IE9 will optimize away drawing an image if it is out of bounds. The transformation matrix may have to be identity for this optimization to work, and either way you shouldn't rely on browsers doing it.
Always always check.
edit: You can run this simple test to see: http://jsperf.com/on-screen-vs-off
It looks like Chrome and Safari certainly optimize it, at least in simple cases, while Firefox doesn't really.

Browser render engines: which strategy would be best for a huge image background

I have a project I'm planning that is based on a kind of 'interactive world' experience, where the browser's viewport moves around to show many different graphic environments. It must all be fluid, with no page-to-page breaks. The project is in JS/HTML5/CSS3.
The problem this poses is that the entire 'world' will be perhaps 8,000-15,000 px on each side (it also rotates, and has various PNG alpha overlays on top of it).
I was going to run some tests but there are so many ways to approach this and I'm looking for the most fluid one. My knowledge of the internal workings of browser render engines isn't great so I thought I'd ask around.
I can't use the 'tiling' approach that Google Maps uses, as it's not fluid enough (too blocky); also, when rotating, it's going to create headaches doing the math/transforms to work out which tiles to load at which angles. So here are the 2 choices I have boiled it down to:
(1) The "Huge" image approach
The benefit of this is that once it's loaded everything is easy; the downside is that it's going to be huge, and I cannot show an incremental preloader, as the image queue will essentially be 2 images (the overlay and the huge image).
(2) Image segments
The benefit is that I can show a preloader with an image queue at 10% increments (10 images).
Question:
Is the 2nd approach going to put a more painful overhead on the browser's rendering engine because there are 9 separate sets of calculations being done, or do browser engines simply see them as one painted area once it's initially rendered and then update it as a whole? Or does the browser have to run the same transform/repaint process 9 times each time the DOM is changed (rotated, etc.)?
Thanks very much.
LOTS of tests later, the result: use one big image; it seems to be less for the browser to deal with.
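For reference, a minimal sketch of the single-big-image approach, panning and rotating the world with a CSS transform so that after the initial paint the browser mainly has to re-composite the layer (the element id is made up):

// One huge pre-composited world image, moved and rotated with transform only.
const world = document.getElementById('world'); // <img> or <div> holding the big background
world.style.willChange = 'transform'; // hint: keep it on its own compositor layer

function moveViewport(offsetX, offsetY, angleDeg) {
  // translate + rotate avoids reflow; only compositing is needed per update
  world.style.transform = `translate(${-offsetX}px, ${-offsetY}px) rotate(${angleDeg}deg)`;
}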
