After a short discussion with a friend about a Canvas project, we realized that there is no clear-cut answer on whether the Canvas rotate or translate functions are better to use. Mainly we want to know which is best for rendering performance, and why.
P.S. The save-and-restore method for rotating images that is recommended all across the net performs terribly...
Since they both boil down to matrix transforms, and "translate" is performed as merely an offset, presumably the translation would yield better performance in general than most types of rotation because there are simply more operations involved with a rotation than translation. See Transformation matrix, Translation matrix and Rotation matrix for some background.
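To make the operation-count argument concrete, here is a minimal sketch of the arithmetic behind each 2D transform applied to a single point (illustrative only; this is not how the browser implements canvas transforms internally):

```js
// Translation of a point is two additions:
function translatePoint(p, tx, ty) {
  return { x: p.x + tx, y: p.y + ty };
}

// Rotation about the origin needs two trig evaluations, four
// multiplications, and two additions:
function rotatePoint(p, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return { x: p.x * c - p.y * s, y: p.x * s + p.y * c };
}
```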
But since the two aren't interchangeable in any way, shape or form, I fail to see the fruit of the debate.
Also, I should add that since there is a general movement toward hardware acceleration for canvas-related features, the cost of any transformation may end up as a constant (e.g. no actual cost) due to co-processor parallelism in the rendering pipeline...
This may be a bit of a naive question, so please go easy on me. I was looking at shaders on shadertoy.com and I'm amazed at how small the GLSL code is for the 3D scenes. Digging deeper, I noticed that most of the shaders use a technique called ray marching.
This technique makes it possible to avoid using vertices/triangles altogether and just employ the pixel shader and some math to create some pretty complex scenes.
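For readers unfamiliar with the technique, here is a minimal, purely illustrative sketch of the per-pixel ray-marching loop, written in JavaScript rather than GLSL; the sphere distance function, step limits, and all names are assumptions for the example, not code from any particular shader:

```js
// Signed distance from point p to a sphere of radius r at the origin.
function sphereSDF(p, r) {
  return Math.hypot(p.x, p.y, p.z) - r;
}

// March a ray from `origin` along the unit vector `dir`, stepping each
// iteration by the distance to the nearest surface ("sphere tracing").
function rayMarch(origin, dir) {
  let t = 0;
  for (let i = 0; i < 100; i++) {
    const p = {
      x: origin.x + dir.x * t,
      y: origin.y + dir.y * t,
      z: origin.z + dir.z * t,
    };
    const d = sphereSDF(p, 1.0);
    if (d < 0.001) return t; // hit: distance to the surface along the ray
    t += d;
    if (t > 100) break; // ray escaped the scene without hitting anything
  }
  return -1; // no hit
}
```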
So I was wondering: why do 3D scenes typically use triangle meshes with WebGL instead of just pixel shaders? Can't we render the entire scene with GLSL and pixel shaders (aka fragment shaders)?
The simple answer is that the techniques on Shadertoy are probably 10, 100, or 1000 times slower than using vertices and triangles.
Compare this Shadertoy forest, which runs at 1 fps at best fullscreen on my laptop:
https://www.shadertoy.com/view/4ttSWf
to this Skyrim forest, which runs at 30 to 60 fps:
https://www.youtube.com/watch?v=PjqsYzBrP-M
Compare this Shadertoy city, which runs at 5 fps on my laptop:
https://www.shadertoy.com/view/XtsSWs
to this Cities: Skylines city, which runs at 60 fps:
https://www.youtube.com/watch?v=0gI2N10QyRA
Compare this Shadertoy Journey clone, which runs at 1 fps fullscreen on my laptop:
https://www.shadertoy.com/view/ldlcRf
to the actual Journey game on PS3, a machine with an arguably slower GPU than my laptop given that the PS3 came out in 2006, which nevertheless runs at 60 fps:
https://www.youtube.com/watch?v=61DZC-60x20#t=0m46s
There are plenty of other reasons. A typical 3D world uses gigabytes of data for textures, characters, animations, collisions, etc.; none of that is available in just GLSL. Another reason is that these shaders often use fractal techniques, so there's no easy way to actually design anything; instead, the author searches the math for something interesting. That would not be a good way to design game levels, for example. In other words, using vertex data makes things far more flexible and editable.
Compare the Journey examples above: the Shadertoy example is a single scene, whereas the game is a vast designed world with buildings, ruins, puzzles, etc.
There's a reason it's called ShaderTOY. It's meant as a fun challenge: given a single function whose only input is which pixel is currently being drawn, write code to draw something. As such, the images people have managed to draw within that limit are amazing!
But they aren't generally the techniques used to write real apps. If you want your app to run fast and be flexible, you use the more traditional techniques of vertices and triangles: the techniques used by GTA5, Red Dead Redemption 2, Call of Duty, Apex Legends, Fortnite, etc.
I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? Which type of object can store the data?
Consider the following:

- The cube of data can be considered as z layers of x/y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels (see the sketch after this list).
- For my application, I have data sets of (x, y, z) = (512, 512, 128) and larger, containing medical data (scans of hearts, brains, ...).
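A minimal sketch of that in-between-layer interpolation, assuming the volume is stored as a flat typed array of grayscale values; the memory layout and all names are illustrative assumptions:

```js
// volume: flat Uint8Array of length dimX * dimY * dimZ holding one
// grayscale value per voxel. Sample at integer x, y and fractional z by
// linearly interpolating between the two nearest layers.
function sampleBetweenLayers(volume, dimX, dimY, dimZ, x, y, z) {
  const z0 = Math.floor(z);
  const z1 = Math.min(z0 + 1, dimZ - 1);
  const t = z - z0; // 0..1: how far we are between the two layers
  const v0 = volume[z0 * dimX * dimY + y * dimX + x];
  const v1 = volume[z1 * dimX * dimY + y * dimX + x];
  return v0 * (1 - t) + v1 * t;
}
```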
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent and render 3D datasets: rasterization and ray-tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring, or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second, for thousands of vertices.
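As a hedged sketch of what using that example addon can look like; the import path, constructor signature, and method names vary between three.js versions, so treat this as an outline rather than a definitive API reference (it assumes an existing THREE.Scene named scene):

```js
import * as THREE from 'three';
// Example addon; its path differs across three.js versions.
import { MarchingCubes } from 'three/addons/objects/MarchingCubes.js';

const resolution = 64; // sampling grid per axis; higher means a finer surface
const material = new THREE.MeshNormalMaterial();
const effect = new MarchingCubes(resolution, material, false, false, 100000);

effect.reset();
// Either add metaball-style density sources...
effect.addBall(0.5, 0.5, 0.5, 0.3, 12);
// ...or write voxel densities into the scalar field directly
// (effect.field is a flat Float32Array in recent versions).
effect.update(); // rebuild the isosurface geometry
scene.add(effect);
```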
However, unless you simply want to render cubes (which I doubt) instead of a surface, you will also need more information associated with each voxel than just its position and color.
The other way is ray-casting. Unless you find a really efficient ray-casting algorithm, you will take a serious performance hit with a naive implementation.
You can try to cast rays from your camera position through your data structure, stop marching when you reach a surface, and project your intersection point back to screen space with the desired color.
You may draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
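A minimal sketch of that per-ray loop over a density volume, in JavaScript purely for illustration; the fixed step size, threshold, and all names are assumptions:

```js
// Step a ray through the volume at a fixed increment until the sampled
// density crosses the surface threshold; returns the hit point or null.
function castRay(origin, dir, sampleDensity, threshold, maxDist, step) {
  for (let t = 0; t < maxDist; t += step) {
    const p = {
      x: origin.x + dir.x * t,
      y: origin.y + dir.y * t,
      z: origin.z + dir.z * t,
    };
    if (sampleDensity(p) >= threshold) return p; // surface reached
  }
  return null; // the ray left the volume without hitting a surface
}
```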
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray-casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.
I'm trying to implement a 2D version of the marching cubes algorithm (marching squares?), and one of the major roadblocks I've run into is performance (using WebGL and three.js). I notice that there's a huge tradeoff between quality (voxel/square size) and performance, and I think the culprit is the center (solid area) of the metaballs.
Obviously I don't care about the faces on the inside of the metaballs, since that's a completely solid area anyway; but I'm not sure how to avoid polygonizing the interior area without treating it the same as the rest of the surface. The problem becomes worse when I add more metaballs to the mix.
How can I get around this problem, so that I maintain decent quality and can render many metaballs at a decent framerate?
If you are implementing the standard marching squares technique, then the cases fully inside and fully outside the surface shouldn't be a problem. In fact, they are the cheapest, because you don't need to do any computation for them.
If you want to reduce the poly count in areas where it is not needed (the central area of the circle), you need to look into an adaptive sampling technique. In this case, the most adequate structure is probably a quadtree (the 2D analogue of an octree).
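A compact sketch of the adaptive idea: recursively subdivide a cell only while the field might cross the isovalue inside it. The crossing test and all names here are illustrative assumptions, not a complete implementation:

```js
// Recursively subdivide a square cell; descend only into cells whose
// corner samples straddle the isovalue (i.e. cells that contain surface).
// Note: corner tests can miss features that lie entirely inside a cell.
function adaptiveMarch(field, iso, x, y, size, minSize, emitCell) {
  const corners = [
    field(x, y), field(x + size, y),
    field(x, y + size), field(x + size, y + size),
  ];
  if (corners.every(v => v >= iso) || corners.every(v => v < iso)) {
    return; // uniform cell: fully inside or outside, no polygons needed
  }
  if (size <= minSize) {
    emitCell(x, y, size); // finest level: run marching squares on this cell
    return;
  }
  const h = size / 2;
  adaptiveMarch(field, iso, x,     y,     h, minSize, emitCell);
  adaptiveMarch(field, iso, x + h, y,     h, minSize, emitCell);
  adaptiveMarch(field, iso, x,     y + h, h, minSize, emitCell);
  adaptiveMarch(field, iso, x + h, y + h, h, minSize, emitCell);
}
```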
The speed issue when decreasing the cell size will always be there, because Marching Cubes is an O(n^3) algorithm (very slow), which makes marching squares O(n^2) (still very slow). There is no way around that, although using an adaptive sampling data structure, as mentioned above, speeds things up.
It seems to me you could improve the quality at the lower resolution. The circle seems to be aliasing a lot (assuming this is not simply due to a low screen resolution). I would check again how you interpolate on the edges of the squares (I hope you don't just use the centres of the edges); using a more appropriate interpolation will give you a better approximation, and you will get better results at lower resolution.
See Paul Bourke's article on marching cubes and check out the interpolation if you are not already doing it.
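A minimal sketch of that edge interpolation (the standard linear isosurface crossing; all names are illustrative):

```js
// Given scalar field values v0 and v1 at the two endpoints of a square's
// edge, place the surface crossing by linear interpolation instead of
// always using the edge midpoint.
function edgeCrossing(p0, p1, v0, v1, isovalue) {
  // Assumes the edge actually crosses the isovalue, so v0 !== v1.
  const t = (isovalue - v0) / (v1 - v0);
  return { x: p0.x + t * (p1.x - p0.x), y: p0.y + t * (p1.y - p0.y) };
}
```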
Here are some references for 3D isosurface extraction techniques (mostly based on MC), which you could benefit from in your 2D case:
(Kazhdan et al., 2007)
(Manson and Schaefer, 2010)
(Wilhelms and Van Gelder, 1992)
P.S. Also check out their references for many more similar and perhaps foundational papers!
I am in the middle of making an HTML5 game. Now I have a problem related to performance.
I have some irregular Shapes, so detecting whether {x,y} is in a Shape using a custom function is near impossible (at least, I think so).
There are two big Shapes, so I need to call Shape.intersects({x,y}) twice, multiple times per second, to detect whether the current {x,y} is in a shape or not.
{x,y} is variable and does not come from touch/mouse events, so I cannot use onmousemove etc.
Each pair of calls to Shape.intersects({x,y}) on a Nexus 5 has ~45ms of overhead. That's ~45ms out of every 100ms! This causes hiccups in the game experience.
The most straightforward solution would be to make Shape.intersects({x,y}) non-blocking. But how? Or does anyone have any other ideas for this problem?
I am using Kinetic v5.0.1
RESULT:
.intersects will re-generate the Shape in memory, which is why it is very costly. According to @MarkE's answer below, using the native canvas function context.isPointInPath(x, y) is very efficient. But:
Note that this will only work for the last path (after using beginPath()). If you need to test several paths (i.e. shapes), you need to re-construct the paths (no need to stroke or fill them, though). This is perhaps one reason why some don't use it.
Source: https://stackoverflow.com/a/17268669/172163
In my shapes I had solid colors, and there are multiple shapes generated dynamically, so I ended up using context.getImageData(x, y, 1, 1) to read the color of the specific pixel and check whether or not it is one of my Shapes' colors. It is more efficient than .intersects(): it costs only ~3ms.
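A minimal sketch of that pixel-color hit test, assuming each shape is drawn in a known solid color (names are illustrative):

```js
// Read the pixel under (x, y) and compare it with the shape's known
// solid fill color; a cheap alternative to KineticJS's .intersects().
function hitTestByColor(ctx, x, y, shapeColor) {
  const [r, g, b, a] = ctx.getImageData(x, y, 1, 1).data;
  return a !== 0 && r === shapeColor.r && g === shapeColor.g && b === shapeColor.b;
}

// Usage: hitTestByColor(ctx, px, py, { r: 255, g: 0, b: 0 })
```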
Why intersects gives you a performance hit (pun intended!)
Internally, KineticJS uses context.getImageData to do hit-tests in the intersects method.
getImageData provides pixel-perfect hit testing but it's slow on mobile devices (as you've discovered).
To answer your question:
No, you cannot run KineticJS's internal methods on a background worker thread.
But...
If you really need to squeeze performance out of hit-testing, you can write your own custom hit-test.
One possibility for such a custom hit test would be to maintain two offscreen HTML5 canvas elements that mirror your two irregular shapes.
That way you can use canvas's context.isPointInPath(x, y) to do your hit-testing.
The important point is that isPointInPath is hardware-accelerated and therefore quite fast.
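A hedged sketch of that offscreen-canvas setup (the shape path here is a placeholder; trace your real irregular outline instead):

```js
// Build the irregular shape's path once on an offscreen canvas that is
// never added to the page, then test points against it as needed.
const hitCanvas = document.createElement('canvas');
hitCanvas.width = 500;
hitCanvas.height = 500;
const hitCtx = hitCanvas.getContext('2d');

hitCtx.beginPath();
hitCtx.moveTo(50, 50); // placeholder outline; trace your real shape here
hitCtx.lineTo(200, 80);
hitCtx.lineTo(120, 220);
hitCtx.closePath(); // no fill or stroke is needed just for hit-testing

function isHit(x, y) {
  return hitCtx.isPointInPath(x, y); // tests against the last defined path
}
```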
If you feel ambitious you could even create an asynchronous background worker thread to execute isPointInPath while your UI thread continues.
You mention the Nexus 5. You can spawn a background worker thread with Android's AsyncTask.
Your background task can return the results of your hit test using its onPostExecute(Result) callback, which runs on the UI thread.
http://developer.android.com/reference/android/os/AsyncTask.html
Good luck with your project!
I'm making a game in canvas where most of my drawings so far are solid shapes and arcs. But I want to add effects like blur and shadow to create glow and trail effects.
My question is: is there a nice way, without external libraries, to cache the glowing element (player, enemy, etc.), and is it worth doing that instead of recreating the effects each time? The same goes for rotations: if I have about 40 different angles that I draw repeatedly as a player rotates their ship, should I just cache those calculations?
I'm currently rotating the arc endings using manual transforms instead of rotating the context, since I don't yet know whether that is more or less efficient than rotating the canvas repeatedly for the many onscreen elements and their different angles.
A general blur filter is not built into the 2D canvas API (though the context does offer shadowBlur and shadowColor), but you may create your own library or use external canvas libraries like easel.js; there are tons of different libraries built around the HTML5 <canvas> element. You may choose whichever is best suited for you.
For the blur effect, you may use Mario Klingemann's StackBlur JavaScript implementation: http://www.quasimondo.com/StackBlurForCanvas/StackBlurDemo.html. It's easy to use and super fast.
For a glow effect, a nice way to generate a random texture would be the simplex noise algorithm, already implemented in JavaScript: https://gist.github.com/304522, or the original one here: http://mrl.nyu.edu/~perlin/noise/
For the last question, I advise you to use the context save and restore functionality for better performance. It's much faster.
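A minimal sketch of that save/restore rotation (all names are illustrative):

```js
// Draw a sprite rotated around its own centre using context transforms,
// restoring the context afterwards so other draws are unaffected.
function drawRotated(ctx, image, x, y, angle) {
  ctx.save();
  ctx.translate(x, y); // move the origin to the sprite's centre
  ctx.rotate(angle);   // rotate the coordinate system
  ctx.drawImage(image, -image.width / 2, -image.height / 2);
  ctx.restore();       // undo translate + rotate for subsequent draws
}
```

If profiling shows the 40 angles are still too costly, pre-rendering each angle once to an offscreen canvas and blitting the nearest cached frame is a common alternative.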