So, I read this answer, and while it seems to work, it dramatically lowers my FPS, and generally my program can only run for 10 seconds before WebGL just completely crashes.
Basically, I'm trying to implement lighting in my 2D top-down game. I'm doing this by having two WebGL canvases. In the first one, each pixel ranges from pure black to pure white, and each pixel represents the intensity of the light at that location. The second WebGL canvas is just my regular game canvas where everything in the game is rendered. Then, during runtime, these two canvases are blended together every frame, to simulate lighting.
I'm not just using a single canvas because I'm not sure how to do it with just one. I cannot simply darken my game canvas and then selectively brighten lit areas, because the original pixel colors are lost from the darkening shader. So instead I'm trying to use two canvases. The first one accumulates all the lighting values, and then when all the lighting values are determined, there is a single blending operation that blends the two canvases.
Again, this is working, but it's very slow and prone to crashes. I think the issue is that I am creating two textures every frame (one of the lighting canvas, and one of the game canvas) and then uploading all the texture data to each texture, and then blending.
Is there a more efficient way to perform this operation? It seems uploading all this texture data every frame is just completely killing performance.
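Roughly, the per-frame blend step I'm doing looks like this (a minimal sketch; the texture and uniform names are placeholders, and the two texImage2D uploads from the canvases are the part I suspect is killing performance):

    // fragment shader that multiplies the game colour by the light intensity
    var blendFragmentSource =
        'precision mediump float;' +
        'uniform sampler2D u_game;' +
        'uniform sampler2D u_light;' +
        'varying vec2 v_texCoord;' +
        'void main() {' +
        '    vec4 game  = texture2D(u_game, v_texCoord);' +
        '    vec4 light = texture2D(u_light, v_texCoord);' +
        '    gl_FragColor = vec4(game.rgb * light.rgb, game.a);' +
        '}';

    // every frame: re-upload both canvases as textures, then draw a full-screen quad
    function blendFrame(gl, gameCanvas, lightCanvas, gameTex, lightTex) {
        gl.bindTexture(gl.TEXTURE_2D, gameTex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, gameCanvas);
        gl.bindTexture(gl.TEXTURE_2D, lightTex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, lightCanvas);
        gl.drawArrays(gl.TRIANGLES, 0, 6); // full-screen quad with the blend shader bound
    }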
I need to draw many images on a canvas. The canvas size is 800x600px, and I have many images of 256x256px (some are smaller) to draw on it; these small images will compose a complete image on the canvas. I have two ways to implement this.
First, if I use the canvas 2D context, that is context = canvas.getContext('2d'), then I can just use the context.drawImage() method to put every image at the proper location on the canvas.
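Roughly like this (a sketch; the tiles array and its positions are placeholders for my real data):

    var context = canvas.getContext('2d');
    // draw each small image at its precomputed position on the 800x600 canvas
    for (var i = 0; i < tiles.length; i++) {
        var tile = tiles[i]; // { image: HTMLImageElement, x: ..., y: ... }
        context.drawImage(tile.image, tile.x, tile.y);
    }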
The other way is to use WebGL to draw these images on the canvas. In this approach, for every small image I need to draw a rectangle of the same size as the image, placed at the proper location on the canvas, and then use the image as a texture to fill it.
Then I compared the performance of these two methods. Both reach 60 fps, and the animation (when I click or drag with the mouse, the canvas redraws many times) looks very smooth. So I compared their CPU usage. I expected that with WebGL the CPU usage would be lower, because the GPU would take on much of the drawing work, but the result is that the CPU usage looks almost the same. I tried to optimize my WebGL code, and I think it's good enough. Through Google I found that browsers such as Chrome and Firefox enable hardware acceleration by default, so I tried turning hardware acceleration off. Then the CPU usage of the first method became much higher.

So, my question is: since canvas 2D uses the GPU for acceleration, is it necessary for me to use WebGL just for 2D rendering? What is the difference between canvas 2D GPU acceleration and WebGL? They both use the GPU. Or is there another method to lower the CPU usage of the second approach? Any answer will be appreciated!
Canvas 2D is still supported in more places than WebGL, so if you don't need any other functionality, going with Canvas 2D means your page will work on browsers that have canvas but not WebGL (like old Android devices). Of course it will be slow on those devices and might fail for other reasons, like running out of memory if you have a lot of images.
Theoretically WebGL can be faster because the default for canvas 2D is that the drawing buffer is preserved, whereas for WebGL it's not. That means if you turn anti-aliasing off in WebGL, the browser has the option to double buffer, something it can't do with canvas 2D. Another optimization is that in WebGL you can turn off alpha, which means the browser has the option to turn off blending when compositing your WebGL with the page, again something it doesn't have the option to do with canvas 2D. (There are plans to allow turning off alpha for canvas 2D, but as of 2017/6 it's not widely supported.)
But, by option I mean just that. It's up to the browser to decide whether or not to make those optimizations.
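For example, when creating the WebGL context you can ask for those options; whether the browser actually takes advantage of them is still its call (a minimal sketch, with someCanvas standing in for whatever canvas you're using):

    // give the browser the option to skip page blending and to double buffer
    var gl = someCanvas.getContext('webgl', {
        alpha: false,                 // no alpha channel when compositing with the page
        antialias: false,
        preserveDrawingBuffer: false  // this is already the WebGL default
    });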
Otherwise, if you don't pick those optimizations, it's possible the two will be the same speed. I haven't personally found that to be the case, though. I've tried to do some drawImage-only things with canvas 2D and didn't get a smooth framerate, whereas I did with WebGL. It made no sense to me, but I assumed there was something going on inside the browser I was unaware of.
I guess that brings up the final difference. WebGL is low-level and well known. There's not much the browser can do to mess it up. Or to put it another way you're 100% in control.
With Canvas 2D, on the other hand, it's up to the browser what to do and which optimizations to make, and those might change with each release. I know for Chrome at one point any canvas under 256x256 was NOT hardware accelerated. Another example is what the canvas does when drawing an image. In WebGL you make the texture and you decide how complicated your shader is. In Canvas 2D you have no idea what it's doing. Maybe it's using a complicated shader that supports all the various canvas globalCompositeOperation modes, masking, and other features. Maybe for memory management it splits images into chunks and renders them in pieces. For each browser, as well as each version of the same browser, what it decides to do is up to that team, whereas with WebGL it's pretty much 100% up to you. There's not much they can do in the middle to mess up WebGL.
FYI: Here's an article detailing how to write a WebGL version of the canvas2d drawImage function and it's followed by an article on how to implement the canvas2d matrix stack.
I recently made a project in WebGL, using JavaScript and the 3D library three.js.
However, its performance is very poor: it is slow at the beginning and at best gets close to okay.
The objects of my game are: 1 car, 6 oranges, 161 cheerios, 1 table, 1 fork, 6 candles.
You control the car as in a racing game (WASD or arrow keys), driving it through a circuit bounded by cheerios. The car is composed of several three.js geometries (box, torus, cylinder, sphere). If an orange collides with the car, the player goes back to the beginning of the track and loses 1 life.
All oranges move in straight, uniform motion and can kill the car if they collide with it. The orange model is composed of three.js sphere and cylinder geometries.
The table is a cube scaled to be 300x1x300 in xyz coordinates.
Each candle is a pointlight source whose intensity varies to give a flickering sensation.
Besides the 6 pointlights, there is also ambient light and 1 directional light, all created with three.js.
The fork has a billboard-like behaviour, rotating so that it always points toward the currently active camera; it is represented by a plane.
Whenever an orange reaches the end of its trajectory and temporarily disappears, or the car finishes a lap, an explosion of particles occurs.
Each explosion can have several particles (at least 100), and each particle is a very small plane with billboard-like behaviour.
Upon the creation of an explosion, all its particles are individually created and added to the scene.
Each explosion also has a time to live in milliseconds, usually 1000. When it expires, the explosion is removed from the scene.
All objects in the game have their own textures, and not all textures have a "good" size, i.e., dimensions that are powers of 2 (32x32, 256x256, 1024x1024, etc.). Each texture is loaded with the deprecated method THREE.ImageUtils.loadTexture(URL).
Everything was built with three.js, from the scene, cameras and lights, to the meshes, geometries and materials.
I noticed that after adding so many cheerios the performance diminished dramatically, so the problem may be rooted in the large number of cheerios rendered each frame.
Since they all share the same model (a simple torus with a simple texture), is there any way of using only 1 model for all the cheerios (much like in OpenGL with VS libs)?
How can I improve its performance?
Tell me if you need more specific information regarding the problem.
Create a geometry. Then create the cheerio meshes. After creating each mesh, do not add it to the scene but merge it into the geometry with:
    var globalCheeriosGeometry = new THREE.Geometry();
    // create the 161 cheerio meshes and merge each one into the global geometry
    globalCheeriosGeometry.mergeMesh( cheeriosMesh );
Thus you will create one geometry containing all the cheerios in the scene. Then create one mesh with this geometry and add it to the scene. That will significantly reduce the number of draw calls in your scene.
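A fuller sketch of that pattern (the torus dimensions, material, and cheerioPositions array are placeholders for your own values):

    var globalCheeriosGeometry = new THREE.Geometry();
    var cheerioGeometry = new THREE.TorusGeometry( 5, 2, 8, 16 );                // placeholder sizes
    var cheerioMaterial = new THREE.MeshPhongMaterial( { map: cheerioTexture } ); // placeholder texture

    for ( var i = 0; i < 161; i++ ) {
        var cheerioMesh = new THREE.Mesh( cheerioGeometry, cheerioMaterial );
        cheerioMesh.position.copy( cheerioPositions[ i ] ); // place it on the track
        globalCheeriosGeometry.mergeMesh( cheerioMesh );    // bake it into the shared geometry
    }

    // one mesh and one draw call instead of 161
    scene.add( new THREE.Mesh( globalCheeriosGeometry, cheerioMaterial ) );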
I would guess it's something along the lines of you calling an expensive (in terms of computation power) three.js method too many times. I would profile your game first to determine whether the problem is CPU bound or GPU bound.
Besides the 6 pointlights, there is also ambient light and 1 directional light, all created with three.js.
Lighting calculations are expensive and have to be done per pixel for every light source; consider cutting down the number of light sources.
Each explosion can have several particles (at least 100), and each particle is a very small plane with billboard-like behaviour.
I hope this is done via a billboarded particle system and not as individual planes. Otherwise three.js will probably do one draw call per plane.
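A minimal sketch of a single-draw-call particle system in three.js (the particle count, texture, and material settings here are only illustrative):

    // all particles of one explosion live in a single BufferGeometry / Points object
    var positions = new Float32Array( particleCount * 3 );
    // ... fill positions with the initial particle locations ...
    var particleGeometry = new THREE.BufferGeometry();
    particleGeometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );

    var particleMaterial = new THREE.PointsMaterial( {
        size: 2,               // rendered as small screen-facing points
        map: particleTexture,  // placeholder sprite texture
        transparent: true
    } );

    // one object in the scene and one draw call instead of 100+ billboard planes
    var explosion = new THREE.Points( particleGeometry, particleMaterial );
    scene.add( explosion );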
I'm trying to project massive, 32k-resolution equirectangular maps on spheres.
Since a 32k texture is hardly accessible to older graphics cards that support 1k-2k sized textures, and scaling a 32k image down to 1k loses a tremendous amount of detail, I've resolved to split each source map by projecting it onto the 6 faces of a cubemap, so that more detail can be displayed on older cards.
However, I'm having trouble actually displaying these cubemapped spheres with three.js. I can set the MeshPhongMaterial.envMap to my cubemap, but of course this makes the sphere mesh reflect the texture instead.
An acceptable result can be produced by using ShaderMaterial along with ShaderLib['cube'] to "fake" a skysphere of some sort. But this drops all ability for lighting, normal mapping and all the other handy (and pretty) things possible with MeshPhongMaterial.
If at all possible, I'd like to not have to write an entire shader from scratch for such a simple tweak (switching one texture2D call to textureCube). Is there a way to coerce three.js to read the diffuse term from a cubemap instead of a 2D texture, or a simple way to edit the shader three.js uses internally?
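For reference, the unlit ShaderMaterial workaround boils down to roughly this (a sketch only; the cubeTexture uniform and the shader strings are mine, not something three.js provides out of the box):

    // unlit workaround: look the diffuse colour up in the cubemap by direction
    var material = new THREE.ShaderMaterial( {
        uniforms: { tCube: { value: cubeTexture } }, // placeholder CubeTexture
        vertexShader: [
            'varying vec3 vDir;',
            'void main() {',
            '    vDir = normalize( position );', // direction from the sphere centre in object space
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );',
            '}'
        ].join( '\n' ),
        fragmentShader: [
            'uniform samplerCube tCube;',
            'varying vec3 vDir;',
            'void main() {',
            '    gl_FragColor = textureCube( tCube, vDir );', // no lighting, no normal mapping
            '}'
        ].join( '\n' )
    } );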
Question:
Using WebGL, what is the most efficient way I can draw a bitmap on the offscreen buffer given that a raster operation must be performed for every pixel/fragment? Raster operations may vary between draw calls.
Context:
As part of the application I am developing, I have to develop a function used to copy a bitmap onto a destination buffer. The bitmap can be smaller than the canvas and is placed using offsets for the X and Y co-ordinates. Also, given a ROP code, a raster operation can be performed when copying the bitmap. The full list of ROPs can be found here.
In OpenGL ES 1.x, there was a function called glLogicOp() which would perform the required binary operations when drawing. However, since WebGL is based on OpenGL ES 2.0, this function does not exist. I cannot yet think of a straightforward way to solve this problem. I have a few ideas, but I am not yet confident they will offer optimal performance.
Possible solutions:
1. Perform the ROPs in JavaScript by keeping a typed array with the colour data for the offscreen buffer. This, however, would rely on the CPU to compute the ROP for each pixel. On draw, I would upload the offscreen data to a texture and draw it on the offscreen buffer using a simple shader.
2. Use the ping-pong texture technique described here. Having 2 textures, I could sample from one and write to the other; however, I would be forced to draw the whole offscreen buffer on every draw call, even when the bitmap to be drawn covers only a small fraction of the canvas. A possible optimization would be to switch the textures only when an ROP dependent on the destination pixel is used (a rough sketch of this swap is shown after this list).
3. Use readPixels() to read data from the offscreen buffer before a draw call requiring an ROP dependent on the destination pixel. The data of the pixels on which the bitmap will be drawn is read, uploaded to a texture, and passed to the shader together with the bitmap itself to be used for sampling. The readPixels function, however, is said to be one of the slowest WebGL functions since it requires synchronization between the CPU and GPU.
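The ping-pong swap for option 2 would look roughly like this (a sketch only; the two framebuffer/texture pairs, the ROP shader program, and the full-screen quad setup are assumed to already exist):

    // "read" holds the current offscreen image, "write" receives the new drawing
    var read  = { tex: texA, fbo: fboA };
    var write = { tex: texB, fbo: fboB };

    function drawBitmapWithRop(gl, bitmapTex, ropProgram) {
        gl.bindFramebuffer(gl.FRAMEBUFFER, write.fbo);
        gl.useProgram(ropProgram);                  // shader implementing the current ROP
        gl.activeTexture(gl.TEXTURE0);
        gl.bindTexture(gl.TEXTURE_2D, read.tex);    // current destination pixels
        gl.activeTexture(gl.TEXTURE1);
        gl.bindTexture(gl.TEXTURE_2D, bitmapTex);   // source bitmap
        gl.drawArrays(gl.TRIANGLES, 0, 6);          // full-offscreen quad
        var tmp = read; read = write; write = tmp;  // swap roles for the next draw
    }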
EDIT
Using the ping-pong texture technique, I think I will have to draw the whole screen every time since, for every drawing operation, in order to sample the destination pixel correctly I would need a representation of the current offscreen buffer (the screen before drawing). Unless I draw the whole screen every time, both textures would only represent alternate drawings rather than the real offscreen image.
It is easier to show this visually but consider this example:
1) I draw a rectangle on the left half of the screen with the first draw call having tex1 as sampling source and tex2 as drawing destination.
2) I draw a rectangle on the right half of the screen with the second draw call having tex2 as sampling source and tex1 as drawing destination.
3) When I try to draw a horizontal line across the whole screen on the next draw call, since tex1 (source) does not contain the rectangle drawn in the first draw call, sampling from the left part of the screen would result in wrong colors of destination pixels.
Well, the way I think three.js handles rendering is that it turns each element into an image and then draws it onto a context.
Where can I get more information on that?
And if I am right, is there a way to get that image and manipulate it?
Any information will be appreciated.
Three.js internally has a description of what the scene looks like in 3D space, including all the vertices and materials among other things. The rendering process takes that 3D representation and projects it into a 2D space. Three.js has several renderers, including WebGLRenderer (the most common), CanvasRenderer, and CSS3DRenderer. They all use different methods to draw that 2D projection:
WebGLRenderer uses JavaScript APIs that map to OpenGL graphics commands. As much as possible, the client's GPU takes parts of the 3D representation and more or less performs the 2D projection itself. As each geometry is rendered, it is painted onto a buffer. The complete buffer is a frame, which is the complete 2D projection of the 3D space that shows up in your <canvas>.
CanvasRenderer uses JavaScript APIs for 2D drawing. It does the 2D projection internally (which is slower) but otherwise works similarly to the WebGLRenderer at a high level.
CSS3DRenderer uses DOM elements and CSS3D transforms to represent the 3D scene. This roughly means that the browser takes normal 2D DOM elements and transforms them into 3D space to match the Three.js 3D internal representation, then projects them back onto the page in 2D.
(All this is highly simplified.)
It's important to understand that the frame rendered by the WebGL and Canvas renderers is the resulting picture you see on your screen, but it's not an <img>. Typically, your browser will render 60 frames per second. You can extract a frame by dumping the <canvas> into an image. Typically you'll want to stop the animation loop to do this, as otherwise you might not capture the frame you want. Capturing frames this way is slow, and given that your browser renders so many frames per second, there is no easy way to capture every frame.
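For example (assuming renderer is your WebGLRenderer; note that without preserveDrawingBuffer: true you generally have to capture right after rendering, before the drawing buffer is cleared):

    // render a frame, then grab the canvas contents right away
    renderer.render(scene, camera);
    var dataURL = renderer.domElement.toDataURL('image/png');

    var img = new Image();
    img.src = dataURL; // the captured frame as a normal image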
Additionally, Chrome has built-in canvas inspection tools which allow you to take a closer look at each frame the browser paints.
You can't easily intercept the buffer as Three.js is rendering the frame, but you can draw directly onto the canvas as you normally would. renderer.context is the graphics context that Three.js draws onto, where renderer is the Renderer instance you create when setting up a Three.js scene. (A graphics context is basically a helper to assemble the buffer that makes up the frame.)