Raster Operations (ROP) in WebGL (no glLogicOp) - javascript

Question:
Using WebGL, what is the most efficient way I can draw a bitmap on the offscreen buffer given that a raster operation must be performed for every pixel/fragment? Raster operations may vary between draw calls.
Context:
As part of the application I am developing, I have to develop a function used to copy a bitmap onto a destination buffer. The bitmap can be smaller than the canvas and is placed using offsets for the X and Y co-ordinates. Also, given a ROP code, a raster operation can be performed when copying the bitmap. The full list of ROPs can be found here.
In OpenGL ES 1.x, there was a function called glLogicOp() which would perform the required binary operations when drawing. However, since WebGL is based on OpenGL ES 2.0, this function does not exist. I cannot yet think of a straightforward way to solve this problem. I have a few ideas, but I am not yet confident they will offer optimal performance.
Possible solutions:
Perform the ROPs in Javascript by keeping a typed array with the colour data for the offscreen buffer. This however would rely on the CPU to compute the ROP for each pixel. On draw I would upload the offscreen data to a texture and draw it on the offscreen buffer using a simple shader.
Use the ping-pong texture technique described here. Having 2 textures I could sample from one and write on the other, however, I would be forced to draw the whole offscreen buffer on every draw call, even when the bitmap to be drawn covers only a small fraction of the canvas. A possible optimization would be to switch the textures only when an ROP dependent on the destination pixel is used.
Use readPixels() to read data from the offscreen buffer before a draw call requiring an ROP dependent on the destination pixel. The data of the pixels on which the bitmap will be drawn is read, uploaded to a texture and passed to the shader together with the bitmap itself to be used for sampling. The readPixels function however is said to be one of the slowest WebGL functions since it requires synchronization between the CPU and GPU.
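For illustration, here is a minimal sketch of what the ROP shader for the ping-pong approach could look like. It assumes a WebGL2 context (GLSL ES 3.00 has integer bitwise operators; under WebGL1/GLSL ES 1.00 the bit operations would have to be emulated with float arithmetic), and the u_rop encoding below is purely hypothetical:

```javascript
// Fragment shader sketch: sample the bitmap and the current destination
// (the other ping-pong texture) and combine them with a selectable ROP.
const ropFragmentShader = `#version 300 es
precision highp float;
precision highp int;

uniform sampler2D u_bitmap;      // source bitmap being copied
uniform sampler2D u_destination; // current offscreen contents
uniform int u_rop;               // hypothetical ROP selector

in vec2 v_uv;
out vec4 outColor;

ivec3 toBytes(vec3 c)   { return ivec3(c * 255.0 + 0.5); } // 0..1 -> 0..255
vec3  toFloats(ivec3 c) { return vec3(c) / 255.0; }        // 0..255 -> 0..1

void main() {
  ivec3 src = toBytes(texture(u_bitmap, v_uv).rgb);
  ivec3 dst = toBytes(texture(u_destination, v_uv).rgb);
  ivec3 result = dst;
  if      (u_rop == 0) result = src;        // SRCCOPY
  else if (u_rop == 1) result = src ^ dst;  // SRCINVERT (XOR)
  else if (u_rop == 2) result = src & dst;  // SRCAND
  else if (u_rop == 3) result = src | dst;  // SRCPAINT (OR)
  outColor = vec4(toFloats(result), 1.0);
}`;
```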
EDIT
Using the ping-pong texture technique, I think I will have to draw the whole screen every time: in order to sample the destination pixel correctly for each drawing, I would need a representation of the current offscreen buffer (the screen before drawing). Unless I draw the whole screen every time, both textures would only represent alternate drawings rather than the real offscreen image.
It is easier to show this visually but consider this example:
1) I draw a rectangle on the left half of the screen with the first draw call having tex1 as sampling source and tex2 as drawing destination.
2) I draw a rectangle on the right half of the screen with the second draw call having tex2 as sampling source and tex1 as drawing destination.
3) When I try to draw a horizontal line across the whole screen on the next draw call, since tex1 (the source) does not contain the rectangle drawn in the first draw call, sampling on the left half of the screen would yield the wrong destination-pixel colors.
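One possible way around redrawing everything (my own assumption, not something from the original question) would be to keep the two ping-pong textures in sync by copying only the dirty rectangle back into the source texture after each draw, e.g. with copyTexSubImage2D:

```javascript
// Sketch: after drawing the bitmap into the destination texture's framebuffer,
// copy just that region back into the other texture so both stay identical
// outside of the area currently being drawn.
function syncDirtyRegion(gl, destFramebuffer, srcTexture, x, y, width, height) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, destFramebuffer); // read from what was just drawn
  gl.bindTexture(gl.TEXTURE_2D, srcTexture);           // write into the other ping-pong texture
  gl.copyTexSubImage2D(gl.TEXTURE_2D, 0, x, y, x, y, width, height);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
```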

Related

How to blend two WebGL canvases in real-time?

So, I read this answer, and while it seems to work, it dramatically lowers my FPS, and generally my program can only run for 10 seconds before WebGL just completely crashes.
Basically, I'm trying to implement lighting in my 2D top-down game. I'm doing this by having two WebGL canvases. In the first one, each pixel ranges from pure black to pure white, and each pixel represents the intensity of the light at that location. The second WebGL canvas is just my regular game canvas where everything in the game is rendered. Then, during runtime, these two canvases are blended together every frame, to simulate lighting.
I'm not just using a single canvas because I'm not sure how to do it with just one. I cannot simply darken my game canvas and then selectively brighten lit areas, because the original pixel colors are lost from the darkening shader. So instead I'm trying to use two canvases. The first one accumulates all the lighting values, and then when all the lighting values are determined, there is a single blending operation that blends the two canvases.
Again, this is working, but it's very slow and prone to crashes. I think the issue is that I am creating two textures every frame (one from the lighting canvas and one from the game canvas), uploading all the texture data to each texture, and then blending.
Is there a more efficient way to perform this operation? It seems uploading all this texture data every frame is just completely killing performance.
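In case it helps to make the suspicion concrete, here is a minimal sketch (not from the original thread) of reusing the same texture objects instead of creating new ones each frame; only the upload happens per frame:

```javascript
// Create the texture once, outside the render loop.
function createCanvasTexture(gl, canvas) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
  return tex;
}

// Each frame, re-upload the canvas contents into the existing texture object.
function updateCanvasTexture(gl, tex, canvas) {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
}
```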

Create path from (inner and outer) edges of a blob-like image

I'm looking for a way to create a svg like path from a binary image (only black and white pixels). The image itself will be a blob with irregular shape that could have holes in it.
Without holes I only need a bounding path that recreates the border of the blob. When there are holes in the blob, I'm fine with additional paths (as one path alone won't be able to recreate this, I guess). At the end I just need to know which path is the outer one and which are the holes.
I already found these:
How to add stroke/outline to transparent PNG image in JavaScript canvas
Creating a path from the edge of an image
How can I find hole in a 2D matrix?
Additionally I need the detection of holes. It doesn't really matter to me if the result is a polygon or a path. I just need the points with high enough accuracy that curves keep being curvy :)
It would be great if someone has an idea or even some further sources.
PS: I'm working with canvas and javascript (fabricJS) if this makes any difference.
Finally I successfully went with the other option as markE described (although it's a bit modified). I'm using the Marching Squares Algorithm (MSA) and the Floodfill Algorithm (FFA) to achieve this. Simplifying the resulting points is done via Douglas-Peucker Algorithm (DPA).
MSA: https://stackoverflow.com/a/25875512/2577116
FFA: http://www.williammalone.com/articles/html5-canvas-javascript-paint-bucket-tool/
DPA: https://stackoverflow.com/a/22516982/2577116
(Smoothing: https://stackoverflow.com/a/7058606/2577116)
I put everything together in this jsFiddle.
Steps:
get path object after user finished free drawing
create image from path via base64 dataURL
convert to binary image (only 0 and 255 pixel, no transparency)
apply FFA on position 0,0 with random color, save color
go to next pixel
if pixel has known floodfill color or path color (black), move on to next
otherwise floodfill with new random color, save color
move over all pixels, repeating 5.-7.
remove saved color on index 1 (it's the color surrounding the path contour (padding), so it's neither the path nor a hole)
for all other colors apply MSA and simplify resulting points (with DPA)
Either create polygons from simplified points OR ...
... smooth points and create path
add to canvas, remove input path
DONE :)
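As a rough illustration of steps 4-8 (a sketch under the same assumptions: binary image, path pixels black, background white), labelling each white region with its own flood-fill value could look like this:

```javascript
// Scan the pixels and flood fill every unlabelled white region with its own
// grey value; the collected labels then correspond to padding + holes.
function labelRegions(imageData) {
  const { width, height, data } = imageData;
  const labels = [];
  let nextLabel = 1; // 0 is reserved for the black path itself

  const fill = (startX, startY, label) => {
    const stack = [[startX, startY]];
    while (stack.length) {
      const [x, y] = stack.pop();
      if (x < 0 || y < 0 || x >= width || y >= height) continue;
      const i = (y * width + x) * 4;
      if (data[i] !== 255) continue; // not white: already labelled or part of the path
      data[i] = data[i + 1] = data[i + 2] = label;
      stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
    }
  };

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (data[(y * width + x) * 4] === 255) {
        fill(x, y, nextLabel);
        labels.push(nextLabel++);
      }
    }
  }
  return labels; // labels[0] is the padding region around the contour, the rest are holes
}
```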
For simpler code my random color at the moment only creates shades of grey. R=G=B and A=255 allows for simpler checks. On the other hand this solution limits the contour to have max. 254 holes (256 shades of grey - path color (0) - padding color (no hole)). If one needs more it's no problem to extend the code to create random values for R, G, B and even A. Don't forget to adapt the color checks accordingly ;)
The whole algorithm may not be optimized for performance but honestly I see no need to do so at the moment. It's fast enough for my use-case. Anyway, if someone has a hint regarding optimization I'm glad to hear/read about :)
Best Option
If you drew the Blobs with your code, then the simplest & best way is to decompose each blob (and sub-blob) into its component Bezier curves. FabricJS is open source so you can see how they create the curves -- and therefore how you can decompose the curves. The result will be a dozen or so Bezier curves that are easy to redraw or navigate. If you need help navigating Bezier curves, see this tutorial covering Navigating along a Path.
Other Option
You will need to get the pixel information, so you will need to context.drawImage your Fabric blob onto a native canvas and use context.getImageData to fetch the pixel information.
Assuming:
All pixels are either white or black.
The blob is black: rgba(0,0,0,255)
Outside the blob is white: rgba(255,255,255,255)
The holes in the blob are white: rgba(255,255,255,255)
A plan to find the blob & hole paths:
Load the imageData: context.getImageData(0,0,canvas.width,canvas.height)
Find a white pixel on the perimeter of the image.
Use a FloodFill Algorithm (FFA) to replace the outer white with transparency.
Use the Marching Squares Algorithm (MSA) to find the outermost blob perimeter and save that blob path.
Use a Floodfill Algorithm to fill the blob you've discovered in #4 with transparency. This makes the outer blob "invisible" to the next round of MSA. At this point you only have white holes -- everything else is transparent.
Use the Marching Squares Algorithm (MSA) to find the perimeter of the next white hole and save that hole path.
Use a Floodfill algorithm to fill the white hole in #6 with transparency. This makes this hole invisible to the next round of MSA.
Repeat #6 & #7 to find each remaining white hole.
If MSA reports no pixels you're done.
For efficiency, you can repeatedly use the imageData from Step#1 in the subsequent steps. You can abandon the imageData when you have completed all the steps.
Since blobs are curves, you will find your blob paths contain many points. You might use a path point reduction algorithm to simplify those many points into fewer points.
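As an aside, such a point reduction step can be as small as this sketch of the Ramer-Douglas-Peucker algorithm (the same idea as the DPA mentioned earlier; the tolerance value is up to you):

```javascript
// Collapse nearly-collinear runs of points while keeping the outline within
// the given tolerance (in pixels).
function simplifyPath(points, tolerance) {
  if (points.length < 3) return points.slice();
  const first = points[0];
  const last = points[points.length - 1];

  // Perpendicular distance from p to the infinite line through a and b.
  const lineDist = (p, a, b) => {
    const dx = b.x - a.x, dy = b.y - a.y;
    const len = Math.hypot(dx, dy);
    if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
    return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
  };

  // Find the point that deviates most from the first-last chord.
  let maxDist = 0, maxIndex = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = lineDist(points[i], first, last);
    if (d > maxDist) { maxDist = d; maxIndex = i; }
  }

  // Within tolerance: the whole run collapses to its endpoints.
  if (maxDist <= tolerance) return [first, last];

  // Otherwise split at the farthest point and simplify both halves.
  const left = simplifyPath(points.slice(0, maxIndex + 1), tolerance);
  const right = simplifyPath(points.slice(maxIndex), tolerance);
  return left.slice(0, -1).concat(right);
}
```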

Vectorize canvas drawing

I'm working on image editing in JavaScript. I have to create masks with different tools (rectangle, brush, magic wand...) and save them to the database. To avoid sending all the pixel coordinates, I'd like to vectorize the mask and send only the vertices.
How can I get those vertices from the canvas context?
The canvas is a raster surface. If you are treating it as a vector surface then you will need to record all of the operations applied in a separate data structure and send that to the server.
You may find that using SVG makes more sense than canvas in your case.
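A minimal sketch of that idea (hypothetical operation names; adapt to whatever tools you expose) could look like this:

```javascript
// Record every drawing operation as a small vector description while still
// rasterizing it onto the canvas; only the operation list goes to the server.
const ops = [];

function drawRect(ctx, x, y, w, h) {
  ops.push({ type: 'rect', x, y, w, h });
  ctx.fillRect(x, y, w, h);
}

function drawPolyline(ctx, points) {
  ops.push({ type: 'polyline', points });
  ctx.beginPath();
  points.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
  ctx.stroke();
}

// Later, send the vertices rather than pixels:
// fetch('/mask', { method: 'POST', body: JSON.stringify(ops) });
```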

Get the image of an element that gets drawn in the three.js while rendering

Well, the way I think three.js handles the render is that it turns each element into an image and then draws it on a context.
Where can I get more information on that?
And if I am right, is there a way to get that image and manipulate it?
Any information will be appreciated.
Three.js internally has a description of what the scene looks like in 3D space, including all the vertices and materials among other things. The rendering process takes that 3D representation and projects it into a 2D space. Three.js has several renderers, including WebGLRenderer (the most common), CanvasRenderer, and CSS3DRenderer. They all use different methods to draw that 2D projection:
WebGLRenderer uses JavaScript APIs that map to OpenGL graphics commands. As much as possible, the client's GPU takes parts of the 3D representation and more or less performs the 2D projection itself. As each geometry is rendered, it is painted onto a buffer. The complete buffer is a frame, which is the complete 2D projection of the 3D space that shows up in your <canvas>.
CanvasRenderer uses JavaScript APIs for 2D drawing. It does the 2D projection internally (which is slower) but otherwise works similarly to the WebGLRenderer at a high level.
CSS3DRenderer uses DOM elements and CSS3D transforms to represent the 3D scene. This roughly means that the browser takes normal 2D DOM elements and transforms them into 3D space to match the Three.js 3D internal representation, then projects them back onto the page in 2D.
(All this is highly simplified.)
It's important to understand that the frame rendered by the WebGL and Canvas renderers is the resulting picture that you see on your screen, but it's not an <img>. Typically, your browser will render 60 frames per second. You can extract a frame by dumping the <canvas> into an image. Typically you'll want to stop the animation loop in order to do this, as otherwise you might not be capturing the frame you want. Capturing frames this way is slow, and given that your browser is rendering so many frames per second, there is no easy way to capture every frame.
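For example, a frame can be captured roughly like this (a sketch; preserveDrawingBuffer, or calling toDataURL immediately after render(), is needed so the drawing buffer hasn't been cleared yet):

```javascript
// Assumes a standard Three.js setup with a scene and camera already built.
const renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

function captureFrame(scene, camera) {
  renderer.render(scene, camera);                             // draw the current frame
  const dataURL = renderer.domElement.toDataURL('image/png'); // dump the <canvas>
  const img = new Image();
  img.src = dataURL;                                          // the frame as a regular image
  return img;
}
```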
Additionally, Chrome has built-in canvas inspection tools which allow you to take a closer look at each frame the browser paints.
You can't easily intercept the buffer as Three.js is rendering the frame, but you can draw directly onto the canvas as you normally would. renderer.context is the graphics context that Three.js draws onto, where renderer is the Renderer instance you create when setting up a Three.js scene. (A graphics context is basically a helper to assemble the buffer that makes up the frame.)

HTML 5 Canvas - Get pixel data for a path

I wonder if there is any way I can get pixel data for the currently drawn path in the canvas tag.
I can calculate the pixel data on my own when drawing simple shapes like square or a line, but things get messy with more complicated shapes like ellipse or even a simple circle.
The reason I'm asking this is because I'm working on a web application which involves sending canvas pixel data to the server when I add a path to the canvas. The server needs to keep its own copy of the entire canvas, and I really don't want to send the ENTIRE canvas image on every single change, but only the delta, for efficiency reasons...
Thanks.
Step 1: Draw the paths to a second canvas of the same size as your original; let's call it the 'path canvas'.
Step 2: Set the globalCompositeOperation of the path canvas to 'destination-in'.
Step 3: Draw your original canvas onto the path canvas.
Step 4: Loop through all the pixels of the path canvas and store the pixels that are not transparent in whatever format you're sending them to the server.
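Put together, the four steps could look roughly like this sketch (assuming the path has already been drawn as opaque pixels on pathCanvas; if you want the original canvas's blended colors inside the path rather than the path canvas's own pixels, 'source-in' is the variant to use in step 2):

```javascript
function getPathPixelDelta(originalCanvas, pathCanvas) {
  const ctx = pathCanvas.getContext('2d');

  // Steps 2-3: composite the original canvas against the drawn path.
  ctx.globalCompositeOperation = 'destination-in';
  ctx.drawImage(originalCanvas, 0, 0);

  // Step 4: collect only the non-transparent pixels as the delta to send.
  const { width, height } = pathCanvas;
  const data = ctx.getImageData(0, 0, width, height).data;
  const delta = [];
  for (let i = 0; i < data.length; i += 4) {
    if (data[i + 3] !== 0) {
      const p = i / 4;
      delta.push({
        x: p % width,
        y: Math.floor(p / width),
        rgba: [data[i], data[i + 1], data[i + 2], data[i + 3]],
      });
    }
  }
  return delta;
}
```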
