I'm migrating my project from the JavaScript Canvas 2D API to JavaScript WebGL.
In some portions of my code I used Canvas's globalCompositeOperation for things such as lighting and fog.
Now that I am using WebGL I no longer have access to these, but instead have access to blendFunc. The global composite operation I am specifically looking to migrate from Canvas to WebGL is soft-light, but there are two problems:
I don't see that item on the list of blend functions
I'm not sure where to find the source for soft-light, so I do not know how to emulate it.
I suppose my question is two-part. One, are there any sources that list how the various global composite operations were implemented, and if so, are there any corresponding WebGL implementations? And if not, how can I convert soft-light to a WebGL blending function for use in my application?
That is, in Canvas I would do something like this:
context.globalCompositeOperation = 'soft-light';
// draw images
context.globalCompositeOperation = 'source-over'; // reset back to default
What would be the equivalent in WebGL? Something like this (it's close, but not accurate):
gl.blendFunc(gl.DST_COLOR, gl.ONE_MINUS_SRC_ALPHA);
// draw images
gl.blendFunc(gl.ONE, gl.ZERO); // reset back to default
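For reference, soft-light is a non-linear blend, so it cannot be expressed with gl.blendFunc alone. The formula is given in the W3C compositing-and-blending spec; in WebGL you would render the destination into a texture and apply the blend in a fragment shader. Here is a minimal sketch of such a shader (u_src and u_dst are assumed texture names, and the alpha handling is simplified):

// Soft-light blend per the W3C compositing spec, as a GLSL fragment shader.
// u_dst holds what is already drawn; u_src is the image being blended on top.
const softLightFS = `
  precision mediump float;
  uniform sampler2D u_src;
  uniform sampler2D u_dst;
  varying vec2 v_texCoord;

  float softLight(float Cb, float Cs) {
    float d = (Cb <= 0.25) ? ((16.0 * Cb - 12.0) * Cb + 4.0) * Cb : sqrt(Cb);
    return (Cs <= 0.5)
      ? Cb - (1.0 - 2.0 * Cs) * Cb * (1.0 - Cb)
      : Cb + (2.0 * Cs - 1.0) * (d - Cb);
  }

  void main() {
    vec4 src = texture2D(u_src, v_texCoord);
    vec4 dst = texture2D(u_dst, v_texCoord);
    gl_FragColor = vec4(softLight(dst.r, src.r),
                        softLight(dst.g, src.g),
                        softLight(dst.b, src.b),
                        src.a + dst.a * (1.0 - src.a)); // simplified source-over alpha
  }`;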
I'm working on a project and would like to use both p5.js and fabric.js on the same canvas. I need the functionality of fabric.js to drag around pictures on the canvas and p5.js to dynamically draw lines between the pictures as they're being dragged. I'm not sure if this is possible because it seems like both have their own separate canvas creation functions
p5.js
createCanvas(100, 50);
fabric.js
canvas = new fabric.Canvas('c');
The fabric line class seems a little too rigid to accomplish the effect I'm after, so I'm looking for either an idea for a workaround or a different library that would be better for drawing dynamic lines on a fabric canvas.
It is not possible to combine both of them on one canvas element. These libraries take full control of the canvas they are given. Even if you were able to initialise both libraries on the same canvas element, you would lose whatever was displayed by p5.js on the first Fabric.js canvas.renderAll() call.
I don't know exactly what your problem is, but as an alternative I think you could try to have two canvases stacked on top of each other (the canvas element is transparent by default), one running Fabric.js and one running p5.js, and have them interact with each other somehow. That will add some additional complexity.
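As a rough sketch of that two-canvas idea (the element id, canvas size, and drawing logic below are made up for illustration), you could absolutely position a p5.js canvas over the Fabric.js canvas and let pointer events fall through to Fabric:

// Fabric.js below for dragging images, p5.js above for dynamic lines.
const fabricCanvas = new fabric.Canvas('fabric-canvas');   // <canvas id="fabric-canvas"> in the page

const sketch = (p) => {
  p.setup = () => {
    const overlay = p.createCanvas(800, 600);
    overlay.position(0, 0);                    // stack directly over the Fabric canvas
    overlay.style('pointer-events', 'none');   // let the mouse reach Fabric underneath
  };
  p.draw = () => {
    p.clear();                                 // keep the overlay transparent
    const objs = fabricCanvas.getObjects();
    for (let i = 1; i < objs.length; i++) {    // connect consecutive objects with lines
      p.line(objs[i - 1].left, objs[i - 1].top, objs[i].left, objs[i].top);
    }
  };
};
new p5(sketch);

Because p5 redraws every frame, the lines follow the pictures while Fabric handles the dragging.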
I am trying out high-speed image display in HTML5/JS. I am fairly new to HTML5/JS and am trying to evaluate the options available for rendering images to the screen.
Input is an array of pixels.
I played around with CanvasRenderingContext2D, using a canvas and putImageData to render pixels onto the screen, but I ended up with a lower FPS as the image size increased. The use case I want to address is rendering raw pixel arrays onto the screen, not drawing shapes. My target is to render at the highest possible FPS.
var imageCanvas = document.getElementById('imageCanvas');
var imageContext = imageCanvas.getContext('2d', { alpha: false });
var pixelData = imageContext.createImageData(imageWidth, imageHeight);
// ---------------------------------------------------------
// here: Fill up pixelData with image pixels to be rendered
// ---------------------------------------------------------
imageContext.putImageData(pixelData, 0, 0);
For such a use case, how should I proceed? I have read in many places that putImageData is the bottleneck here, and that a better way would be to use drawImage. Is it possible to avoid putImageData and instead use drawImage to read from pixelData and render?
How efficiently can I use GPU?
Does WebGLRenderingContext suit this scenario?
For high-performance graphics on an HTML canvas there is really only one way to go: WebGL, which is based on OpenGL ES. Writing raw WebGL can be tedious, so you may want to use a library such as Three.js or PixiJS.
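For the raw-pixel-array use case in the question, the WebGL route is to upload the array as a texture each frame and draw a full-screen quad. A minimal, unoptimised sketch (the element id is the one from the question, pixel format is assumed to be RGBA bytes, and shader error checking is omitted):

// Upload a raw RGBA pixel array as a texture every frame and draw it
// with a full-screen quad made of two triangles.
var canvas = document.getElementById('imageCanvas');
var gl = canvas.getContext('webgl');

var vsSource = `
  attribute vec2 a_position;
  varying vec2 v_texCoord;
  void main() {
    v_texCoord = a_position * 0.5 + 0.5;   // clip space [-1,1] -> texture [0,1]
    gl_Position = vec4(a_position, 0.0, 1.0);
  }`;
var fsSource = `
  precision mediump float;
  uniform sampler2D u_image;
  varying vec2 v_texCoord;
  void main() { gl_FragColor = texture2D(u_image, v_texCoord); }`;

function compileShader(type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}
var program = gl.createProgram();
gl.attachShader(program, compileShader(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compileShader(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// Two triangles covering the whole canvas.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
var positionLoc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);   // keep row 0 at the top, like 2D canvas

// Call once per frame with a Uint8Array of width * height * 4 RGBA bytes.
function renderPixels(pixels, width, height) {
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}

The per-frame cost is then a single texture upload and one draw call; putImageData is not involved at all.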
I thought this would be an easy question to find an answer for, but it's turning out to be very elusive. Basically, I'm trying to use WebGL to do the heavy lifting on some image processing and generation tasks, and I want it to work offscreen. Ideally, I'd like WebGL to render a scene to a framebuffer that I can gl.readPixels() from, or to the webgl canvas so I can use it as a source for context.drawImage(). The thing is, I don't want to display the webgl canvas itself, I only want to copy portions of it to a visible canvas that I have a regular "2d" context for. So far, I can't seem to get it to work without the following initialization, which seems like an ugly hack to me:
glCanvas = document.createElement('canvas');
glCanvas.width = 256;
glCanvas.height = 256;
document.body.appendChild(glCanvas); // seems necessary, but why?
glCanvas.style.visibility = 'hidden'; // ugh, is this the only way?
Currently, I'm rendering a scene to a framebuffer I create with gl.createFramebuffer(), using a single gl.drawArrays() call, and then using gl.readPixels() to get the result out to an ImageData. With the glCanvas attached to the DOM it works exactly as planned. But if I leave off those last two lines of code above, where I attach the glCanvas and hide it, I get no image when I try to readPixels from the framebuffer I've rendered to.
I don't understand why the glCanvas needs to be attached to the DOM, if it's not even being rendered to. I thought that rendering to a framebuffer happens entirely on the graphics card and that the buffer will persist for me to use as a texture. Is it simply the case that WebGL doesn't render at all unless there is at least one webgl-context canvas attached to the DOM? Or am I missing some other detail? Thanks!
You don't have to have the canvas in the DOM to use WebGL. There are plenty of tests in the WebGL Conformance Tests that run without ever adding a canvas to the DOM:
var gl = document.createElement("canvas").getContext("webgl");
gl.clearColor(1,0,1,1); // purple;
gl.clear(gl.COLOR_BUFFER_BIT);
// draw in 2d canvas
var ctx = document.querySelector("#c2d").getContext("2d");
ctx.drawImage(gl.canvas, 10, 20, 30, 40);
<canvas id="c2d"></canvas>
Maybe you should post some more code. You probably have some other bug.
Update
Coming soon is also the OffscreenCanvas API. As of 2017/11/15 it's available in both Firefox and Chrome behind a flag. It allows using WebGL in a Web Worker.
Update 2
As of Chrome 69 OffscreenCanvas is shipping without a flag.
It seems like you want to use an OffscreenCanvas.
You can learn more here.
You should do something like this:
const offscreen = document.querySelector('canvas').transferControlToOffscreen();
const worker = new Worker('myworkerurl.js');
worker.postMessage({canvas: offscreen}, [offscreen]);
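The worker side would then look something like this (myworkerurl.js is the name used above; the clear color is just a smoke test):

// myworkerurl.js — minimal sketch of the worker side
self.onmessage = (e) => {
  const canvas = e.data.canvas;            // the transferred OffscreenCanvas
  const gl = canvas.getContext('webgl');
  gl.clearColor(0, 0, 1, 1);               // clear to blue
  gl.clear(gl.COLOR_BUFFER_BIT);
  // anything drawn here shows up on the on-screen canvas that transferred control
};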
These days, I need to draw many images on a canvas. The canvas size is 800x600px, and I have many images of 256x256px (some are smaller) to draw on it; these small images compose a complete image on the canvas. I have two ways to implement this.
First, if I use the canvas 2D context, that is context = canvas.getContext('2d'), then I can just use the context.drawImage() method to put every image at the proper location on the canvas.
Another way, I use WebGL to draw these images on the canvas. In this way, for every small image, I need to draw a rectangle. The size of the rectangle is the same as the small image, and the rectangle is at the proper location on the canvas. Then I use the image as a texture to fill it.
Then I compared the performance of these two methods. Both reach 60 fps, and the animation (when I click or drag with the mouse, the canvas redraws many times) looks very smooth. So I compared their CPU usage. I expected that with WebGL the CPU usage would be lower, because the GPU would take on much of the drawing work, but the result is that the CPU usage looks almost the same. I tried to optimize my WebGL code, and I think it's good enough. Through Google I found that browsers such as Chrome and Firefox enable hardware acceleration by default, so I tried turning hardware acceleration off; then the CPU usage of the first method became much higher.
So, my question is: since canvas 2D uses the GPU for acceleration, is it necessary for me to use WebGL just for 2D rendering? What is the difference between canvas 2D GPU acceleration and WebGL? They both use the GPU. Is there any other method to lower the CPU usage of the second method? Any answer will be appreciated!
Canvas 2D is still supported in more places than WebGL, so if you don't need any other functionality then going with Canvas 2D means your page will work on browsers that have canvas but not WebGL (like old Android devices). Of course it will be slow on those devices and might fail for other reasons, like running out of memory if you have a lot of images.
Theoretically WebGL can be faster because the default for canvas 2D is that the drawing buffer is preserved, whereas for WebGL it's not. That means if you turn anti-aliasing off in WebGL, the browser has the option to double buffer, something it can't do with canvas 2D. Another optimization is that in WebGL you can turn off alpha, which means the browser has the option to turn off blending when compositing your WebGL with the page, again something it doesn't have the option to do with canvas 2D. (There are plans to allow turning off alpha for canvas 2D, but as of 2017/6 it's not widely supported.)
But, by option I mean just that. It's up to the browser to decide whether or not to make those optimizations.
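For reference, these are the context-creation attributes being discussed; canvas here is an assumed existing canvas element, and whether the browser actually exploits them is up to the implementation:

// A sketch of requesting the optimisation-friendly WebGL context attributes.
const gl = canvas.getContext('webgl', {
  alpha: false,       // opaque canvas: the browser can skip blending when compositing with the page
  antialias: false,   // no MSAA: the browser has the option to double buffer
  // preserveDrawingBuffer already defaults to false in WebGL, unlike canvas 2D
});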
Otherwise, if you don't get those optimizations, it's possible the two will be the same speed. That said, I haven't personally found that to be the case. I've tried to do some drawImage-only things with canvas 2D and didn't get a smooth framerate, whereas I did with WebGL. It made no sense to me, but I assumed there was something going on inside the browser that I was unaware of.
I guess that brings up the final difference. WebGL is low-level and well known. There's not much the browser can do to mess it up. Or to put it another way you're 100% in control.
With Canvas 2D, on the other hand, it's up to the browser what to do and which optimizations to make, and that might change with each release. I know that in Chrome at one point any canvas under 256x256 was NOT hardware accelerated. Another example would be what the canvas does when drawing an image. In WebGL you make the texture and you decide how complicated your shader is. In Canvas you have no idea what it's doing. Maybe it's using a complicated shader that supports all the various canvas globalCompositeOperation modes, masking, and other features. Maybe for memory management it splits images into chunks and renders them in pieces. For each browser, as well as each version of the same browser, what it decides to do is up to that team, whereas with WebGL it's pretty much 100% up to you. There's not much they can do in the middle to mess up WebGL.
FYI: Here's an article detailing how to write a WebGL version of the canvas2d drawImage function and it's followed by an article on how to implement the canvas2d matrix stack.
I'm creating a whiteboard application using ASP.NET/JavaScript. HTML5 canvas is my current tool for developing the whiteboard features (drawing, text, image, shape...). How can I save my scene so that I can load it later (and continue drawing)? I've seen a function for saving a PNG image of the canvas; is that the only way of saving canvas data, or is there another way that lets me save the canvas data along with its semantics? For instance, can I save the lines, shapes, texts and images drawn on the canvas? What are my options?
A canvas is a rasterised output of the instructions given to it; only pixel data is stored. If you need to know the steps taken to get there, you will need to log them in the JavaScript you are using to create it in the first place. Otherwise, if it was relatively simple, you could draw each element to stacked/offscreen canvases and export these separately. Obviously this could get expensive depending on how much you need to do.
There are things like this that 'undo' canvas operations, but essentially they are just storing an array of lines and redrawing them all from scratch every time. If you click undo it removes a line from the array; otherwise it adds one. To do what you want, you would need to store an array of operations to be drawn in this way onto a completely blank canvas.
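A minimal sketch of that operation-log idea (the operation shape and function names here are made up; a real whiteboard would cover text, images and shapes in the same way):

// Record each drawing operation so the scene can be serialised and replayed.
const ops = [];

function replay(ctx, op) {
  if (op.type === 'line') {
    ctx.strokeStyle = op.color;
    ctx.beginPath();
    ctx.moveTo(op.x1, op.y1);
    ctx.lineTo(op.x2, op.y2);
    ctx.stroke();
  }
  // handle 'text', 'image', 'shape' operations here as well
}

function drawLine(ctx, x1, y1, x2, y2, color) {
  const op = { type: 'line', x1, y1, x2, y2, color };
  ops.push(op);      // keep the semantic data
  replay(ctx, op);   // and rasterise it
}

function save() {
  return JSON.stringify(ops);   // send to the server or store in localStorage
}

function load(ctx, json) {
  ops.length = 0;
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const op of JSON.parse(json)) {
    ops.push(op);
    replay(ctx, op);
  }
}

Loading restores both the pixels and the operation list, so the user can keep drawing (and undoing) where they left off.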