I am streaming video over a WebSocket by sending each frame in the raw ImageData format (4 bytes per pixel in RGBA order). When I receive each frame on the client (as an ArrayBuffer), I want to paint this image directly onto the canvas as efficiently as possible, using putImageData.
This is my current solution:
// buffer is an ArrayBuffer representing a properly-formatted image
var array = new Uint8ClampedArray(buffer);
var image = new ImageData(array, width, height);
canvas.putImageData(image, 0, 0); // canvas here is the 2D rendering context, which is what exposes putImageData
But it is rather slow. My theories as to why:
The array (which is ~1MB in size) is being copied three times: once into the Uint8ClampedArray, once into the ImageData, and lastly into the canvas, every frame (30 times per second).
I am using new twice for each frame, which may be a problem for the garbage collector.
Are these theories correct and if so, what tricks can I employ to make this as fast as possible? I am willing to accept an answer that is browser-specific.
No, both your ImageData (image) and your TypedArray (array) share the exact same ArrayBuffer (buffer).
These are just views onto it; your original buffer is never "copied".
var ctx = document.createElement('canvas').getContext('2d');
var buffer = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height).data.buffer;
var array = new Uint8ClampedArray(buffer);
var image = new ImageData(array, ctx.canvas.width, ctx.canvas.height);
// all three views share the same underlying ArrayBuffer
console.log(array.buffer === buffer && image.data.buffer === buffer); // true
For your processing-time issue, the best way would be to send the video stream directly to a video element and use drawImage.
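For example, a minimal sketch (assuming the stream can be fed into a video element, e.g. via MediaSource; the element selectors are placeholders):

// assumes `video` is a playing <video> element fed by the stream
var video = document.querySelector('video');
var ctx = document.querySelector('canvas').getContext('2d');

(function paint() {
    ctx.drawImage(video, 0, 0, ctx.canvas.width, ctx.canvas.height);
    requestAnimationFrame(paint);
})();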
I'm trying to figure out how exactly buffers work in WebGL and I'm a little stuck here. Below are my guesses - please confirm or deny them.
const positions = new Float32Array([
  -1, 1,
  -0.5, 0,
  -0.25, 0.25,
]);
let buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
1. We create an array of floats in RAM via JS.
2. WebGL creates an empty buffer directly on the GPU and returns a reference to this buffer to JS. Now the variable buffer is a pointer.
3. Point gl.ARRAY_BUFFER at the buffer.
4. Now we copy data from RAM to the GPU buffer.
5. Unbind the buffer from gl.ARRAY_BUFFER (but the buffer is still available on the GPU and we can rebind it later).
So why can't we just call createBuffer() with positions instead of using ARRAY_BUFFER as a bridge between JS and the GPU? Is this just a limitation of the OpenGL API, or is there a strong reason not to do it? Correct me if I'm wrong, but allocating memory of a known size is faster than allocating some memory and then reallocating it to the size of positions once we call bufferData.
"Because that's the API" is the only real answer.
Many people agree with you that a different API would be better. It's one reason why there are newer APIs (DirectX 11/12, Vulkan, Metal, WebGPU).
But the description in the question isn't technically correct. Here it is again, corrected:
1. We create an array of floats in RAM via JS.
2. WebGL creates an object that represents a GPU buffer (nothing is allocated on the GPU yet).
3. Point gl.ARRAY_BUFFER at the buffer.
4. Now we allocate a buffer and copy data from RAM to the GPU buffer.
5. Unbind the buffer from gl.ARRAY_BUFFER (but the buffer is still available on the GPU and we can rebind it later).
Step 5 is not needed. There is no reason to unbind the buffer.
You can think of it like this. Imagine you had a JavaScript class that drew an image to a canvas, but the image was passed in the same way buffers are in your example. Here's the code:
class Context {
    constructor(canvas) {
        this.ctx = canvas.getContext('2d');
    }
    bindImage(img) {
        this.img = img;
    }
    drawImage(x, y) {
        this.ctx.drawImage(this.img, x, y);
    }
}
Now let's say you want to draw 3 images:
const ctx = new Context(someCanvas);
ctx.bindImage(image1);
ctx.drawImage(0, 0);
ctx.bindImage(image2);
ctx.drawImage(10, 10);
ctx.bindImage(image3);
ctx.drawImage(20, 20);
This will work just fine. There's no reason to do:
const ctx = new Context(someCanvas);
ctx.bindImage(image1);
ctx.drawImage(0, 0);
ctx.bindImage(null); // not needed
ctx.bindImage(image2);
ctx.drawImage(10, 10);
ctx.bindImage(null); // not needed
ctx.bindImage(image3);
ctx.drawImage(20, 20);
ctx.bindImage(null); // not needed
It's the same in WebGL. There are times to bind null to something, for example
gl.bindFramebuffer(gl.FRAMEBUFFER, null); // start drawing to the canvas
but most of the time unbinding is just a programmer's personal preference, not something the API itself requires.
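As a rough sketch (the buffers, program and attribute locations below are assumed to come from earlier setup code), a typical draw sequence just rebinds whatever it needs next and never unbinds:

// positionBuffer, colorBuffer, positionLoc and colorLoc are assumed to exist from setup
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.enableVertexAttribArray(colorLoc);
gl.vertexAttribPointer(colorLoc, 4, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLES, 0, 3);
// no gl.bindBuffer(gl.ARRAY_BUFFER, null) needed anywhere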
references:
https://webglfundamentals.org/webgl/lessons/webgl-attributes.html
https://webglfundamentals.org/webgl/lessons/resources/webgl-state-diagram.html
https://stackoverflow.com/a/28641368/128511
Note that even my description above isn't technically correct. Whether or not step 4 copies data to the GPU is undefined. It could just copy the data to RAM and only at draw time, if the buffer is used and it hasn't yet been copied to the GPU, copy it then. Plenty of drivers do that. For a more concrete example of a driver not copying data to the GPU when it seems like it would, see this answer and this one.
I am writing a data viz app that requires me to process very large 2D arrays of data and convert that data into a scaled down image for display in a canvas in the DOM.
I am bumping up against DOM canvas size limitations. My arrays can be as large as 5000 x 5000. I want to get around the canvas size limitation by using createImageBitmap() to simultaneously scale down and convert the large array to an ImageBitmap of a smaller size - 256 x 256 - for insertion into an on-screen canvas.
How can I convert the raw array data into the proper format? Will this approach work?
You can create and manipulate your image before rendering it to the canvas. 5000 x 5000 shouldn't be too large for a canvas, though. What limitations are you running into? The answer here covers resizing on a canvas and then grabbing the data.
var raw = new Uint8ClampedArray(5000 * 5000 * 4); // 4 bytes per pixel (RGBA)
var imageData = new ImageData(raw, 5000, 5000);
// createImageBitmap returns a Promise that resolves to the ImageBitmap
createImageBitmap(imageData).then(function (bitmap) {
    // use the bitmap, e.g. ctx.drawImage(bitmap, 0, 0);
});
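If the goal is the 256 x 256 down-scale mentioned in the question, one option is to let createImageBitmap do the scaling itself. This is only a sketch: resizeWidth/resizeHeight/resizeQuality are optional ImageBitmapOptions and are not supported in every browser.

// sketch: scale while converting, then draw onto a small on-screen canvas
createImageBitmap(imageData, { resizeWidth: 256, resizeHeight: 256, resizeQuality: 'high' })
    .then(function (bitmap) {
        var ctx = document.querySelector('canvas').getContext('2d'); // a 256 x 256 canvas is assumed
        ctx.drawImage(bitmap, 0, 0);
    });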
I am retrieving pixels from a canvas ImageData object, and I'm doing that a lot.
I think inserting into and retrieving from the canvas ImageData is expensive in CPU time, so I want to do it as few times as possible.
One way of cutting that down would be to make a single insert that writes multiple pixels in one sequence, but so far I have not been able to see how that would be done. All the examples I have seen so far retrieve and insert only a single pixel.
So the question is: in order to speed up canvas ImageData pixel manipulation, how do I insert/retrieve multiple pixels simultaneously?
Just select a larger region when retrieving a pixel buffer:
var imageData = ctx.getImageData(x, y, width, height); // width and height are not limited to one pixel
Now your data buffer will contain all pixels for the given region. To get the whole canvas:
var imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
Adjust them and put back into the same position:
ctx.putImageData(imageData, x, y);
and you're done.
Remember that each pixel consists of four bytes (RGBA). To address a pixel in a larger buffer you can do:
function getPixelIndex(x, y) {
    return (y * width + x) * 4; // width used when getting the buffer
}
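For example, a minimal sketch that uses this helper to invert the red channel of one pixel (px and py are placeholder pixel coordinates inside a buffer grabbed from the origin):

var imageData = ctx.getImageData(0, 0, width, height);
var i = getPixelIndex(px, py);               // byte offset of the pixel
imageData.data[i] = 255 - imageData.data[i]; // R (G, B and A follow at i+1..i+3)
ctx.putImageData(imageData, 0, 0);           // write the buffer back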
Tips:
If you plan to update the same buffer often, simply retrieve the buffer once and keep a reference to it; update it when you need to, put it back, then reuse the same buffer. This way you save the time spent getting the buffer. This won't work if you apply graphics to the canvas with the standard methods in the meantime.
You can also start with an empty buffer using createImageData() instead of getImageData().
If your pixel color data is more or less static you can update the buffer using a Uint32Array instead of the Uint8ClampedArray. You get a 32-bit version like this after getting the imageData:
var buffer32 = new Uint32Array(imageData.data.buffer);
Your new buffer32 will point to the same underlying byte buffer, so there is no significant memory overhead, but it allows you to read and write 32-bit values instead of just 8-bit ones. Just be aware that the byte order is (typically) little-endian, so order the bytes as ABGR. Then do as before: call ctx.putImageData(imageData, x, y); when you need to update.
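As a minimal sketch (assuming a little-endian machine, which is the common case; px and py are placeholder pixel coordinates), writing one opaque red pixel through the 32-bit view looks like this:

var imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
var buffer32 = new Uint32Array(imageData.data.buffer);
// little-endian: the 32-bit literal reads as 0xAABBGGRR,
// so 0xff0000ff is alpha = 255, blue = 0, green = 0, red = 255 (opaque red)
buffer32[py * ctx.canvas.width + px] = 0xff0000ff;
ctx.putImageData(imageData, 0, 0);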
I can capture a full canvas with .toDataURL without a problem. But I do not see or know if there is any way to capture only a portion of the canvas and save that to an image.
E.g., a Mr. Potato Head script draws hats, hands, feet, faces, etc., mixed all over the canvas, and you can drag and drop them onto the Mr. Potato in the center of the canvas. Press a button and it saves the image of Mr. Potato looking all spiffy to a JPG for you, without all the extra hats/feet/faces in the image.
I have resigned myself to the fact that this is impossible based on everything I've read. But you folks have proven to be smarter than Google (or at least Google in my hands) a few times, so I am taking a shot.
Sorry no code to post this time... unless you want this:
var canvas = document.getElementById("mrp");
var dataUrl = canvas.toDataURL();
window.open(dataUrl, "toDataURL() image", "width=800, height=600");
But that is just the toDataURL example I am working off of, and it works apart from the fact that it doesn't capture just the Mr. Potato.
My fallback is to pass the image to PHP, work with it there to cut out everything I don't want, then pass it back.
EDIT
tmcw had a method for doing this. Not sure if it's the way it SHOULD be done, but it certainly works.
document.getElementById('grow').innerHTML="<canvas id='dtemp' ></canvas>";
var SecondaryCanvas = document.getElementById("dtemp");
var SecondaryCanvas_Context = SecondaryCanvas.getContext ("2d");
SecondaryCanvas_Context.canvas.width = 600;
SecondaryCanvas_Context.canvas.height = 600;
var img = new Image();
img.src = MainCanvas.toDataURL('image/png');
SecondaryCanvas_Context.drawImage(img, -400, -300);
var du = SecondaryCanvas.toDataURL();
window.open(du, "toDataURL() image", "width=600, height=600");
document.getElementById('grow').innerHTML="";
grow is an empty span tag; SecondaryCanvas is a var created just for this.
SecondaryCanvas_Context is the getContext of SecondaryCanvas.
img is created just to store the .toDataURL() of the main canvas containing the Mr. Potato Head.
drawImage with negative (-) offsets moves the image of MainCanvas so that just the portion I want is showing.
Then capture the new canvas that was just created and open a new window with the .png.
Oh, and if you get an error from the script saying security err 18, it's because you forgot to rename imgTop to img along with the rest of the variables you copy-pasted, and Chrome doesn't like it when you try to save local content images like that.
Here's a method that uses an off-screen canvas:
// create an off-screen canvas of the desired output size
var canvas = document.createElement('canvas');
canvas.width = desiredWidth;
canvas.height = desiredHeight;
// copy the (x, y, w, h) region of the original canvas into it, scaled to the new size
canvas.getContext('2d').drawImage(originalCanvas, x, y, w, h, 0, 0, desiredWidth, desiredHeight);
var result = canvas.toDataURL();
Create a new Canvas object of a specific size, use drawImage to copy a specific part of your canvas to a specific area of the new one, and use toDataURL() on the new canvas.
A bit more efficient (and maybe cleaner) way of extracting part of the image:
// x,y are position in the original canvas you want to take part of the image
// desiredWidth,desiredHeight is the size of the image you want to have
// get raw image data
var imageContentRaw = originalCanvas.getContext('2d').getImageData(x,y,desiredWidth,desiredHeight);
// create new canvas
var canvas = document.createElement('canvas');
// with the correct size
canvas.width = desiredWidth;
canvas.height = desiredHeight;
// put there raw image data
// expected to be faster as there is no scaling, etc.
canvas.getContext('2d').putImageData(imageContentRaw, 0, 0);
// get the image data (encoded as base64)
var result = canvas.toDataURL("image/jpeg", 1.0);
If you are using Fabric.js (which the canvas._objects reference below suggests), you can give left, top, width and height parameters to its toDataURL function; note that the plain HTML canvas element's toDataURL does not take these options. Here is the code to get the image data for a particular object on the canvas:
var mainObj = canvas._objects[0]; // your desired object
var image = canvas.toDataURL({ left: mainObj.left, top: mainObj.top,
    width: mainObj.width * mainObj.scaleX, height: mainObj.height * mainObj.scaleY });
I have a Uint32Array I am trying to convert to a texture for WebGL. To do this I'm writing the array as RGBA values on a Canvas and getting a base64 encoded PNG from the canvas to send as a texture.
Whenever I set a pixel value to have an alpha of 0, the corresponding RGB channels are also zeroed upon conversion to a PNG. Is this an implementation detail? If I were to create PNGs in some other non-HTML5 program could I have an (RGBA) quadruplet of (255,255,255,0)? I tried using an alpha value of 1 and all other channels remain intact, so this is not an issue of premultiplied alpha.
Here is some JavaScript code to reproduce this effect:
var img = new Image();
var canvasObj = $('<canvas width="1" height="1"></canvas>');
var context = canvasObj[0].getContext('2d');
var imgd = context.getImageData(0, 0, 1, 1);
var pix = imgd.data;
pix[0] = 255; pix[1] = 255; pix[2] = 255; pix[3] = 0; // white, but fully transparent
context.putImageData(imgd, 0, 0);
img.onload = function () {
    // draw the round-tripped PNG back onto the canvas once it has decoded
    context.drawImage(img, 0, 0);
    var imgd2 = context.getImageData(0, 0, 1, 1);
    var pix2 = imgd2.data;
};
img.src = canvasObj[0].toDataURL("image/png");
pix2 will be all 0s.
Thanks!
It appears to be part of the PNG specification (http://www.libpng.org/pub/png/spec/1.2/png-1.2-pdg.html).
...fully transparent pixels should all be assigned the same
color value for best compression.
I couldn't find a direct source, but it seems like this particular implementation sets all the channels to zero.