I am trying out high-speed image display in HTML5/JS. I am fairly new to HTML5/JS and am trying to evaluate the options available for rendering images on screen.
Input is an array of pixels.
I played around with CanvasRenderingContext2D, using a canvas and putImageData to render pixels onto the screen, but FPS dropped as the image size increased. The use case I want to address is rendering raw pixel arrays onto the screen, not drawing shapes. My target is to render at the highest possible FPS.
var imageCanvas = document.getElementById('imageCanvas');
var imageContext = imageCanvas.getContext('2d', { alpha: false });
var pixelData = imageContext.createImageData(imageWidth, imageHeight);
// ---------------------------------------------------------
// here: Fill up pixelData with image pixels to be rendered
// ---------------------------------------------------------
imageContext.putImageData(pixelData, 0, 0);
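For reference, pixelData.data is a Uint8ClampedArray of RGBA bytes, four per pixel, rows top to bottom. A minimal sketch of the fill step, using a hypothetical grayscale gradient as the pixel source:

```javascript
// Fill an RGBA byte buffer (same layout as ImageData.data) with a
// horizontal grayscale gradient. The gradient is just a stand-in for
// whatever real pixel source you have.
function fillGradient(pixels, width, height) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;              // 4 bytes per pixel: R,G,B,A
      const v = Math.round((x / (width - 1)) * 255);
      pixels[i] = v;        // R
      pixels[i + 1] = v;    // G
      pixels[i + 2] = v;    // B
      pixels[i + 3] = 255;  // A (a context created with alpha:false ignores it)
    }
  }
}

const buf = new Uint8ClampedArray(4 * 4 * 4);     // 4x4 test buffer
fillGradient(buf, 4, 4);
```

In the page you would run this over pixelData.data and then call imageContext.putImageData(pixelData, 0, 0) as above.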
For such a use case, how should I proceed? I have read in many places that putImageData is the bottleneck here, and that a better way would be to use drawImage. Is it possible to avoid putImageData and instead use drawImage to read from pixelData and render?
How efficiently can I use GPU?
Does WebGLRenderingContext suit this scenario?
For high-performance graphics on an HTML canvas, there's really only one way: use WebGL, which is based on OpenGL ES.
Writing raw WebGL can be tedious, so you may want to use a library such as Three.js or PixiJS.
I'm migrating my project from Javascript Canvas to Javascript WebGL.
In some portions of my code I used Canvas's globalCompositeOperation for things such as lighting and fog.
Now that I am using WebGL I no longer have access to these, but instead have access to blendFunc. The global composite operation I am specifically looking to migrate from Canvas to WebGL is soft-light, but there are two problems:
1. I don't see that item on the list of blend functions
2. I'm not sure where to find the source for soft-light, so I do not know how to emulate it.
I suppose my question is two-part. One, are there any sources that list how the various global composite operations were implemented, and if so, are there any corresponding WebGL implementations? And if not, how can I convert soft-light to a WebGL blending function for use in my application?
That is, in Canvas I would do something like this:
context.globalCompositeOperation = 'soft-light';
// draw images
context.globalCompositeOperation = 'source-over'; // reset back to default
What would be the equivalent in WebGL? Something like this (close, but not accurate):
gl.blendFunc(gl.DST_COLOR, gl.ONE_MINUS_SRC_ALPHA);
// draw images
gl.blendFunc(gl.ONE, gl.ZERO); // reset back to default
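For what it's worth, soft-light is not expressible through blendFunc at all; the fixed-function blender can only compute weighted sums of source and destination. The canvas 'soft-light' mode follows the per-channel formula in the W3C Compositing and Blending spec, so in WebGL you would typically render the destination into a texture and apply the formula in a fragment shader. The math, transcribed from the spec into plain JavaScript for clarity (cb = backdrop channel, cs = source channel, both 0..1):

```javascript
// Soft-light blend for one color channel, per the W3C Compositing and
// Blending spec. cb = backdrop (destination), cs = source, both in 0..1.
function softLight(cb, cs) {
  if (cs <= 0.5) {
    return cb - (1 - 2 * cs) * cb * (1 - cb);
  }
  // D(cb) from the spec:
  const d = cb <= 0.25 ? ((16 * cb - 12) * cb + 4) * cb : Math.sqrt(cb);
  return cb + (2 * cs - 1) * (d - cb);
}
```

A GLSL port of this function, applied per channel in a fragment shader that samples both the source texture and a copy of the destination, reproduces the canvas behavior; gl.blendFunc alone cannot.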
Trying to make a simple animation engine for an HTML5 game. I have a timeline along the top composed of frame previews and a wireframe editor with a stickman for testing on a bigger, bottom canvas.
When I switch frames, I use drawImage() to copy the bottom canvas to a small frame preview along the top.
Is there a way to save the bottom canvas' current state in an array (each item corresponding to each frame) so that I can use drawImage() to place them as onion skins?
Here's the project if you want to see it visually:
https://jsfiddle.net/incon/owrj7e5z/
(If you look at the project, it doesn't really work yet, it's a big work in progress and is very glitchy)
This is what I was trying, and it works for previews, but won't redraw as onion skins on the bottom canvas.
var framePreviews = [undefined,undefined,undefined....];
var c = document.getElementById('bottomCanvas');
var tContext = document.getElementById('timelineCanvas').getContext('2d');
framePreviews[simulation.currentFrame] = c; // save canvas in an array
tContext.drawImage(framePreviews[simulation.currentFrame], simulation.currentFrame*60, 0, 60, 60/c.width*c.height); // draw the canvas 60px wide in the timeline
Don't.
An image is really heavy. Whatever means of saving you choose, at least one full-sized ImageBitmap of your canvas will have to be stored in your browser's memory per frame, and within a few frames you'll blow through it.
You don't do any heavy calculations in your app, so you don't need to store these as images.
Instead only save the drawing commands, and redo the drawings every time.
Now, for those who do have to save computationally heavy frames, the best option is to generate an ImageBitmap from the canvas. But once again, this should be used only if drawing that one frame takes more than 16 ms.
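A minimal sketch of the save-the-commands approach (the recorder names here are hypothetical, not part of any API):

```javascript
// Store each frame as a list of context calls instead of an image.
const frames = [];

function record(frameIndex, method, ...args) {
  (frames[frameIndex] = frames[frameIndex] || []).push({ method, args });
}

// Re-issue the stored calls against any 2d context: the real bottom
// canvas, or a tiny preview context for the timeline.
function replay(frameIndex, ctx) {
  for (const { method, args } of frames[frameIndex] || []) {
    ctx[method](...args);
  }
}

record(0, 'moveTo', 0, 0);
record(0, 'lineTo', 10, 10);
```

Onion skins then fall out naturally: lower ctx.globalAlpha, replay frame n - 1 onto the bottom canvas, restore alpha, replay frame n.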
These days, I need to draw many images on a canvas. The canvas size is 800x600px, and I have many images of 256x256px (some are smaller) to draw on it; these small images compose a complete image on the canvas. I have two ways to implement this.
First, if I use the canvas 2D context, that is context = canvas.getContext('2d'), I can just use the context.drawImage() method to put every image at the proper location on the canvas.
Alternatively, I can use WebGL to draw these images on the canvas. This way, for every small image I need to draw a rectangle the same size as the image, at the proper location on the canvas, and use the image as a texture to fill it.
Then I compared the performance of these two methods. Both reach 60 FPS, and the animation (the canvas redraws many times when I click or drag the mouse) looks very smooth. So I compared their CPU usage. I expected that with WebGL the CPU usage would be lower, because the GPU takes over much of the drawing work. But the result is that CPU usage looks almost the same. I tried to optimize my WebGL code, and I think it's good enough. Through Google, I found that browsers such as Chrome and Firefox enable hardware acceleration by default, so I tried turning hardware acceleration off. Then the CPU usage of the first method became much higher.

So, my question is: since canvas 2D uses the GPU for acceleration, is it necessary for me to use WebGL just for 2D rendering? What is the difference between canvas 2D GPU acceleration and WebGL? They both use the GPU. Is there any other method to lower the CPU usage of the second method? Any answer will be appreciated!
Canvas 2D is still supported more places than WebGL so if you don't need any other functionality then going with Canvas 2D would mean your page will work on those browsers that have canvas but not WebGL (like old android devices). Of course it will be slow on those devices and might fail for other reasons like running out of memory if you have a lot of images.
Theoretically WebGL can be faster because the default for canvas 2D is that the drawing buffer is preserved, whereas for WebGL it's not. That means if you turn anti-aliasing off in WebGL, the browser has the option to double-buffer, something it can't do with canvas 2D. Another optimization: in WebGL you can turn off alpha, which means the browser has the option to skip blending when compositing your WebGL canvas with the page, again something it can't do with canvas 2D. (There are plans to allow turning off alpha for canvas 2D as well, but as of 2017/6 it's not widely supported.)
But, by option I mean just that. It's up to the browser to decide whether or not to make those optimizations.
Otherwise, if you don't pick those optimizations, it's possible the two will be the same speed. I haven't personally found that to be the case, though. I've tried to do some drawImage-only things with canvas 2D and didn't get a smooth framerate, whereas I did with WebGL. It made no sense to me, but I assumed there was something going on inside the browser that I was unaware of.
I guess that brings up the final difference. WebGL is low-level and well known. There's not much the browser can do to mess it up. Or to put it another way you're 100% in control.
With canvas 2D, on the other hand, it's up to the browser what to do and which optimizations to make. These might change with each release. I know that for Chrome, at one point, any canvas under 256x256 was NOT hardware accelerated. Another example is what the canvas does when drawing an image. In WebGL you make the texture and you decide how complicated your shader is. In canvas you have no idea what it's doing. Maybe it's using a complicated shader that supports all the various canvas globalCompositeOperation modes, masking, and other features. Maybe for memory management it splits images into chunks and renders them in pieces. For each browser, as well as each version of the same browser, what it decides to do is up to that team, whereas with WebGL it's pretty much 100% up to you. There's not much they can do in the middle to mess up WebGL.
FYI: Here's an article detailing how to write a WebGL version of the canvas2d drawImage function and it's followed by an article on how to implement the canvas2d matrix stack.
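The matrix stack mentioned in that second article comes down to very little code. A rough sketch of the idea (the names here are mine, not the article's), using the same [a, b, c, d, e, f] layout as canvas setTransform:

```javascript
// Minimal 2D matrix stack, canvas-style [a, b, c, d, e, f] layout.
function multiply(A, B) {           // result applies B first, then A
  return [
    A[0] * B[0] + A[2] * B[1],
    A[1] * B[0] + A[3] * B[1],
    A[0] * B[2] + A[2] * B[3],
    A[1] * B[2] + A[3] * B[3],
    A[0] * B[4] + A[2] * B[5] + A[4],
    A[1] * B[4] + A[3] * B[5] + A[5],
  ];
}

const stack = [[1, 0, 0, 1, 0, 0]];                 // start with identity
const current = () => stack[stack.length - 1];

function translate(tx, ty) { stack[stack.length - 1] = multiply(current(), [1, 0, 0, 1, tx, ty]); }
function scale(sx, sy)     { stack[stack.length - 1] = multiply(current(), [sx, 0, 0, sy, 0, 0]); }
function save()            { stack.push(current().slice()); }
function restore()         { stack.pop(); }

// Apply the current matrix to a point, as a vertex shader would.
function transformPoint(x, y) {
  const m = current();
  return [m[0] * x + m[2] * y + m[4], m[1] * x + m[3] * y + m[5]];
}
```

In a WebGL drawImage you would upload current() (expanded to a 3x3 or 4x4) as a uniform for each quad you draw.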
I'm new to HTML5 and JavaScript, and I'm trying to use the canvas element to create a high(ish) quality output image.
The current purpose is to allow users to add their own images (JPEG, PNG, SVG). This works like a charm. However, when I render the image it's always a 32-bit PNG. How can I create a higher-quality picture, preferably using JavaScript?
When I output the file, it always keeps the same resolution as the canvas. How can I change this, preferably using JavaScript?
Thanks in Advance guys, I looked around for a while but I couldn't find the answer to this :(
If you want to create a larger image with getImageData or toDataURL then you have to:
1. Create an in-memory canvas that is larger than your normal canvas
2. Redraw your entire scene onto your in-memory canvas. You will probably want to call ctx.scale(theScale, theScale) on your in-memory context in order to scale your images, but this heavily depends on how they were created in the first place (images? canvas paths?)
3. Call getImageData or toDataURL on the in-memory canvas and not your normal canvas
By in-memory canvas I mean:
var inMem = document.createElement('canvas');
// The larger size
inMem.width = blah;
inMem.height = blah;
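A sketch of those steps as a single helper. Here createCanvas is a hypothetical factory (in a browser you would pass () => document.createElement('canvas')) and redraw is whatever function repaints your whole scene on a 2d context; both names are mine, not part of any API:

```javascript
// Render the scene onto a larger in-memory canvas and export it.
function exportScaled(sourceCanvas, scaleFactor, redraw, createCanvas) {
  const out = createCanvas();
  out.width = sourceCanvas.width * scaleFactor;    // larger than on-screen
  out.height = sourceCanvas.height * scaleFactor;
  const ctx = out.getContext('2d');
  ctx.scale(scaleFactor, scaleFactor);  // same drawing code fills larger canvas
  redraw(ctx);                          // repaint the whole scene
  return out.toDataURL('image/png');    // read from the in-memory canvas
}
```

The factory is injected only so the logic is easy to exercise outside a browser; in a page you would inline document.createElement('canvas').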
Well, firstly, when you draw the image to the canvas it's not a PNG quite yet; it's a simple raw bitmap that the canvas API operates on as you call its methods. It's only converted to a PNG for the browser to display, and that's what you get when you use toDataURL. When using toDataURL you're able to choose which image format you want the output to be; I think JPEG and BMP are supported along with PNG. (Don't think of it as a PNG converted to another format, because it's not.)
And I'm not sure what exactly you mean by higher quality. If you mean adding more bits per pixel: 32 bits are enough for all RGBA channels to have true-color 8-bit depth, giving you more colors than the human eye can distinguish at once. Depending on the lighting and the angle at which the user views your picture, their perception of the colors may vary, but not the quality of the image, which I'd say depends only on its resolution. Anyway, the canvas was not designed to work with deeper colors, and honestly that much color information isn't necessary for any kind of scene you could render on a canvas; it's only relevant for high-definition movies and games made by big studios. Also, even if you could use deep colors on the canvas, it would depend on support from the user's video card and screen, which most people don't have.
If you wish to add information not directly related to the color of each pixel, but perhaps to potential transformations it could undergo, you'd be better off creating your own type that carries the imageData accepted by the canvas API, keeping its 32-bit format, plus the additional information in a corresponding array.
And yes, the output image has the same resolution as the canvas does, but there are a couple of ways to resize your final composition. Just do as Simon Sarris said: create an offscreen canvas whose resolution is the final resolution you want your image to be. Then you can either:
Resize the raster image by calling drawImage with the image generated by toDataURL, making use of the resizing parameters
Redraw your scene there, as Simon said, which will reduce quality loss if your composition contains shapes created through the canvas API and not only image files put together
If you know beforehand the final resolution you want, just set the width and height of the canvas to it; the CSS width and height can be set differently so that your canvas fits in your webpage as you wish.
I am trying to perform simple operations on image using javascript. To get the pixels of the image, I am drawing the image on canvas and then get the ImageData from canvas. But for large images, drawing them on the canvas takes a lot of time.
Is there any other way of getting the image pixels without using the canvas element?
I don't think you can do image manipulation in JavaScript with hardware acceleration, so whether it's canvas or another technology, you won't gain much marginal benefit within the JavaScript environment.
If you want your code to be hardware accelerated, it must be compiled into something that runs on a GPU and has access to graphics memory. Flash's and Silverlight's approach is to introduce shading languages like AGAL and HLSL. Maybe JavaScript will have something like this in the future.
So my suggestion is to do image processing with:
Server side technology
Flash and Silverlight with shading language
The example below uses MarvinJ. It takes a colored image at 2800x2053 resolution and iterates through each pixel, calculating the gray value and setting it back. Above the canvas there is a div to print the processing time. On my machine it took 115 ms, including image loading.
var time = new Date().getTime();
var canvas = document.getElementById("canvas");
var image = new MarvinImage();
image.load("https://i.imgur.com/iuGXHFj.jpg", imageLoaded);

function imageLoaded(){
  for(var y=0; y<image.getHeight(); y++){
    for(var x=0; x<image.getWidth(); x++){
      var r = image.getIntComponent0(x,y);
      var g = image.getIntComponent1(x,y);
      var b = image.getIntComponent2(x,y);
      var gray = Math.floor(r * 0.21 + g * 0.72 + b * 0.07);
      image.setIntColor(x,y,255,gray,gray,gray);
    }
  }
  image.draw(canvas);
  $("#time").html("Processing time: "+(new Date().getTime()-time)+"ms");
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://www.marvinj.org/releases/marvinj-0.7.js"></script>
<div id="time"></div>
<canvas id="canvas" width="2800" height="2053"></canvas>
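The same per-pixel math also works without any library, directly on the raw RGBA buffer a canvas gives you via ctx.getImageData(0, 0, w, h).data. A minimal sketch:

```javascript
// Grayscale a raw RGBA buffer in place, using the same 0.21/0.72/0.07
// luminance weights as the MarvinJ loop above.
function toGray(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    const gray = Math.floor(
      pixels[i] * 0.21 + pixels[i + 1] * 0.72 + pixels[i + 2] * 0.07
    );
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
    // pixels[i + 3] (alpha) stays as-is
  }
}

const px = new Uint8ClampedArray([255, 0, 0, 255]); // one pure-red pixel
toGray(px);
```

After the loop you would write the buffer back with ctx.putImageData.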
I don't have much experience with JavaScript, but Max Novakovic (#betamax) wrote a nice tiny jQuery plugin called $.getImageData which might be helpful.
You can read more about $.getImageData on the disturb blog and the plugin page
You should be able to access pixels through something like context.getImageData(x, y, width, height).data.
Also, you mentioned hardware acceleration; that should be available in Firefox 4 and Chrome. Shadertoy, for example, uses GLSL shaders in the browser.
I'm not sure, but might it be possible to write the image data as a string, and manipulate the string, then translate it back into an image before showing the resulting image?
This wouldn't require drawing to canvas, only loading the image data as a string instead of an image. It would, however, involve some complex and possibly difficult-to-get-right string manipulation, to make sure the image data is translated correctly, and more so to manipulate the pixels.
It would also mean the string is likely to be very long for larger images, so this might take more memory than canvas would and could potentially need to be split into multiple strings. In exchange it may be faster than drawing to canvas, especially with multiple strings if you only need to manipulate part of the image.
I'm not experienced with image manipulation, but theoretically there's no reason a file's data couldn't be written into a string. It is just, again, going to make a very long string, with a possible RAM impact, because the string could take up more RAM than the image file itself. But it will load faster, since it doesn't have to process the data as much as it would to draw to canvas.