I wonder if there is any way to get the pixel data for the currently drawn path in the canvas tag.
I can calculate the pixel data on my own when drawing simple shapes like a square or a line, but things get messy with more complicated shapes like an ellipse or even a simple circle.
The reason I'm asking is that I'm working on a web application that sends canvas pixel data to the server whenever I add a path to the canvas. The server needs to keep its own copy of the entire canvas, and for efficiency reasons I really don't want to send the ENTIRE canvas image on every single change, only the delta...
Thanks.
Step 1: Draw the paths to a second canvas of the same size as your original; let's call it the 'path canvas'.
Step 2: Set the globalCompositeOperation of the path canvas to 'destination-in'.
Step 3: Draw your original canvas onto the path canvas.
Step 4: Loop through all the pixels of the path canvas and store the ones that are not transparent in whatever format you're sending to the server.
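For example, here is a minimal sketch of those four steps; original is your main canvas and drawMyPath stands in for whatever path-drawing calls you just made on it (both names are placeholders):

var pathCanvas = document.createElement('canvas');
pathCanvas.width = original.width;
pathCanvas.height = original.height;
var pctx = pathCanvas.getContext('2d');

drawMyPath(pctx); // Step 1: same path, same styles as on the original

pctx.globalCompositeOperation = 'destination-in'; // Step 2
pctx.drawImage(original, 0, 0);                   // Step 3
pctx.globalCompositeOperation = 'source-over';    // restore the default

// Step 4: collect every non-transparent pixel as the delta
var img = pctx.getImageData(0, 0, pathCanvas.width, pathCanvas.height);
var delta = [];
for (var i = 0; i < img.data.length; i += 4) {
  if (img.data[i + 3] !== 0) { // alpha > 0
    var p = i / 4;
    delta.push({
      x: p % pathCanvas.width,
      y: Math.floor(p / pathCanvas.width),
      rgba: [img.data[i], img.data[i + 1], img.data[i + 2], img.data[i + 3]]
    });
  }
}
// delta now holds only the changed pixels, ready to serialize for the server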
Trying to make a simple animation engine for an HTML5 game. I have a timeline along the top, composed of frame previews, and a wireframe editor with a stickman for testing on a bigger canvas at the bottom.
When I switch frames, I use drawImage() to copy the bottom canvas to a small frame preview along the top.
Is there a way to save the bottom canvas' current state in an array (each item corresponding to each frame) so that I can use drawImage() to place them as onion skins?
Here's the project if you want to see it visually:
https://jsfiddle.net/incon/owrj7e5z/
(If you look at the project, it doesn't really work yet; it's a big work in progress and is very glitchy.)
This is what I was trying, and it works for previews, but won't redraw as onion skins on the bottom canvas.
var framePreviews = [undefined,undefined,undefined....];
var c = document.getElementById('bottomCanvas');
var tContext = document.getElementById('timelineCanvas').getContext('2d');
framePreviews[simulation.currentFrame] = c; // save canvas in an array
tContext.drawImage(framePreviews[simulation.currentFrame], simulation.currentFrame*60, 0, 60, 60/c.width*c.height); // draw the canvas 60px wide in the timeline
Don't.
An image is really heavy. Whatever means of saving you choose, at least one ImageBitmap of your full-sized canvas will have to be kept in your browser's memory per saved frame, and within a few frames you'll blow it out.
You don't do any heavy calculations in your app, so you don't need to store these as images.
Instead, save only the drawing commands and redo the drawing every time.
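As a rough illustration of what that can look like (the command format and helper names here are made up, not a real API): each frame stores plain method names and arguments, and an onion skin is just a replay at reduced alpha.

var frames = []; // frames[i] = list of drawing commands for frame i

function record(frameIndex, method, args) {
  (frames[frameIndex] = frames[frameIndex] || []).push({ method: method, args: args });
}

function replayFrame(ctx, frameIndex, alpha) {
  ctx.globalAlpha = alpha; // e.g. 0.3 for an onion skin, 1.0 for the live frame
  (frames[frameIndex] || []).forEach(function (cmd) {
    ctx[cmd.method].apply(ctx, cmd.args); // works for method calls like lineTo;
  });                                     // properties like strokeStyle would
  ctx.globalAlpha = 1.0;                  // need their own command type
}

// Usage: record(0, 'fillRect', [10, 10, 50, 50]);
// replayFrame(bottomContext, simulation.currentFrame - 1, 0.3); // onion skin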
Now, for those who really do have to save computationally heavy frames, the best option is to generate an ImageBitmap from the canvas; but once again, this should be used only if drawing that one frame takes more than 16ms.
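If you do end up in that case, here is a short sketch using the real createImageBitmap API (support still varies by browser, so check before relying on it):

var frameBitmaps = [];

function snapshotFrame(canvas, frameIndex) {
  createImageBitmap(canvas).then(function (bitmap) {
    // Release the previous bitmap for this frame to free memory early
    if (frameBitmaps[frameIndex]) frameBitmaps[frameIndex].close();
    frameBitmaps[frameIndex] = bitmap; // drawImage(bitmap, ...) accepts it later
  });
}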
Question:
Using WebGL, what is the most efficient way I can draw a bitmap on the offscreen buffer given that a raster operation must be performed for every pixel/fragment? Raster operations may vary between draw calls.
Context:
As part of the application I am developing, I have to develop a function used to copy a bitmap onto a destination buffer. The bitmap can be smaller than the canvas and is placed using offsets for the X and Y co-ordinates. Also, given a ROP code, a raster operation can be performed when copying the bitmap. The full list of ROPs can be found here.
In OpenGL ES 1.x, there was a function called glLogicOp() which performed the required binary operations when drawing. However, since WebGL is based on OpenGL ES 2.0, this function does not exist. I cannot yet think of a straightforward way to solve this problem. I have a few ideas, but I am not yet confident they will offer optimal performance.
Possible solutions:
Perform the ROPs in JavaScript by keeping a typed array with the colour data for the offscreen buffer (see the sketch after these options). This, however, relies on the CPU to compute the ROP for each pixel. On draw, I would upload the offscreen data to a texture and draw it onto the offscreen buffer using a simple shader.
Use the ping-pong texture technique described here. With 2 textures, I could sample from one and write to the other; however, I would be forced to draw the whole offscreen buffer on every draw call, even when the bitmap to be drawn covers only a small fraction of the canvas. A possible optimization would be to switch the textures only when an ROP dependent on the destination pixel is used.
Use readPixels() to read data from the offscreen buffer before a draw call requiring an ROP dependent on the destination pixel. The data of the pixels on which the bitmap will be drawn is read, uploaded to a texture, and passed to the shader together with the bitmap itself to be used for sampling. The readPixels function, however, is said to be one of the slowest WebGL functions, since it requires synchronization between the CPU and GPU.
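To make solution 1 concrete, here is a sketch of a CPU-side blit for one sample ROP (XOR); offscreen is a Uint8Array of RGBA data for the whole offscreen buffer, and all names are illustrative:

function blitWithXor(offscreen, bufWidth, bmp, bmpW, bmpH, dx, dy) {
  // (bounds clipping omitted for brevity)
  for (var y = 0; y < bmpH; y++) {
    for (var x = 0; x < bmpW; x++) {
      var s = (y * bmpW + x) * 4;
      var d = ((dy + y) * bufWidth + (dx + x)) * 4;
      offscreen[d]     ^= bmp[s];     // XOR each color channel
      offscreen[d + 1] ^= bmp[s + 1];
      offscreen[d + 2] ^= bmp[s + 2];
      offscreen[d + 3]  = 255;        // keep the result opaque
    }
  }
}
// Afterwards, upload offscreen with gl.texImage2D (or gl.texSubImage2D for just
// the dirty rectangle) and draw a textured quad with a pass-through shader.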
EDIT
Using the ping-pong texture technique, I think I will have to draw the whole screen every time, since for every draw I need a representation of the current offscreen contents (the screen before the draw) in order to sample the destination pixel correctly. Unless I draw the whole screen every time, the two textures would each hold only alternate drawings rather than the real offscreen image.
It is easier to show this visually but consider this example:
1) I draw a rectangle on the left half of the screen with the first draw call having tex1 as sampling source and tex2 as drawing destination.
2) I draw a rectangle on the right half of the screen with the second draw call having tex2 as sampling source and tex1 as drawing destination.
3) When I try to draw a horizontal line across the whole screen on the next draw call, since tex1 (the source) does not contain the rectangle from the first draw call, sampling the left part of the screen would yield the wrong destination-pixel colors.
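For reference, here is a compressed sketch of how one ping-pong pass (solution 2) could be wired up. ropProgram, createTargetTexture, createFramebufferFor and drawFullScreenQuad are hypothetical helpers, and the shader implements a simple NOT-destination ROP, since true bitwise ops like XOR are not available in WebGL 1 shaders:

// Fragment shader: reads the previous offscreen contents and inverts them
// (assumes uDst was bound to texture unit 0 when ropProgram was set up)
var fragSrc =
  'precision mediump float;\n' +
  'uniform sampler2D uDst;\n' +  // texture holding the screen before this draw
  'varying vec2 vUv;\n' +
  'void main() {\n' +
  '  vec4 dst = texture2D(uDst, vUv);\n' +
  '  gl_FragColor = vec4(1.0 - dst.rgb, dst.a);\n' + // DSTINVERT-style ROP
  '}\n';

// Two texture/framebuffer pairs: sample from one, render into the other, swap
var src = { tex: createTargetTexture(gl), fbo: null };
var dst = { tex: createTargetTexture(gl), fbo: null };
src.fbo = createFramebufferFor(gl, src.tex);
dst.fbo = createFramebufferFor(gl, dst.tex);

function drawWithRop() {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fbo); // render into the other texture
  gl.useProgram(ropProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, src.tex);      // previous offscreen contents
  drawFullScreenQuad(gl);                      // full-screen pass, as the EDIT explains
  var tmp = src; src = dst; dst = tmp;         // swap roles for the next draw
}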
I'm working on image editing in JavaScript. I have to create masks with different tools (rectangle, brush, magic wand...) and save them to the database. To avoid sending all the pixel coordinates, I'd like to vectorize the mask and send only the vertices.
How can I get those vertices from the CANVAS context?
The canvas is a raster surface. If you are treating it as a vector surface then you will need to record all of the operations applied in a separate data structure and send that to the server.
You may find that using SVG makes more sense than canvas in your case.
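If you stay with canvas, the recording can be as simple as pushing each tool's vertices into a list and replaying it; the structure below is made up for illustration:

var maskOps = []; // one entry per tool action, vertices instead of pixels

function addRectangle(x, y, w, h) {
  maskOps.push({ tool: 'rect', vertices: [[x, y], [x + w, y], [x + w, y + h], [x, y + h]] });
}

function addPolygon(points) { // e.g. a traced brush or magic-wand outline
  maskOps.push({ tool: 'poly', vertices: points });
}

// The mask is redrawn from the recorded vertices...
function renderMask(ctx) {
  maskOps.forEach(function (op) {
    ctx.beginPath();
    op.vertices.forEach(function (pt, i) {
      i === 0 ? ctx.moveTo(pt[0], pt[1]) : ctx.lineTo(pt[0], pt[1]);
    });
    ctx.closePath();
    ctx.fill();
  });
}

// ...and the very same structure is what you send to the server:
// JSON.stringify(maskOps)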
I have already cropped an image into an irregular shape. I need to insert an image into the cropped part. I have used HTML5 and JavaScript (kinetic.js) to do this (see: http://imgur.com/Lyt3j) and have done the area plotting. I don't want predefined shapes like rect, poly, etc.; I need a user-defined shape, cropped using the mouse.
Can anyone please help me with it?
Take a look at the compositing settings of the 2d context. These allow you to use a path or an image to perform masking on a canvas. When you fill() the path you created above with a solid fillStyle and context.globalCompositeOperation = 'destination-in', the path itself will not be drawn; only those parts of the image that are covered by the interior of the path will remain. The rest becomes alpha-transparent. When you use the 'destination-out' operation instead, you create a transparent "hole" in the canvas you draw the path to.
So when you have a canvas with the source image (the image you want to insert), a canvas with the destination image (the image you want to insert the other image into), and the path, there are three ways to do it.
a) You draw the path to the source canvas with destination-in, so you have a graphic in the correct shape on it. Then you set the composite operation back to source-over and drawImage the source canvas onto the destination canvas. This crops the image on the source canvas, so make sure to create a copy beforehand if you still need it.
b) You draw the path to the destination image with destination-out to erase the area enclosed by the path, set the composite operation to destination-atop, and then drawImage the source image onto the destination, where it will be inserted "behind" the transparent parts. This variant is non-destructive for the source canvas. Remember to set globalCompositeOperation back to source-over when you are finished, or other canvas operations might no longer do what you expect them to do.
c) Just like in b), you use destination-out to cut a hole into the destination canvas, but then you set the composite operation back to the normal source-over and draw the destination onto the source. You now have the completed image on the source canvas.
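A short sketch of variant b), where dstCtx is the destination canvas's context, path is a Path2D with the crop shape and srcCanvas holds the image to insert (all placeholder names):

dstCtx.globalCompositeOperation = 'destination-out';
dstCtx.fill(path); // erase the area enclosed by the path

dstCtx.globalCompositeOperation = 'destination-atop';
dstCtx.drawImage(srcCanvas, 0, 0); // shows through the hole, "behind" the rest

dstCtx.globalCompositeOperation = 'source-over'; // back to normal drawing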
I'm new to HTML5 and JavaScript, and I'm trying to use the canvas element to create a high(ish) quality output image.
The current purpose is to allow users to add their own images (JPEG, PNG, SVG). This works like a charm. However, when I render the image it's always a 32-bit PNG. How can I create a higher-quality picture, preferably using JavaScript?
Also, when I output the file it always keeps the same resolution as the canvas. How can I change this, again preferably using JavaScript?
Thanks in advance, guys. I looked around for a while but couldn't find the answer to this :(
If you want to create a larger image with getImageData or toDataURL then you have to:
Create an in-memory canvas that is larger than your normal canvas
Redraw your entire scene onto your in-memory canvas. You will probably want to call ctx.scale(theScale, theScale) on the in-memory context in order to scale your images, but this depends heavily on how they were created in the first place (images? canvas paths?)
Call getImageData or toDataURL on the in-memory canvas and not your normal canvas
By in-memory canvas I mean:
var inMem = document.createElement('canvas');
// The larger size
inMem.width = blah;
inMem.height = blah;
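And then, continuing the steps above (theScale and drawScene stand in for your own scale factor and redraw routine):

var inMemCtx = inMem.getContext('2d');
inMemCtx.scale(theScale, theScale); // step 2: scale first, then redraw everything
drawScene(inMemCtx);
var bigPng = inMem.toDataURL('image/png'); // step 3: export from inMem, not the normal canvas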
Well, firstly, when you draw the image to the canvas it's not a PNG quite yet; it's a raw bitmap that the canvas API operates on as you call its methods. It's only converted to a PNG for the browser to display, and that's what you get when you use toDataURL. With toDataURL you can choose the output image format; JPEG is supported along with PNG, and some browsers also support WebP. (Don't think of it as a PNG converted to another format, because it's not.)
And I'm not sure what you mean by higher quality through adding more bits per pixel: 32 bits are enough for all RGBA channels to have true-color 8-bit depth, giving you more colors than the human eye can distinguish at once. Depending on the lighting and the angle at which the user views your picture, their perception of the colors may vary, but not the quality, which really only depends on the resolution. In any case, the canvas was not designed to work with deeper colors, and that much color information isn't necessary for any kind of scene you could render on a canvas; it's only relevant for high-definition movies and games made by big studios. And even if you could use deep colors on the canvas, it would depend on the support of the user's video card and screen, which most people don't have.
If you wish to attach information that isn't directly concerned with the color of each pixel, but perhaps with potential transformations, you're better off creating your own type that carries both the imageData accepted by the canvas API, keeping its 32-bit format, and the additional information in a corresponding array.
And yes, the output image has the same resolution as the canvas does, but there are a couple of ways to resize your final composition. Just do as Simon Sarris said: create an offscreen canvas whose resolution is the final resolution you want your image to have. Then you can either:
Resize the raster image by calling drawImage with the toDataURL-generated image, making use of the resizing parameters, or
Redraw your scene there, as Simon said, which reduces quality loss if your composition contains shapes created through the canvas API and not just image files put together.
If you already know the final resolution you want, just set the canvas width and height to it from the start; the CSS width and height can be set differently so the canvas still fits in your webpage as you wish.
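For example (myCanvas is a placeholder id): the width/height attributes set the bitmap resolution that toDataURL exports, while CSS only controls the on-page display size.

var canvas = document.getElementById('myCanvas');
canvas.width = 1920;           // internal resolution, used by toDataURL
canvas.height = 1080;
canvas.style.width = '480px';  // displayed size in the page,
canvas.style.height = '270px'; // independent of the exported resolution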