I have already cropped an image in an irregular shape. I need to insert an image into the cropped part. I have used HTML5 and JavaScript (kinetic.js) to do this (see: http://imgur.com/Lyt3j). I have done the area plotting. I don't want predefined shapes like rect, poly etc.; I need a user-defined shape that is cropped using the mouse.
Can anyone please help me with it?
Take a look at the compositing settings of the 2d context. These allow you to use a path or an image to perform masking on a canvas. When you fill() the path you created above with a solid fillStyle and context.globalCompositeOperation = 'destination-in', the path itself will not be drawn and only those parts of the image will remain which are covered by the interior of the path. The rest will be alpha-transparent. When you use the 'destination-out' operation instead, you create a transparent "hole" in the canvas you draw the path to.
So when you have a canvas with the source image (the image you want to insert), a canvas with the destination image (the image you want to insert the other image into) and the path, there are three ways to do it.
a) you draw the path to the source canvas with 'destination-in', so you have a graphic in the correct shape on it. Then you set the composite operation back to 'source-over' and drawImage the source canvas onto the destination canvas. This will crop the image on the source canvas, so make sure to create a copy beforehand if you still need it.
b) you draw the path to the destination image with destination-out to erase the area enclosed by the path, set the composite operation to destination-atop and then drawImage the source image to the destination image, which will then be inserted "behind" the transparent parts of the destination. This variant is non-destructive for the source canvas. Remember to set the globalCompositeOperation back to source-over when you are finished, or other canvas operations might no longer do what you expect them to do.
c) Just like in b) you use destination-out to cut a hole into the destination canvas, but then you set the composite operation to the normal setting source-over and draw the destination onto the source. You now have the completed image on the source canvas.
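For example, a minimal sketch of option a), assuming sourceCanvas already holds the image to insert, destinationCanvas holds the background, and points is the outline collected from the user's mouse (all three names are placeholders):

var srcCtx = sourceCanvas.getContext('2d');
var dstCtx = destinationCanvas.getContext('2d');

// fill the user-drawn path; 'destination-in' keeps only the image
// pixels that lie inside the path and clears everything else
srcCtx.globalCompositeOperation = 'destination-in';
srcCtx.beginPath();
srcCtx.moveTo(points[0].x, points[0].y);
for (var i = 1; i < points.length; i++) {
  srcCtx.lineTo(points[i].x, points[i].y);
}
srcCtx.closePath();
srcCtx.fill();
srcCtx.globalCompositeOperation = 'source-over';  // restore the default

// copy the cropped result onto the destination canvas
dstCtx.drawImage(sourceCanvas, 0, 0);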
I am creating a stamp maker in HTML, jQuery and JavaScript. In the editor of my application an image is added to the HTML canvas (simple), and so is the text: I just create a new line element and then append it to the canvas. The problem is that I want my image to be behind the text. I googled a lot and also searched Stack Overflow, and the solution I found was to create multiple canvases, but in the end I have to download the canvas to a file for the user, and that is the problem: I want to export the whole canvas, with the text and the second image, together. If I created a separate canvas for the text and another for the image and gave the image canvas a lower z-index it would be fine, but there would still be only one canvas that gets exported as an image.
Link to multiple layers in canvas html5 - canvas element - Multiple layers
I am hoping that we can come up with an idea of how to download both canvases as one image, or find a method to move the image to the back of the canvas.
Any answer would be appreciated.
If you store the text, second image and first image in variables, you can just draw them on the canvas in the order you prefer. This means that, whenever there is some change in the image or text, you should clear the canvas and redraw everything.
Also, you may be interested in Compositing, since it allows you to decide where a new object is drawn in relation to the existing drawing (on top, behind, etc.)
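As a rough sketch of that redraw-in-order approach (the element id, variable names and coordinates are only illustrative), everything ends up on one canvas, so a single toDataURL() call exports the image and the text together:

var canvas = document.getElementById('editor');   // hypothetical canvas id
var ctx = canvas.getContext('2d');

function redraw(backgroundImage, stampImage, text) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(backgroundImage, 0, 0);  // drawn first, so it stays in the back
  ctx.drawImage(stampImage, 50, 50);     // second image drawn over the background
  ctx.font = '20px sans-serif';
  ctx.fillText(text, 60, 80);            // text drawn last ends up in front
}

// one canvas means one export for the user to download
var dataUrl = canvas.toDataURL('image/png');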
I have a div with a background image, and 2 images (c1 and c2) on this background. c1 and c2 use the Raphael JS class.
I would like to download a png image (or jpg) with my background image and c1 and c2.
I tried to use Canvas2Image on my main div, but I don't see c1 and c2 when I download the image.
Do you have an idea? A class to take a screenshot of a div with all the components on it?
Thank you all,
Why the "images" are missing
RaphaelJS draws with SVG so your c1,c2 drawings are not actually images -- they are SVG.
The problem is that SVG drawn onto an html5 canvas triggers a security restriction that in turn stops you from exporting the canvas into an image (context.toDataURL is disabled when the canvas is tainted).
This means Canvas2Image cannot display your c1,c2 image-svgs.
A workaround
First, draw your background on your main canvas.
context.drawImage(bkImage,0,0);
Second, convert your RaphaelJS drawings into real images (not SVG) and draw them on the main canvas.
1. Export one of your RaphaelJS drawings (either c1 or c2) as SVG. One library that exports to SVG is Raphael.Export.
2. Convert that SVG (from #1) to html5 Canvas drawing commands. (See #3 for one way to convert SVG to canvas commands.)
3. Draw the commands (from #2) onto a separate html5 canvas. One library that both converts SVG to Canvas commands (#2) and also draws them to Canvas (#3) is canvg.
4. Create an Image object from the separate canvas (from #3). You can use the native canvas command canvas.toDataURL() to create a data url that can be used as the image.src for a new Image object.
   var c1 = new Image();
   c1.src = secondCanvas.toDataURL();
5. Draw that real image onto your main canvas.
   context.drawImage(c1,0,0);
Repeat #1-5 with c2.
Finally, now that you have security compliant images (not SVG), you can directly export your canvas content to an image (again, with context.toDataURL). You won't even need Canvas2Image if your div only contains the background image plus c1,c2.
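Put together, steps 1-5 for c1 might look roughly like this (paper is your Raphael paper, paper.toSVG() is the call Raphael.Export adds, and canvg(canvas, svgString) is canvg's classic entry point; verify both signatures against the versions you actually use):

// 1. export the Raphael drawing as an SVG string (Raphael.Export)
var svgString = paper.toSVG();

// 2 + 3. let canvg parse the SVG and render it onto a separate canvas
var tempCanvas = document.createElement('canvas');
tempCanvas.width = context.canvas.width;
tempCanvas.height = context.canvas.height;
canvg(tempCanvas, svgString);

// 4. turn that canvas into a real, security-compliant image
var c1 = new Image();
c1.onload = function () {
  // 5. draw the real image onto the main canvas, over the background
  context.drawImage(c1, 0, 0);
};
c1.src = tempCanvas.toDataURL();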
A possible simpler alternative
SVG and Canvas drawing commands are extremely similar. If c1,c2 don't rely on non-drawing RaphaelJS methods you could fairly easily draw c1,c2 using native Canvas drawing commands (bypassing Raphael entirely).
A future alternative
Both SVG and Canvas have remarkably similar drawing abilities.
But SVG adds full DOM capabilities -- full html element events & properties. And Canvas focuses on rendering speed -- rendering hundreds of shapes at 60 frames per second.
The organization that recommends how browsers work (W3C) receives RFPs about making SVG & Canvas drawings compatible.
However, SVG-Canvas interactivity is not currently available.
Question:
Using WebGL, what is the most efficient way I can draw a bitmap on the offscreen buffer given that a raster operation must be performed for every pixel/fragment? Raster operations may vary between draw calls.
Context:
As part of the application I am developing, I have to develop a function used to copy a bitmap onto a destination buffer. The bitmap can be smaller than the canvas and is placed using offsets for the X and Y co-ordinates. Also, given a ROP code, a raster operation can be performed when copying the bitmap. The full list of ROPs can be found here.
In OpenGL ES 1.x, there was a function called glLogicOp() which would perform the required binary operations when drawing. However, since WebGL is based on OpenGL ES 2.0, this function does not exist. I cannot yet think of a straightforward way to solve this problem. I have a few ideas, but I am not yet confident they will offer optimal performance.
Possible solutions:
Perform the ROPs in JavaScript by keeping a typed array with the colour data for the offscreen buffer. This, however, would rely on the CPU to compute the ROP for each pixel. On draw, I would upload the offscreen data to a texture and draw it on the offscreen buffer using a simple shader.
Use the ping-pong texture technique described here. Having 2 textures I could sample from one and write to the other; however, I would be forced to draw the whole offscreen buffer on every draw call, even when the bitmap to be drawn covers only a small fraction of the canvas. A possible optimization would be to switch the textures only when an ROP dependent on the destination pixel is used (see the sketch after this list).
Use readPixels() to read data from the offscreen buffer before a draw call requiring an ROP dependent on the destination pixel. The data of the pixels on which the bitmap will be drawn is read, uploaded to a texture and passed to the shader together with the bitmap itself to be used for sampling. The readPixels function however is said to be one of the slowest WebGL functions since it requires synchronization between the CPU and GPU.
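To make the ping-pong option more concrete, here is a fragment of what the per-draw-call logic could look like (ropProgram, drawFullScreenQuad and the two texture/framebuffer pairs are placeholders and assume the usual WebGL setup already exists; the ROP itself would be evaluated in the fragment shader):

var ping = { tex: texA, fbo: fboA };   // holds the current offscreen image
var pong = { tex: texB, fbo: fboB };   // render target for the next draw

function drawBitmapWithRop(bitmapTex, offsetX, offsetY, ropCode) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pong.fbo);
  gl.useProgram(ropProgram);

  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, ping.tex);      // previous offscreen state
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, bitmapTex);     // the bitmap to copy

  gl.uniform1i(gl.getUniformLocation(ropProgram, 'uDest'), 0);
  gl.uniform1i(gl.getUniformLocation(ropProgram, 'uSrc'), 1);
  gl.uniform1i(gl.getUniformLocation(ropProgram, 'uRop'), ropCode);
  gl.uniform2f(gl.getUniformLocation(ropProgram, 'uOffset'), offsetX, offsetY);

  drawFullScreenQuad();   // the shader combines uDest and uSrc per fragment

  var tmp = ping; ping = pong; pong = tmp;   // swap roles for the next call
}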
EDIT
Using the ping-pong texture technique, I think I will have to draw the whole screen every time since, for every drawing, in order to sample the destination pixel correctly I would need a representation of the current offscreen image (the screen before drawing). Unless I draw the whole screen every time, each texture would only contain alternate drawings rather than the real offscreen image.
It is easier to show this visually but consider this example:
1) I draw a rectangle on the left half of the screen with the first draw call having tex1 as sampling source and tex2 as drawing destination.
2) I draw a rectangle on the right half of the screen with the second draw call having tex2 as sampling source and tex1 as drawing destination.
3) When I try to draw a horizontal line across the whole screen on the next draw call, since tex1 (source) does not contain the rectangle drawn in the first draw call, sampling from the left part of the screen would result in wrong colors of destination pixels.
I'm working on a 2d isometric map with HTML5 and canvas.
So, I can have a map with an infinite number of tiles, but I only display the tiles around the player, and that is my problem.
When I retrieve the image with toDataURL(), I only retrieve the small part of the map that I displayed around the player...
I need to retrieve the whole map, even the part that is not displayed. Is that possible?
No, it's not possible. toDataURL does give you the whole canvas, but the whole canvas is simply what you see on the screen. The canvas does not know about anything outside of it (anything drawn outside the canvas is clipped and forgotten).
If you need to get the complete map you will need to create a big enough canvas, draw the complete map onto it, and then extract the image. There is no way around it (using canvas).
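A rough sketch of that approach (mapWidthInTiles, mapHeightInTiles, tileWidth, tileHeight and drawTile are placeholders for your own map code):

// off-screen canvas big enough for the entire map
var fullCanvas = document.createElement('canvas');
fullCanvas.width = mapWidthInTiles * tileWidth;
fullCanvas.height = mapHeightInTiles * tileHeight;
var fullCtx = fullCanvas.getContext('2d');

// draw every tile, not just the ones around the player
for (var ty = 0; ty < mapHeightInTiles; ty++) {
  for (var tx = 0; tx < mapWidthInTiles; tx++) {
    drawTile(fullCtx, tx, ty);
  }
}

// the data url now contains the whole map
var mapImage = fullCanvas.toDataURL('image/png');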
I wonder if there is any way I can get the pixel data for the currently drawn path in the canvas tag.
I can calculate the pixel data on my own when drawing simple shapes like a square or a line, but things get messy with more complicated shapes like an ellipse or even a simple circle.
The reason I'm asking this is because I'm working on a web application which involves sending canvas pixel data to the server whenever I add a path to the canvas. The server needs to keep its own copy of the entire canvas, and I really don't want to send the ENTIRE canvas image on every single change, but only the delta, for efficiency reasons...
Thanks.
Step 1: Draw the paths to a second canvas of the same size as your original; let's call it the 'path canvas'.
Step 2: Set the globalCompositeOperation of the path canvas to 'destination-in'.
Step 3: Draw your original canvas onto the path canvas.
Step 4: Loop through all the pixels of the path canvas and store the pixels that are not transparent in whatever format you're sending them to the server.
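A sketch of those four steps, assuming originalCanvas is your main canvas and drawPaths(ctx) replays the newly added path (both names are placeholders):

// step 1: draw the new path on a second canvas of the same size
var pathCanvas = document.createElement('canvas');
pathCanvas.width = originalCanvas.width;
pathCanvas.height = originalCanvas.height;
var pathCtx = pathCanvas.getContext('2d');
drawPaths(pathCtx);

// steps 2 + 3: 'destination-in' keeps the path pixels only where the
// original canvas content is opaque; everything else becomes transparent
pathCtx.globalCompositeOperation = 'destination-in';
pathCtx.drawImage(originalCanvas, 0, 0);

// step 4: collect position and colour of every non-transparent pixel as the delta
var data = pathCtx.getImageData(0, 0, pathCanvas.width, pathCanvas.height).data;
var delta = [];
for (var i = 0; i < data.length; i += 4) {
  if (data[i + 3] !== 0) {                 // alpha > 0
    var p = i / 4;
    delta.push({
      x: p % pathCanvas.width,
      y: Math.floor(p / pathCanvas.width),
      r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3]
    });
  }
}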