I have a piece of code that I use for erasing and restoring parts of an image (for example, one with a removed background). Erasing from the main canvas is simple: the user erases a circular shape, with a line drawn between successive points.
if(removeMode) {
ctxs[index].globalCompositeOperation = 'destination-out';
ctxs[index].beginPath();
ctxs[index].arc(x, y, radius, 0, 2 * Math.PI);
ctxs[index].fill();
ctxs[index].lineWidth = 2 * radius;
ctxs[index].beginPath();
ctxs[index].moveTo(old.x, old.y);
ctxs[index].lineTo(x, y);
ctxs[index].stroke();
}
The problem is with the restoring. Currently I am able to copy parts of the original image to the main canvas but only in a rectangular shape using the getImageData() and putImageData() functions.
ctxs[index].globalCompositeOperation = 'source-out';
ctxs[0].putImageData(ctxs[1].getImageData(x-radius, y-radius, 2*radius, 2*radius), x-radius, y-radius);
Ideally I would like to clip a part of the original image canvas to the main canvas with a shape similar to the erasing feature. I have tried the clip() function but honestly I am not sure how to go about it. Here is what I initially tried to clip a part of a canvas.
ctxs[index].beginPath();
ctxs[index].arc(x, y, radius, 0, Math.PI * 2);
ctxs[index].fill();
ctxs[index].lineWidth = 2 * radius;
ctxs[index].beginPath();
ctxs[index].moveTo(old.x, old.y);
ctxs[index].lineTo(x, y);
ctxs[index].stroke();
ctxs[index].clip();
How do I copy a custom shape from a canvas to another canvas?
Thanks in advance,
Edit:
I have also thought of using a mask where I would create the mask as such (example using numpy in python):
Y, X = np.ogrid[:canvas_height, :canvas_width]
# Y, X are matrix values and x, y are coordinates of the cursor within the image
center_dist = np.sqrt((X - x)**2 + (Y-y)**2)
# create mask
mask = center_dist <= radius
# keep only the circular region of the original image
circular_img = original_img.copy()
circular_img[~mask] = 0
# combine the circular crop with the current (edited) image
new_img = np.maximum(circular_img, new_img)
Example of what I have now
Simpler solution
Every shape fits into a rectangle.
Proof: your canvas is a rectangle and already contains the shape.
As a result, you can determine the smallest possible rectangle that contains the full shape and store that region. Upon reload you will also need to know the shape's boundaries inside the stored copy, so keep that information as well.
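For instance, a minimal sketch (assuming the x, y, old and radius variables and the ctxs array from the question) that copies the bounding rectangle of one circle-plus-line stroke:
// Smallest axis-aligned rectangle containing the circle at (x, y) and
// the 2*radius-wide line from (old.x, old.y) to (x, y).
const left   = Math.min(x, old.x) - radius;
const top    = Math.min(y, old.y) - radius;
const width  = Math.abs(x - old.x) + 2 * radius;
const height = Math.abs(y - old.y) + 2 * radius;
// Copy that rectangle from the original-image canvas (ctxs[1]) back onto
// the main canvas (ctxs[0]). Note that putImageData ignores compositing.
ctxs[0].putImageData(
  ctxs[1].getImageData(left, top, width, height),
  left, top
);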
Harder, but more precise solution
You can create a structure and store the content, point-by-point (though this will not be very performant):
const data = context.getImageData(0, 0, canvas.width, canvas.height).data;
let myShape = [];
for (let x = 0; x < canvas.width; x++) {
  for (let y = 0; y < canvas.height; y++) {
    if (inShape(x, y, canvas)) {
      // store this pixel's rgba values, not the whole data array
      const i = (y * canvas.width + x) * 4;
      myShape.push({x, y, content: data.slice(i, i + 4)});
    }
  }
}
The snippet above assumes that you have properly implemented inShape.
Homogeneous shape
If all the points inside the shape are similar, then you only need to know where the boundaries of the shape were. If you have a convex polygon, for example, then you need to know where its center is and what the boundaries are. If you have a filled circle, then you only need its center and radius. The geometrical data you need largely depends on what shape you have.
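For instance, with a filled circle you can restore that region with clip(). A minimal sketch, assuming sourceCanvas is a canvas (or image) holding the untouched original:
// Restore a circular region of the original image, given only its center and radius.
function restoreCircle(ctx, sourceCanvas, cx, cy, radius) {
  ctx.save();
  ctx.beginPath();
  ctx.arc(cx, cy, radius, 0, 2 * Math.PI);
  ctx.clip();                        // limit drawing to the circle
  ctx.drawImage(sourceCanvas, 0, 0); // only the clipped part is copied
  ctx.restore();                     // drop the clip for later drawing
}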
Keep using composite operations.
"destination-out" will indeed remove the previous pixels that do overlap with the new ones.
If you use the inverse "destination-in", only the previous pixels that do overlap with the new ones are kept.
So you store your original image intact, and then use one of these modes to render it given the action you want to perform.
Here, since it seems we are in a paint-like configuration, I guess it makes more sense to erase the final result and restore from the original canvas. For this we need a third, detached canvas where we'll draw the "restoration" part on its own before drawing that back to the visible canvas:
(async () => {
// the main, visible canvas
const canvas = document.querySelector("canvas");
canvas.width = 500;
canvas.height = 250;
const ctx = canvas.getContext("2d");
// a detached canvas context to do the compositing
const detached = canvas.cloneNode().getContext("2d");
// the "source" canvas (here just an ImageBitmap)
const originalCanvas = await loadImage();
ctx.lineWidth = detached.lineWidth = 8;
// we store every drawing in its own Path2D object
const paths = [];
let down = false;
const checkbox = document.querySelector("input");
canvas.onmousedown = (evt) => {
down = true;
const newPath = new Path2D();
newPath.isEraser = !checkbox.checked;
paths.push(newPath);
};
canvas.onmouseup = (evt) => { down = false; };
canvas.onmousemove = (evt) => {
if (!down) { return; }
const {x, y} = parseMouseEvent(evt);
paths[paths.length - 1].lineTo(x, y);
redraw();
};
redraw();
function redraw() {
// clear the visible context
ctx.globalCompositeOperation = "source-over";
ctx.drawImage(originalCanvas, 0, 0);
paths.forEach((path) => {
if (path.isEraser) {
// erase the current content
ctx.globalCompositeOperation = "destination-out";
ctx.stroke(path);
}
else {
// to restore
// we do the compositing on the detached canvas
detached.globalCompositeOperation = "source-over";
detached.drawImage(originalCanvas, 0, 0);
detached.globalCompositeOperation = "destination-in";
detached.stroke(path);
// draw the result on the main context
ctx.globalCompositeOperation = "source-over";
ctx.drawImage(detached.canvas, 0, 0);
}
});
}
})().catch(console.error);
async function loadImage() {
const url = "https://picsum.photos/500/250";
const req = await fetch(url);
const blob = req.ok && await req.blob();
return createImageBitmap(blob);
}
function parseMouseEvent(evt) {
const rect = evt.target.getBoundingClientRect();
return {x: evt.clientX - rect.left, y: evt.clientY - rect.top };
}
canvas { border: 1px solid; vertical-align: top }
<label>erase/restore <input type="checkbox"></label>
<canvas></canvas>
Note that here I do create new paths every time, but you could very well use the same ones for both erasing and restoring (and even any other graphic source).
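For example, a single Path2D could be stroked in either mode at redraw time (a sketch assuming the same ctx / detached / originalCanvas setup as the snippet above):
// One shared Path2D, re-used for erasing or restoring depending on a flag.
function applyPath(path, asEraser) {
  if (asEraser) {
    ctx.globalCompositeOperation = "destination-out";
    ctx.stroke(path); // punch a hole in the visible canvas
  } else {
    detached.globalCompositeOperation = "source-over";
    detached.drawImage(originalCanvas, 0, 0);
    detached.globalCompositeOperation = "destination-in";
    detached.stroke(path); // keep only the stroked area of the original
    ctx.globalCompositeOperation = "source-over";
    ctx.drawImage(detached.canvas, 0, 0); // paste the restored area back
  }
}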
Related
I'm working on some code I'm unfamiliar with, and trying to troubleshoot a strange bug. Essentially there are different sections to the canvas, and the goal is to calculate the percentage of the section that's filled. There are two different colors (red and yellow, with specific RGB and Hex values) that could be used to draw a line in the section, used like a brush stroke. When I draw over the entire section in a single color, the % of section filled shows correctly at 100%. However, if I go back over and draw a line of the other color into the section, a small number of pixels are neither of the specific RGB values and so the % turns to 99%. When I used an image viewer and blew up the image, I could see that the bulk of the drawn line is fine, but the outer edges are gradient-like that fade into the other color.
This is the code used to draw onto the canvas:
handleMouseMove() {
const canvas = this.props.canvasInfo;
const context = canvas.getContext('2d');
if (!(this.props.canvasObject && context && this.image)) {
setCanvasError(this.props.dispatch, Constants.CANVAS_GENERIC_ERROR);
return;
}
const { isDrawing, mode, brushSize, brushColor } = this.props.canvasObject;
if (isDrawing && mode && brushSize && brushColor) {
context.strokeStyle = brushColor;
context.lineJoin = 'round';
context.lineWidth = brushSize;
context.globalCompositeOperation = 'source-over';
if (mode === Constants.ERASER_MODE) {
context.strokeStyle = Constants.CANVAS_BACKGROUND_COLOR;
}
context.beginPath();
const localPos = {
x: this.lastPointerPosition.x - this.image.x(),
y: this.lastPointerPosition.y - this.image.y(),
};
context.moveTo(localPos.x, localPos.y);
const stage = this.image.getStage();
const pos = stage.getPointerPosition();
localPos.x = pos.x - this.image.x();
localPos.y = pos.y - this.image.y();
context.lineTo(localPos.x, localPos.y);
context.closePath();
context.stroke();
this.lastPointerPosition = pos;
this.image.getLayer().draw();
}
} // close handleMouseMove()
I'm not sure why the pixels aren't all just either of the two specified colors. I saw at html 5 canvas LineTo() line color issues that this could be antialiasing, but I'm not sure how to fix it.
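One common workaround (an illustration, not the original code) is to classify each pixel to the nearest of the known brush colors instead of requiring an exact match, so antialiased edge pixels still count toward one of them:
// Sketch: classify pixels by nearest known color instead of exact match.
// PALETTE is a placeholder; substitute the actual red/yellow RGB values.
const PALETTE = [[255, 0, 0], [255, 255, 0]];
function nearestPaletteIndex(r, g, b) {
  let best = 0, bestDist = Infinity;
  PALETTE.forEach(([pr, pg, pb], i) => {
    const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
    if (d < bestDist) { bestDist = d; best = i; }
  });
  return best;
}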
I am creating a game using the HTML5 Canvas element, and as one of the visual effects I would like to create a glow (like a light) effect. Previously for glow effects I found solutions involving creating shadows of shapes, but these require a solid shape or object to cast the shadow. What I am looking for is a way to create something like an ambient light glow with a source location but no object at the position.
Something I have thought of was to define a centerpoint x and y and create hundreds of concentric circles, each 1px larger than the last and each with a very low opacity, so that together they create a solid center and a transparent edge. However, this is very computationally heavy and does not seem elegant at all, as the resulting glow looks awkward.
While this is all that I am asking of and I would be more than happy to stop here, bonus points if your solution is A) computationally light, B) modifiable to create a focused direction of light, or even better, C) if there was a way to create an "inverted" light system in which the entire screen is darkened by a mask and the shade is lifted where there is light.
I have done several searches, but none have turned up any particularly illuminating results.
So I'm not quite sure what you want, but I hope the following snippet will help.
Instead of creating a lot of concentric circles, create one radialGradient.
Then you can combine this radial gradient with some blending, and even filters to modify the effect as you wish.
var img = new Image();
img.onload = init;
img.src = "https://dev.w3.org/SVG/tools/svgweb/samples/svg-files/car.svg";
var ctx = c.getContext('2d');
var gradCtx = c.cloneNode().getContext('2d');
var w, h;
var ratio;
function init() {
w = c.width = gradCtx.canvas.width = img.width;
h = c.height = gradCtx.canvas.height = img.height;
updateGradient();
draw(w / 2, h / 2);
c.onmousemove = throttle(handleMouseMove);
}
function updateGradient() {
var grad = gradCtx.createRadialGradient(w / 2, h / 2, w / 8, w / 2, h / 2, 0);
grad.addColorStop(0, 'transparent');
grad.addColorStop(1, 'white');
gradCtx.fillStyle = grad;
gradCtx.filter = "blur(5px)";
gradCtx.fillRect(0, 0, w, h);
}
function handleMouseMove(evt) {
var rect = c.getBoundingClientRect();
var x = evt.clientX - rect.left;
var y = evt.clientY - rect.top;
draw(x, y);
}
function draw(x, y) {
ctx.clearRect(0, 0, w, h);
ctx.globalCompositeOperation = 'source-over';
ctx.drawImage(img, 0, 0);
ctx.globalCompositeOperation = 'destination-in';
ctx.drawImage(gradCtx.canvas, x - w / 2, y - h / 2);
ctx.globalCompositeOperation = 'lighten';
ctx.fillRect(0, 0, w, h);
}
function throttle(callback) {
var active = false; // a simple flag
var evt; // to keep track of the last event
var handler = function() { // fired only when screen has refreshed
active = false; // release our flag
callback(evt);
}
return function handleEvent(e) { // the actual event handler
evt = e; // save our event at each call
if (!active) { // only if we weren't already doing it
active = true; // raise the flag
requestAnimationFrame(handler); // wait for next screen refresh
};
}
}
<canvas id="c"></canvas>
How to draw outer and inner border around any canvas shape?
I'm drawing several stroke-only shapes on an html canvas, and I would like to draw an inner and outer border around them.
draft example:
Is there a generic way to do it for any shape (assuming it's a closed, stroke-only shape)?
Two methods
There is no built-in way to do this and there are two programmatic ways that I use. The first is complicated and involves expanding and contracting the path, then drawing along that path. This works for most situations but will fail in complex situations, and the solution has many variables and options to account for these complications and how to handle them.
The better of the two
The second and easiest way that I present below is by using the ctx.globalCompositeOperation setting to mask out what you want drawn or not. As the stroke is drawn along the center and the fill fills up to the center you can draw the stroke at twice the desired width and then either mask in or mask out the inner or outer part.
This does become problematic when you start to create very complex images as the masking (Global Composite Operation) will interfere with what has already been drawn.
To simplify the process you can create a second canvas, the same size as the original, as a scratch space. You can then draw the shape on the scratch canvas, do the masking there, and then draw the scratch canvas onto the working one.
Though this method is not as fast as computing the expanded or shrunk path, it does not suffer from the ambiguities faced by moving points in the path. Nor does this method create the lines with the correct line join or mitering for the inside or outside edges; for that you must use the other method. For most purposes the masking method is a good solution.
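Distilled to its core, the masking idea looks like this (a sketch assuming a scratch canvas named work with a 2d context work.ctx, plus strokeShape/fillShape helpers like those in the demo below):
// Outer border only, built on the scratch canvas.
work.ctx.lineWidth = borderWidth * 2;   // stroke straddles the path, so double the width
work.ctx.strokeStyle = "black";
strokeShape(work.ctx);                  // full-width outline
work.ctx.globalCompositeOperation = "destination-out";
fillShape(work.ctx);                    // remove the inner half, leaving the outer border
// (use "destination-in" instead to keep the inner half)
ctx.drawImage(work, 0, 0);              // composite onto the visible canvas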
Below is a demo of the masking method drawing an inner and an outer path. If you modify the mask by drawing a stroke along with the fill, you can also offset the outline or inline by a number of pixels. I have left that for you (hint: add a stroke and set the line width to twice the offset distance when drawing the mask).
var demo = function(){
/** fullScreenCanvas.js begin **/
var canvas = ( function () {
canvas = document.getElementById("canv");
if(canvas !== null){
document.body.removeChild(canvas);
}
// creates a blank image with 2d context
canvas = document.createElement("canvas");
canvas.id = "canv";
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
canvas.style.position = "absolute";
canvas.style.top = "0px";
canvas.style.left = "0px";
canvas.style.zIndex = 1000;
canvas.ctx = canvas.getContext("2d");
document.body.appendChild(canvas);
return canvas;
})();
var ctx = canvas.ctx;
/** fullScreenCanvas.js end **/
/** CreateImage.js begin **/
// creates a blank image with 2d context
var createImage = function(w,h){
var image = document.createElement("canvas");
image.width = w;
image.height =h;
image.ctx = image.getContext("2d");
return image;
}
/** CreateImage.js end **/
// define a shape for demo
var shape = [0.1,0.1,0.9,0.1,0.5,0.5,0.8,0.9,0.1,0.9];
// draws the shape as a stroke
var strokeShape = function (ctx) {
var w, h, i;
w = canvas.width;
h = canvas.height;
ctx.beginPath();
ctx.moveTo(shape[0] *w, shape[1] *h)
for (i = 2; i < shape.length; i += 2) {
ctx.lineTo(shape[i] * w, shape[i + 1] * h);
}
ctx.closePath();
ctx.stroke();
}
// draws the shape as filled
var fillShape = function (ctx) {
var w, h, i;
w = canvas.width;
h = canvas.height;
ctx.beginPath();
ctx.moveTo(shape[0] * w,shape[1] * h)
for (i = 2; i < shape.length; i += 2) {
ctx.lineTo(shape[i]*w,shape[i+1]*h);
}
ctx.closePath();
ctx.fill();
}
var drawInOutStroke = function(width,style,where){
// clear the workspace
workCtx.ctx.globalCompositeOperation ="source-over";
workCtx.ctx.clearRect(0, 0, workCtx.width, workCtx.height);
// set the width to double
workCtx.ctx.lineWidth = width*2;
workCtx.ctx.strokeStyle = style;
// fill colour does not matter here as its not seen
workCtx.ctx.fillStyle = "white";
// can use any join type
workCtx.ctx.lineJoin = "round";
// draw the shape outline at double width
strokeShape(workCtx.ctx);
// set comp to in.
// in means leave only pixel that are both in the source and destination
if (where.toLowerCase() === "in") {
workCtx.ctx.globalCompositeOperation ="destination-in";
} else {
// out means only pixels on the destination that are not part of the source
workCtx.ctx.globalCompositeOperation ="destination-out";
}
fillShape(workCtx.ctx);
ctx.drawImage(workCtx, 0, 0);
}
// clear in case of resize
ctx.globalCompositeOperation ="source-over";
ctx.clearRect(0,0,canvas.width,canvas.height);
// create the workspace canvas
var workCtx = createImage(canvas.width, canvas.height);
// draw the outer stroke
drawInOutStroke((canvas.width + canvas.height) / 45, "black", "out");
// draw the inner stroke
drawInOutStroke((canvas.width + canvas.height) / 45, "red", "in");
// draw the shape outline just to highlight the effect
ctx.strokeStyle = "white";
ctx.lineJoin = "round";
ctx.lineWidth = (canvas.width + canvas.height) / 140;
strokeShape(ctx);
};
// run the demo
demo();
// incase fullscreen redraw it all
window.addEventListener("resize",demo)
If you use the rotation plugin in CamanJS there is an issue when you try to revert changes. Caman only handles reverting correctly when you crop or resize your image, not when you rotate it. When you revert a rotated image, it reloads distorted, because the revert doesn't take into account that the canvas has been rotated and changed size, and the canvas's imageData.data is different now. I think I fixed it by looking at how the resize is implemented. Basically what I did (and what the resize does too) is:
1. Create a canvas in the initial state
2. Update its pixelData from the initial state
3. Create a new canvas
4. Rotate it with the initial image
5. Get the ImageData and rerender it
So, what I added: I needed to know by how many degrees the image was rotated, so I can get the correct imageData when rotating the new canvas (step 4).
this.angle=0; //added it in the constructor
I also added a new boolean in the constructor to tell me if the canvas was rotated:
this.rotated = false;
In the rotated plugin:
Caman.Plugin.register("rotate", function(degrees) {
//....
//....
//....
this.angle += degrees;
this.rotated = true;
return this.replaceCanvas(canvas);
});
and on the originalVisiblePixels prototype:
else if (this.rotated){
canvas = document.createElement('canvas');//Canvas for initial state
canvas.width = this.originalWidth; //give it the original width
canvas.height = this.originalHeight; //and original height
ctx = canvas.getContext('2d');
imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
pixelData = imageData.data;//get the pixelData (length equal to those of initial canvas
_ref = this.originalPixelData; //use it as a reference array
for (i = _i = 0, _len = _ref.length; _i < _len; i = ++_i) {
pixel = _ref[i];
pixelData[i] = pixel; //give pixelData the initial pixels
}
ctx.putImageData(imageData, 0, 0); //put it back on our canvas
rotatedCanvas = document.createElement('canvas'); //canvas to rotate from initial
rotatedCtx = rotatedCanvas.getContext('2d');
rotatedCanvas.width = this.canvas.width;//Our canvas was already rotated so it has been replaced. Caman's canvas attribute is allready rotated, So use that width
rotatedCanvas.height = this.canvas.height; //the same
x = rotatedCanvas.width / 2; //for translating
y = rotatedCanvas.width / 2; //same
rotatedCtx.save();
rotatedCtx.translate(x, y);
rotatedCtx.rotate(this.angle * Math.PI / 180); //rotation based on the total angle
rotatedCtx.drawImage(canvas, -canvas.width / 2, -canvas.height / 2, canvas.width, canvas.height); //put the image back on canvas
rotatedCtx.restore(); //restore it
pixelData = rotatedCtx.getImageData(0, 0, rotatedCanvas.width, rotatedCanvas.height).data; //get the pixelData back
width = rotatedCanvas.width; //used for returning the pixels in revert function
}
You also need to add some resets in the reset prototype function. Basically, reset angle and rotated:
Caman.prototype.reset = function() {
//....
//....
this.angle = 0;
this.rotated = false;
};
and that's it.
I have used it and it works so far. What do you think? Hope it helps.
Thanks for this, it worked after one slight change.
In the else if statement inside the originalVisiblePixels prototype I changed:
x = rotatedCanvas.width / 2; //for translating
y = rotatedCanvas.width / 2; //same
to:
x = rotatedCanvas.width / 2; //for translating
y = rotatedCanvas.height / 2; //same
Before this change my images were being cut off.
Is there any way to check if a selected (x,y) point of a PNG image is transparent?
Building on Jeff's answer, your first step would be to create a canvas representation of your PNG. The following creates an off-screen canvas that is the same width and height as your image and has the image drawn on it.
var img = document.getElementById('my-image');
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0, img.width, img.height);
After that, when a user clicks, use event.offsetX and event.offsetY to get the position. This can then be used to acquire the pixel:
var pixelData = canvas.getContext('2d').getImageData(event.offsetX, event.offsetY, 1, 1).data;
Because you are only grabbing one pixel, pixelData is a four entry array containing the pixel's R, G, B, and A values. For alpha, anything less than 255 represents some level of transparency with 0 being fully transparent.
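For example, a quick check on that pixel could look like this (a small sketch using the pixelData array from above):
const alpha = pixelData[3];
if (alpha === 0) {
  console.log("fully transparent");
} else if (alpha < 255) {
  console.log("partially transparent");
} else {
  console.log("opaque");
}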
Here is a jsFiddle example: http://jsfiddle.net/thirtydot/9SEMf/869/ I used jQuery for convenience in all of this, but it is by no means required.
Note: getImageData falls under the browser's same-origin policy to prevent data leaks, meaning this technique will fail if you dirty the canvas with an image from another domain or (I believe, but some browsers may have solved this) SVG from any domain. This protects against cases where a site serves up a custom image asset for a logged in user and an attacker wants to read the image to get information. You can solve the problem by either serving the image from the same server or implementing Cross-origin resource sharing.
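If the image host sends the appropriate CORS headers, you can also opt in from the client side before drawing (a sketch reusing the canvas from above; the URL is hypothetical and the server must still send Access-Control-Allow-Origin):
const img = new Image();
img.crossOrigin = "anonymous"; // request the image with CORS so the canvas is not tainted
img.onload = () => {
  canvas.getContext('2d').drawImage(img, 0, 0);
  // getImageData() now works because the canvas stayed clean
};
img.src = "https://example.com/image.png"; // hypothetical cross-origin URL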
Canvas would be a great way to do this, as #pst said above. Check out this answer for a good example:
getPixel from HTML Canvas?
Some code that would serve you specifically as well:
var imgd = context.getImageData(x, y, width, height);
var pix = imgd.data;
for (var i = 0, n = pix.length; i < n; i += 4) {
console.log(pix[i + 3]); // alpha value of each pixel
}
This will go row by row, so you'd need to convert that into an x,y and either convert the for loop to a direct check or run a conditional inside.
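For reference, converting the flat index back into coordinates could look like this (a sketch using the imgd from the snippet above):
var pixelIndex = i / 4;                          // 4 entries (r, g, b, a) per pixel
var px = pixelIndex % imgd.width;                // column within the requested rect
var py = Math.floor(pixelIndex / imgd.width);    // row within the requested rect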
Reading your question again, it looks like you want to be able to get the point that the person clicks on. This can be done pretty easily with jQuery's click event. Just run the above code inside a click handler as such:
$('el').click(function (e) {
  console.log(e.clientX, e.clientY);
});
Those should grab your x and y values.
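Note that clientX/clientY are viewport coordinates; to read the pixel under the cursor you typically subtract the canvas position first. A sketch (assuming the canvas element and its 2d context are available as canvas and context):
$(canvas).click(function (e) {
  var rect = canvas.getBoundingClientRect();
  var x = e.clientX - rect.left;
  var y = e.clientY - rect.top;
  var pixel = context.getImageData(x, y, 1, 1).data;
  console.log('alpha at click:', pixel[3]); // 0 means fully transparent
});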
The two previous answers demonstrate how to use Canvas and ImageData. I would like to propose an answer with runnable example and using an image processing framework, so you don't need to handle the pixel data manually.
MarvinJ provides the method image.getAlphaComponent(x,y), which simply returns the transparency value for the pixel at the (x,y) coordinate. If this value is 0, the pixel is totally transparent, values between 1 and 254 are intermediate transparency levels, and 255 is fully opaque.
For demonstration I've used the image below (300x300) with a transparent background, checking the pixels at coordinates (0,0) and (150,150).
Console output:
(0,0): TRANSPARENT
(150,150): NOT_TRANSPARENT
image = new MarvinImage();
image.load("https://i.imgur.com/eLZVbQG.png", imageLoaded);
function imageLoaded(){
console.log("(0,0): "+(image.getAlphaComponent(0,0) > 0 ? "NOT_TRANSPARENT" : "TRANSPARENT"));
console.log("(150,150): "+(image.getAlphaComponent(150,150) > 0 ? "NOT_TRANSPARENT" : "TRANSPARENT"));
}
<script src="https://www.marvinj.org/releases/marvinj-0.7.js"></script>
Building on Brian Nickel's answer, this draws only the wanted single pixel of the source image onto a 1×1 pixel canvas, which is more efficient than drawing the entire image just to read a single pixel:
function getPixel(img, x, y) {
  let canvas = document.createElement('canvas');
  canvas.width = 1;
  canvas.height = 1;
  // draw only the single wanted source pixel onto the 1x1 canvas
  canvas.getContext('2d').drawImage(img, x, y, 1, 1, 0, 0, 1, 1);
  let pixelData = canvas.getContext('2d').getImageData(0, 0, 1, 1).data;
  return pixelData;
}
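Usage could then be as simple as (a hypothetical example):
// is the pixel at (10, 20) fully transparent?
const transparent = getPixel(img, 10, 20)[3] === 0;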
Using i << 2 to advance four array entries (r, g, b, a) per pixel:
const pixels = context.getImageData(x, y, width, height).data;
for (let i = 0, dx = 0; dx < pixels.length; i++, dx = i << 2) {
  if (pixels[dx + 3] <= 8) { console.log("transparent pixel index = " + i); }
}
Here's a consolidation of a few answers into a runnable snippet that lets you upload a file, hover to preview the RGB value of each pixel, then click to put the RGB in a div.
Pertinent to the original question, the last value (alpha) is the transparency. 0 is fully transparent and 255 is fully opaque.
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
const input = document
.querySelector('input[type="file"]');
input.addEventListener("change", e => {
const image = new Image();
image.addEventListener("load", e => {
const {width, height} = image;
canvas.width = width;
canvas.height = height;
ctx.drawImage(image, 0, 0);
const {data} = ctx.getImageData(
0, 0, width, height
);
const rgb = (x, y) => {
const i = (x + y * width) * 4;
return data.slice(i, i + 4).join(", ");
};
canvas.addEventListener("mousemove", event => {
const {offsetX: x, offsetY: y} = event;
console.log(rgb(x, y));
});
canvas.addEventListener("click", event => {
const {offsetX: x, offsetY: y} = event;
document.querySelector("div")
.textContent = rgb(x, y);
});
});
image.addEventListener("error", () =>
console.error("failed")
);
image.src = URL.createObjectURL(e.target.files[0]);
});
.as-console-wrapper {
height: 21px !important;
}
<div>
Upload image and mouseover to preview RGB. Click to select a value.
</div>
<form>
<input type="file">
</form>
<canvas></canvas>
References:
HTML5 Canvas - How to get adjacent pixels position from the linearized imagedata Uint8ClampedArray?
How to upload image into HTML5 canvas