Trying to manipulate alpha on a canvas without get/putImageData? - javascript

The situation is basically this: I draw several moving primitive shapes with a basic linear gradient so they overlap on a transparent blank canvas. Where they overlap, the alpha channel value changes (i.e., they bleed).
The final stage is rounding the alpha for each individual pixel, so it's either 0 or 255 depending on which it's closer to.
With imageData it's easy to do -
var ctxImage = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
var ctxData = ctxImage.data;
for (var i = 0; i < ctxData.length; i += 4) {
    ctxData[i + 3] = ctxData[i + 3] > 128 ? 255 : 0;
    // or ctxData[i + 3] = Math.round(ctxData[i + 3] / 255) * 255;
}
ctx.putImageData(ctxImage, 0, 0);
As getImageData is very expensive in CPU time, I was hoping to work out a solution that used globalCompositeOperation, but I just can't seem to find any way to get it to work. Any ideas?

There's no alternative way of snapping the alpha to 0 or 255.
Compositing only lets either the source or the destination pixel survive; it won't snap the alpha as you describe.
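If the getImageData pass has to stay, one way to soften its cost - a sketch only, not part of the original answer - is to read back just the bounding box of the drawn shapes instead of the whole canvas. The minX/minY/maxX/maxY variables below are hypothetical and would have to be tracked while drawing:
// hypothetical bounding box of the drawn shapes, tracked while drawing them
var w = maxX - minX,
    h = maxY - minY;
var image = ctx.getImageData(minX, minY, w, h);
var data = image.data;
for (var i = 3; i < data.length; i += 4) {
    data[i] = data[i] >= 128 ? 255 : 0; // snap alpha to 0 or 255
}
ctx.putImageData(image, minX, minY);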

Related

D3.js performance issue, can scaleDivergingSymlog return RGB values instead of string?

I'm creating a diverging scale using D3.js with:
scaleDiverging = d3.scaleDivergingSymlog(d3.interpolateBrBG)
.domain([-0.1, 0, 0.1])
I can then call scaleDiverging(0.05) and I'll get back a value of 'rgb(87, 175, 165)'. This works great except that I need to fill a canvas image with RGB integers. So I have to parse those strings returned from scaleDiverging:
const parseRgb = color =>
    color.substring(4, color.length - 1).split(', ')
        .map(n => parseInt(n))
Easy solution, but this is the problematic part where my application spends most of its time, and it's the reason my interactive graph is not smooth. You move a slider and two seconds later the image changes. If I replace parseRgb with a simple stub that just returns an array of numbers (ignoring accuracy), the graph is smooth! I need the array of RGB values for Canvas's putImageData function:
const context = canvas.node().getContext('2d'),
      imageData = context.getImageData(0, 0, width, height),
      data = imageData.data
for (let i = 0; i < width * height * 4; i += 4) {
    const color = parseRgb(scaleDiverging(calcVal(i)))
    data[i] = color[0]
    data[i + 1] = color[1]
    data[i + 2] = color[2]
    data[i + 3] = 255
}
context.putImageData(imageData, 0, 0)
Can I get the scaleDivergingSymlog functionality from D3.js where I don't have to parse the resulting string? I know I can look up the code and implement it myself, but, in addition to avoiding that work, I'd like to know how to properly use D3.js in the future for interactive graphs.
Someone else on Stack Overflow had a similar issue and the only answer was to use d3.color. That is essentially the same as my parseRgb function above, but has even worse performance :(.
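One direction worth sketching - this is not an answer from the thread, and the table size is an assumption - is to quantize the fixed [-0.1, 0.1] domain into a lookup table, so the scale and the string parsing run once per level instead of once per pixel:
const LEVELS = 1024; // table resolution (assumption, tune as needed)
const lut = new Array(LEVELS);
for (let j = 0; j < LEVELS; j++) {
    // map the table index back into the scale's domain [-0.1, 0.1]
    const v = -0.1 + (0.2 * j) / (LEVELS - 1);
    lut[j] = parseRgb(scaleDiverging(v));
}
// inside the pixel loop, replace parseRgb(scaleDiverging(calcVal(i))) with:
const idx = Math.round(((calcVal(i) + 0.1) / 0.2) * (LEVELS - 1));
const color = lut[Math.min(LEVELS - 1, Math.max(0, idx))];
Note that the symlog scale is non-linear, so a linear quantization of its input trades some precision near zero for speed.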

Mirroring right half of webcam image

I saw that you have helped David with his mirroring canvas problem before (Canvas - flip half the image).
I have a similar problem and hope that maybe you could help me.
I want to apply the same mirror effect on my webcam-canvas, but instead of the left side, I want to take the RIGHT half of the image, flip it and apply it to the LEFT.
This is the code you posted for David. It also works for my webcam canvas. Now I've tried to change it so that it works for the other side, but unfortunately I'm not able to get it right.
for (var y = 0; y < height; y++) {
    for (var x = 0; x < width / 2; x++) { // divide by 2 to only loop through the left half of the image.
        var offset = ((width * y) + x) * 4; // Pixel origin
        // Get pixel
        var r = data[offset];
        var g = data[offset + 1];
        var b = data[offset + 2];
        var a = data[offset + 3];
        // Calculate how far to the right the mirrored pixel is
        var mirrorOffset = (width - (x * 2)) * 4;
        // Set the mirrored pixel's colours
        data[offset + mirrorOffset] = r;
        data[offset + 1 + mirrorOffset] = g;
        data[offset + 2 + mirrorOffset] = b;
        data[offset + 3 + mirrorOffset] = a;
    }
}
Even though the accepted answer you're relying on uses imageData, there's no need for it here.
With drawImage and its transforms (scale, rotate, translate), canvas can perform many operations, one of them being to safely copy the canvas onto itself.
The advantage is that it is far easier AND far faster than handling the image by its RGB components.
I'll let you read the code below, hopefully it's commented and clear enough.
The fiddle is here: http://jsbin.com/betufeha/2/edit?js,output
One output example (I also used a mountain, a Canadian one :-)):
Original:
Output:
HTML:
<canvas id='cv'></canvas>
JavaScript:
var mountain = new Image();
mountain.onload = drawMe;
mountain.src = 'http://www.hdwallpapers.in/walls/brooks_mountain_range_alaska-normal.jpg';

function drawMe() {
    var cv = document.getElementById('cv');
    // set the width/height same as the image.
    cv.width = mountain.width;
    cv.height = mountain.height;
    var ctx = cv.getContext('2d');
    // first copy the whole image.
    ctx.drawImage(mountain, 0, 0);
    // save to avoid messing up the context.
    ctx.save();
    // translate to the middle of the left part of the canvas = 1/4th of the image.
    ctx.translate(cv.width / 4, 0);
    // flip the x coordinates to get a mirror effect.
    ctx.scale(-1, 1);
    // copy the right part onto the left part.
    ctx.drawImage(cv,
        /* source      */ cv.width / 2, 0, cv.width / 2, cv.height,
        /* destination */ -cv.width / 4, 0, cv.width / 2, cv.height);
    // restore the context.
    ctx.restore();
}

html5 how to stroke with the inverted background color?

I want to make a selection tool in HTML5. As the user clicks and moves the mouse, a rectangle should be drawn from the starting point to the mouse pointer. This is usually done by choosing a logical operator for the color of the stroke (so the lines are drawn as the inverse of the background), but it looks like in HTML5 I can only do this by manipulating the image pixel by pixel. Wouldn't that be too slow? Is there a direct way to do it?
You can optimize your "inverted stroke" to be fast enough for "live" strokes.
Layer 2 canvases, one on top of another.
Draw the inverted image on the bottom canvas.
Draw the normal image on the top canvas.
Draw your strokes on the top canvas with context.globalCompositeOperation="destination-out".
"destination-out" compositing causes the top canvas pixels to be made transparent where the stroking occurs.
This "reveals" the inverted image below only where the strokes are.
This reveal is very fast since the GPU assists in compositing.
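A minimal sketch of those steps (the canvas and image variable names are assumptions, not from the answer):
// bottom canvas holds the inverted image, top canvas the normal one
var bottomCtx = bottomCanvas.getContext('2d'),
    topCtx = topCanvas.getContext('2d');
bottomCtx.drawImage(invertedImage, 0, 0);
topCtx.drawImage(normalImage, 0, 0);

function strokeSelection(x, y, w, h) {
    // redraw the normal image so only the current stroke stays cut out
    topCtx.globalCompositeOperation = 'source-over';
    topCtx.drawImage(normalImage, 0, 0);
    // punch the stroke out of the top canvas, revealing the inverted image below
    topCtx.globalCompositeOperation = 'destination-out';
    topCtx.strokeRect(x, y, w, h);
}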
You can do this two ways:
Invert the region in place and reset by re-drawing the image
Keep an inverted version of the image in a second canvas and draw a clipped version of it
Option 1
For option 1 you would need to iterate over the area each time using JavaScript, which will be a relatively slow operation compared to option 2.
Example of option 1:
// inside the mouse move code (please see live demo below)
ctx.drawImage(img, 0, 0); // update image (clear previous inversion)

// invert the region
if (w !== 0 && h !== 0) {
    var idata = ctx.getImageData(x1, y1, w, h), // get buffer
        buffer = idata.data,
        len = buffer.length,
        i = 0;

    for (; i < len; i += 4) { // invert it
        buffer[i] = 255 - buffer[i];
        buffer[i + 1] = 255 - buffer[i + 1];
        buffer[i + 2] = 255 - buffer[i + 2];
    }
    ctx.putImageData(idata, x1, y1);
}
As you can see, the performance is not bad at all using just JavaScript and the CPU. The performance does depend on the size of the region, though.
Live demo of option 1
Option 2
In option 2 you pre-invert the image in a second canvas and draw a clipped version of it to the main canvas. This means that from that point on you get the benefit of the GPU (if available) and performance will be much better.
Example of option 2:
// have a second canvas set up
var ocanvas = document.createElement('canvas'),
    octx = ocanvas.getContext('2d');

// size the off-screen canvas to the source image and copy it in
// (needed before getImageData will return anything useful)
ocanvas.width = img.width;
ocanvas.height = img.height;
octx.drawImage(img, 0, 0);

// invert the off-screen canvas
var idata = octx.getImageData(0, 0, ocanvas.width, ocanvas.height),
    buffer = idata.data,
    len = buffer.length,
    i = 0;

for (; i < len; i += 4) {
    buffer[i] = 255 - buffer[i];
    buffer[i + 1] = 255 - buffer[i + 1];
    buffer[i + 2] = 255 - buffer[i + 2];
}
octx.putImageData(idata, 0, 0);
Now it's just a matter of taking the region in question and updating it with a clipped version:
if (w !== 0 && h !== 0)
    ctx.drawImage(ocanvas, x1, y1, w, h, x1, y1, w, h);
Live demo of option 2

ColorPicker implementation using JavaScript and Canvas

I'm trying to implement a ColorPicker using Canvas, just for fun, but I seem lost, as my browser freezes for a while when it loads due to all these for loops.
I'm adding the screenshot of the result of this script:
window.onload = function() {
    colorPicker();
};

function colorPicker() {
    var canvas = document.getElementById("colDisp"),
        frame = canvas.getContext("2d");
    var r = 0,
        g = 0,
        b = 0;

    function drawColor() {
        for (r = 0; r < 255; r++) {
            for (g = 0; g < 255; g++) {
                for (b = 0; b < 255; b++) {
                    frame.fillStyle = "rgb(" + r + "," + g + "," + b + ")";
                    frame.fillRect(r, g, 1, 1);
                }
            }
        }
    }
    drawColor();
}
Currently, I only want a solution to the freezing problem (with a better algorithm), and to know why it's not displaying the BLACK and GREY colors.
Please someone help me.
Instead of calling fillRect for every single pixel, it might be a lot more efficient to work with a raw RGBA buffer. You can obtain one using context.getImageData, fill it with the color values, and then put it back in one go using context.putImageData.
Note that your current code overwrites every single pixel 255 times, once for each possible blue value. The last write to each pixel always has a near-maximal blue component, so you see no grey or black in the output.
Finding a good way to map all possible RGB values to a two-dimensional image isn't trivial, because RGB is a three-dimensional color-space. There are a lot of strategies for doing so, but none is really optimal for any possible use-case. You can find some creative solutions for this problem on AllRGB.com. A few of them might be suitable for a color-picker for some use-cases.
If you want to fetch the rgba of the pixel under the mouse, you must use context.getImageData.
getImageData returns an ImageData object whose .data property is the pixel array.
var pixeldata = context.getImageData(0, 0, canvas.width, canvas.height).data;
Each pixel is defined by 4 sequential array elements.
So if you have gotten a pixel array with getImageData:
// first pixel defined by the first 4 pixel array elements
pixeldata[0] = red component of pixel#1
pixeldata[1] = green component of pixel#1
pixeldata[2] = blue component of pixel#1
pixeldata[3] = alpha (opacity) component of pixel#1
// second pixel defined by the next 4 pixel array elements
pixeldata[4] = red component of pixel#2
pixeldata[5] = green component of pixel#2
pixeldata[6] = blue component of pixel#2
pixeldata[7] = alpha (opacity) component of pixel#2
So if you have a mouseX and mouseY then you can get the r,g,b,a values under the mouse like this:
// get the offset in the array where mouseX,mouseY begin
var offset=(imageWidth*mouseY+mouseX)*4;
// read the red, green, blue and alpha values of that pixel
var red = pixeldata[offset];
var green = pixeldata[offset+1];
var blue = pixeldata[offset+2];
var alpha = pixeldata[offset+3];
Here's a demo that draws a colorwheel on the canvas and displays the RGBA under the mouse:
http://jsfiddle.net/m1erickson/94BAQ/
A way to go, using .createImageData():
window.onload = function() {
    var canvas = document.getElementById("colDisp");
    var frame = canvas.getContext("2d");
    var width = canvas.width;
    var height = canvas.height;
    var imagedata = frame.createImageData(width, height);
    var index, x, y;

    for (x = 0; x < width; x++) {
        for (y = 0; y < height; y++) {
            // pixel data is stored row by row: (y * width + x) * 4
            index = (y * width + x) * 4;
            imagedata.data[index + 0] = x;
            imagedata.data[index + 1] = y;
            imagedata.data[index + 2] = x + y - 255;
            imagedata.data[index + 3] = 255;
        }
    }
    frame.putImageData(imagedata, 0, 0);
};
http://codepen.io/anon/pen/vGcaF

How to improve speed when manually adding a border around non-alpha pixels in canvas

I've written a script which takes an image like this one (normally the black is alpha):
...and adds a border of any color you'd like:
However, it's not very fast. It takes about 130 ms to create the border layer as a canvas for this tiny font. Bigger fonts take much longer!
The logic is simple:
/* This is more or less pseudo-code. */
// Blank image data where I will put the border.
var newData = newContext.getImageData(0, 0, canvas.width, canvas.height);
// The image I will be analyzing.
var oldData = oldContext.getImageData(0, 0, this.data.width, this.data.height);
// Loop through every pixel in oldData and remember where non-alpha pixels are.
var fontPixels = this._getNonAlphaPixels(oldData);
// Loop through relevant pixels, remember neighboring pixels, and add the border.
for (var px in fontPixels) {
    for (var py in fontPixels[px]) {
        var borderPixels = this._getBorderPixels(px, py);
        for (var bx in borderPixels) {
            for (var by in borderPixels[bx]) {
                if (typeof fontPixels[bx] !== 'undefined' &&
                    typeof fontPixels[bx][by] !== 'undefined') {
                    continue; // Do not draw borders inside of the font.
                }
                newData.data[((newData.width * by) + bx) * 4] = color.red;
                newData.data[((newData.width * by) + bx) * 4 + 1] = color.green;
                newData.data[((newData.width * by) + bx) * 4 + 2] = color.blue;
                newData.data[((newData.width * by) + bx) * 4 + 3] = 255; // alpha
            }
        }
    }
}
Basically I'm wondering: does someone know an alternative method which does not require pixel-by-pixel manipulation? Or perhaps is there a significant optimization that can be made to the above logic?
I should mention that _getNonAlphaPixels's execution time is negligible, and _getBorderPixels's execution time is only 17% of the total.
EDIT
The below selected answer works wonderfully. The only significant difference between my solution and the one below is that whenever text is drawn, I draw an image instead (of a font).
Thanks Ken.
You can do this in several ways.
Technique 1
One is to use the built-in strokeText function, which draws the outline of a text. Setting lineWidth determines the thickness of the border. However, the result is not always satisfying:
ctx.strokeStyle = color;
ctx.font = font;
ctx.lineWidth = 2;
ctx.strokeText(txt, x, y);
Results in:
TEXT WITH BORDER DEMO 1
Text rendering on canvas is currently not very accurate at the sub-pixel level, which has to do with how font hinting is used (or rather not used), anti-aliasing and other aspects.
Technique 2
In any case, you can achieve a much better result by manually drawing the text in a "circle" to create the border:
var thick = 2;
ctx.fillStyle = color;
ctx.font = font;
ctx.fillText(txt, x - thick, y - thick);
ctx.fillText(txt, x, y - thick);
ctx.fillText(txt, x + thick, y - thick);
ctx.fillText(txt, x + thick, y);
ctx.fillText(txt, x + thick, y + thick);
ctx.fillText(txt, x, y + thick);
ctx.fillText(txt, x - thick, y + thick);
ctx.fillText(txt, x - thick, y);
ctx.fillStyle = '#fff';
ctx.fillText(txt, x, y);
The result is much better as seen here:
TEXT WITH BORDER DEMO 2
Technique 3
The drawback of this last technique is that we are asking the canvas to render the text 9 times - which is a waste of time, in theory... (see results).
To improve this we can reduce the number of text draws to two, by caching the border text once as an image, using that image to draw the border, and then drawing the final text on top.
Here octx represents an off-screen canvas context (c is the off-screen canvas itself) to which we draw the text we'll use for the border. Then we replace the circular fillText calls with drawImage. Notice we set the baseline to top to more easily control where the text ends up.
octx.textBaseline = ctx.textBaseline = 'top';
octx.fillStyle = color;
octx.font = ctx.font = font;
octx.fillText(txt, 0, 0);
ctx.drawImage(c, x - thick, y - thick);
ctx.drawImage(c, x, y - thick);
ctx.drawImage(c, x + thick, y - thick);
ctx.drawImage(c, x + thick, y);
ctx.drawImage(c, x + thick, y + thick);
ctx.drawImage(c, x, y + thick);
ctx.drawImage(c, x - thick, y + thick);
ctx.drawImage(c, x - thick, y);
ctx.fillStyle = '#fff';
ctx.fillText(txt, x, y);
The image result will be the same as the previous:
TEXT WITH BORDER DEMO 3
Technique 4
Note that if you want even thicker borders you will probably want to do a circular draw - literally - using cos/sin etc. The reason is that at higher offsets the borders start to come apart:
Instead of adding a bunch of extra draws you can use a cos/sin calculation to draw the text in a literal circle:
function drawBorderText(txt, x, y, font, color) {
    var thick = 7,
        segments = 4,   /// number of segments to divide the circle in
        angle = 0,      /// start angle
        part,           /// degrees per segment, see below
        i = 0,
        d2r = Math.PI / 180;

    /// determine how many segments are needed. I just
    /// started with some numbers in this demo.. adjust as needed
    if (thick > 1) segments = 6;
    if (thick > 2) segments = 8;
    if (thick > 4) segments = 12;
    part = 360 / segments;

    ctx.fillStyle = color;
    ctx.font = font;

    /// draw the text in a circle
    for (; i < segments; i++) {
        ctx.fillText(txt, x + thick * Math.cos(angle * d2r),
                          y + thick * Math.sin(angle * d2r));
        angle += part;
    }

    ctx.fillStyle = '#fff';
    ctx.fillText(txt, x, y);
}
Note that in this case you might have to draw two rounds, as small points otherwise won't have a solid center (see for instance the dot over the i).
It's a bit crude in this demo, but it's for example's sake. You can fine-tune it by setting different thresholds for segments, as well as adding an "inner round" where the text contains small details (like the i here) - see the sketch after the demo link below.
TEXT WITH BORDER DEMO 4
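A sketch of that "two rounds" idea (a hypothetical helper, not part of the original demo; it uses the same ctx as above): draw the outline at the full radius and once more at half the radius, so thin details such as the dot over the i get a solid centre.
function drawBorderTextSolid(txt, x, y, font, color) {
    var thick = 7,
        d2r = Math.PI / 180;
    ctx.fillStyle = color;
    ctx.font = font;
    /// outer round at the full radius, inner round at half the radius
    [thick, thick / 2].forEach(function(radius) {
        var segments = radius > 4 ? 12 : radius > 2 ? 8 : 6,
            part = 360 / segments;
        for (var i = 0; i < segments; i++) {
            ctx.fillText(txt, x + radius * Math.cos(i * part * d2r),
                              y + radius * Math.sin(i * part * d2r));
        }
    });
    ctx.fillStyle = '#fff';
    ctx.fillText(txt, x, y);
}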
Results
Note that the result will depend on various factors:
Font geometry itself (incl. font hinting).
Browser implementation of text rendering and its optimizations
CPU
Hardware acceleration
For example, on an Atom single-core computer without hardware acceleration I get 16 ms for both demo 2 and 3 in Firefox (Aurora) (sometimes double that for the text version).
In Chrome (Canary) on the same computer the text-based one takes 1-3 ms while the cached one takes around 5 ms.
The sin/cos approach takes about 8-11 ms on a slow computer (achieved 5 ms a couple of times - JSFiddle is not the best place to test performance).
I don't have access to other hardware to test on at the moment (the margins are very small here, and I'm not sure JavaScript timing will be able to pick them up, which I believe is the case with Firefox in particular), but in any case you will see a great improvement compared to manual pixel manipulation.
