WebGL readPixels returns flipped y axis - javascript

I have no idea why, but the image that I read from the canvas comes out flipped on the y axis.
The ultimate goal is to read a portion of a WebGL canvas and extract it as a JPG/PNG.
Workflow is the following:
gl.readPixels
create 2D canvas
load Uint8Array pixels to 2D canvas as imageData
get 2D canvas as blob
create object URL
use it as image src
Here's my code: https://jsitor.com/acM-2WTzd
I'm really sorry about the length (almost 300 lines), but it's WebGL, so there's a lot of boilerplate and setup.
I've tried to debug it for several hours and I have no idea what's wrong (granted, it could be the shader; I'm quite new at that).
If you have any additional questions, please feel free to ask!

Unlike context.getImageData(), gl.readPixels() reads pixel data starting from the bottom-left corner, not from the top-left corner. You can apply a transformation on tempCanvas and draw it onto itself after putting the image data like this:
context.putImageData(imageData, 0, 0);
// add the following
context.translate(0, cropHeight);
context.scale(1, -1);
context.drawImage(tempCanvas, 0, 0);
Alternatively, you can manually rearrange the pixel data before returning it from your getPixels() function:
function getPixels(x, y, width, height) {
    const length = width * height * 4;
    const row = width * 4;
    const end = (height - 1) * row;
    const arr = new Uint8Array(length);
    const pixels = new Uint8Array(length);
    if (draw) draw();
    gl.readPixels(x, y, width, height, gl.RGBA, gl.UNSIGNED_BYTE, arr);
    // copy each row of arr into pixels in reverse row order to flip vertically
    for (let i = 0; i < length; i += row) {
        pixels.set(arr.subarray(i, i + row), end - i);
    }
    return pixels;
}
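For completeness, the remaining steps of the asker's workflow (blob and object URL) could look roughly like the sketch below, assuming tempCanvas is the 2D canvas that now holds the correctly oriented pixels:
tempCanvas.toBlob(blob => {
    const url = URL.createObjectURL(blob);
    const img = new Image();
    img.onload = () => URL.revokeObjectURL(url); // release the blob once the image has loaded
    img.src = url;
    document.body.appendChild(img);
}, 'image/png'); // or 'image/jpeg' plus an optional quality argument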

Related

Javascript Canvas issue: Why do my points on the canvas not correspond properly to the height of the graph?

I'm trying to make a line graph using the canvas that looks like a typical line graph and uses typical Cartesian coordinates like we learned in algebra:
it starts with (0,0) at the bottom left, and the position along the x-axis is determined by the number of items to chart.
However, the position of the points doesn't match the input (although the shape of the graph is correct, indicating I'm doing something right). What am I doing wrong?
I've rewritten and tweaked the conversion formula numerous times.
function newLineGraph(parent, width, height, dataArray) {
    // this makes the element using my own code, no observable error here
    var canvas = newCanvas(parent, width, height);
    var canvasContext = canvas.getContext("2d");
    var spaceBetweenEntries = width / dataArray.length;
    var largestNumber = findHighestNumber(dataArray);
    canvasContext.beginPath();
    canvasContext.moveTo(0, 0);
    var n = 0;
    while (dataArray[n]) {
        var x = spaceBetweenEntries * n;
        var y = height - dataArray[n];
        console.log("x,y", x, y);
        canvasContext.lineTo(x, y);
        n++;
    }
    canvasContext.stroke();
    return canvas;
}
edit: fixed the image so you can see the canvas size
The resulting graph is much smaller than the intended graph; for example
newLineGraph("body",55,45,[1,40,10]);
produces a graph with a small ^ shape in the corner, rather than properly starting at the bottom. However, the console logs show " 0 44" "18.333333333333332 5","36.666666666666664 35" which I believe should produce a graph that fits the whole chart nicely.
The first lineTo will always have x as 0 so I assume the first line isn't drawing like you intended. It is more like a |/\ shape instead of \/\.
Set x like this:
var x = spaceBetweenEntries * (n + 1);
Edit
As you can see in this fiddle your chart renders at the right points with the coordinates you posted. I implemented the newCanvas function like I expect it to behave. So are we missing some other code that modifies the canvas width and height?
function newCanvas(parent, width, height) {
    const canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    document.querySelector(parent).appendChild(canvas);
    return canvas;
}
The problem was using style.width and style.height to size the canvas, instead of canvas.width and canvas.height.
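For reference, the difference looks roughly like this (a sketch, not the asker's original code):
// Stretches the default 300x150 drawing surface with CSS, so drawn coordinates get scaled and distorted:
canvas.style.width = width + "px";
canvas.style.height = height + "px";
// Sets the actual pixel size of the drawing surface, so drawn coordinates match the input:
canvas.width = width;
canvas.height = height;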

WebGL glfx.js matrix transform (perspective) crops the image if it rotates

I am using the glfx.js library in order to use matrix transformations to create a perspective effect for my images. In my app, the system works just like Photoshop's smart objects (where you render a flat image and get the perspective result after render).
glfx.js uses the function canvas.perspective(before, after) to apply matrix transforms to images: you assign before and after coordinates for the 4 corner points of an image, and it runs the matrix transform in the background to transform the image.
My issue is that if the resulting image after the transformation is bigger than the original image (which happens if you rotate the image), then the WebGL canvas crops my image.
Look at the following fiddle:
https://jsfiddle.net/human_a/o4yrheeq/
window.onload = function() {
    try {
        var canvas = fx.canvas();
    } catch (e) {
        alert(e);
        return;
    }
    // convert the image to a texture
    var image = document.getElementById('image');
    var texture = canvas.texture(image);
    // apply the perspective filter
    canvas.draw(texture).perspective([0, 0, 774, 0, 0, 1094, 774, 1094], [0, 389, 537, 0, 732, 1034, 1269, 557]).update();
    image.src = canvas.toDataURL('image/png');
    // or even if you replace the image with the canvas
    // image.parentNode.insertBefore(canvas, image);
    // image.parentNode.removeChild(image);
};
<script src="https://evanw.github.io/glfx.js/glfx.js"></script>
<img id="image" crossOrigin="anonymous" src="https://images.unsplash.com/photo-1485207801406-48c5ac7286b2?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=600&fit=max&s=9bb1a18da78ab0980d5e7870a236af88">
Any ideas on how we can make the WebGL canvas fit the rotated image (not make the image smaller) or somehow extract the whole image instead of the cropped one?
More pixels
There is no cover-all solution. This is because when you convert from 2D to 3D, the size of the projected image can approach infinity (near clipping prevents actual infinity), so no matter how large you make the output image there is always the possibility of some clipping being applied.
With that caveat out of the way, there is a solution for most situations that avoids clipping. It is very simple: just expand the canvas to hold the additional content.
Find the bounds
To simplify the calculations I have changed the after array to a set of normalised points (they represent the after coords as a scale factor of the image size). I then use the image size to convert back to real pixel coordinates, and from that I work out the minimum size a texture needs to be to hold both the original image and the projection.
With that info I create the texture (as a canvas), draw the image, adjust the before array if needed (in case some projection points are in negative space), and apply the filter.
So we have an image object that has a width and a height. And you have the projection of those points.
// assuming image has been loaded and is ready
var imgW = image.naturalWidth;
var imgH = image.naturalHeight;
Then set the corner array (before):
var before = [0, 0, imgW, 0, 0, imgH, imgW, imgH];
The projection points: to make them easier to deal with, I have normalised them to the image size.
var projectNorm = [[0, 0.3556], [0.6938, 0], [0.9457, 0.9452], [1.6395, 0.5091]];
If you want to use the absolute coordinates as in the fiddle's after array, use the following code to normalise them first. The normalisation is reversed again in a later snippet, so this is just a convenience step.
var afterArray = [0, 389, 537, 0, 732, 1034, 1269, 557];
projectNorm = [];
for (var i = 0; i < afterArray.length; i += 2) {
    // normalise each absolute point against the image size
    projectNorm.push([afterArray[i] / imgW, afterArray[i + 1] / imgH]);
}
Now calculate the size of the projection. This is the important part as it works out the size of the canvas.
var top, left, right, bottom;
top = 0;
left = 0;
bottom = imgH;
right = imgW;
var project = projectNorm.map(p => [p[0] * imgW, p[1] * imgH]);
project.forEach(p => {
    top = Math.min(p[1], top);
    left = Math.min(p[0], left);
    bottom = Math.max(p[1], bottom);
    right = Math.max(p[0], right);
});
Now that all the data we need has been gathered we can create a new image that will accommodate the projection. (assuming that the projection points are true to the projection)
var texture = document.createElement("canvas");
var ctx = texture.getContext("2d");
texture.width = Math.ceil(right - left);
texture.height = Math.ceil(bottom - top);
Draw the image into the expanded canvas:
ctx.setTransform(1, 0, 0, 1, -left, -top); // shift so the whole projection sits in positive space
ctx.drawImage(image, 0, 0);
ctx.setTransform(1, 0, 0, 1, 0, 0); // reset transform
Then flatten the projection point array
var after = [];
project.forEach(p => after.push(...p));
Move all points into positive projection space
after.forEach((p, i) => {
    if (i % 2) {
        before[i] += -top;
        after[i] += -top;
    } else {
        before[i] += -left;
        after[i] += -left;
    }
});
The final step is to create the glfx.js objects and apply the filter
// create a fx canvas
var canvas = fx.canvas();
// create the texture
var glfxTexture = canvas.texture(texture);
// apply the filter
canvas.draw(glfxTexture).perspective( before, after ).update();
// show the result on the page
document.body.appendChild(canvas);
Demo
Demo of your snippet using the above method (slight modification for image load)
// To save time typing I have just kludged a simple load image wait poll
waitForLoaded();

function waitForLoaded() {
    if (image.complete) {
        projectImage(image);
    } else {
        setTimeout(waitForLoaded, 500);
    }
}

function projectImage(image) {
    var imgW = image.naturalWidth;
    var imgH = image.naturalHeight;
    var projectNorm = [[0, 0.3556], [0.6938, 0], [0.9457, 0.9452], [1.6395, 0.5091]];
    var before = [0, 0, imgW, 0, 0, imgH, imgW, imgH];
    var top, left, right, bottom;
    top = 0;
    left = 0;
    bottom = imgH;
    right = imgW;
    var project = projectNorm.map(p => [p[0] * imgW, p[1] * imgH]);
    project.forEach(p => {
        top = Math.min(p[1], top);
        left = Math.min(p[0], left);
        bottom = Math.max(p[1], bottom);
        right = Math.max(p[0], right);
    });
    var texture = document.createElement("canvas");
    var ctx = texture.getContext("2d");
    texture.width = Math.ceil(right - left);
    texture.height = Math.ceil(bottom - top);
    ctx.setTransform(1, 0, 0, 1, -left, -top); // shift so the whole projection sits in positive space
    ctx.drawImage(image, 0, 0);
    ctx.setTransform(1, 0, 0, 1, 0, 0); // reset transform
    var after = [];
    project.forEach(p => after.push(...p));
    after.forEach((p, i) => {
        if (i % 2) {
            before[i] += -top;
            after[i] += -top;
        } else {
            before[i] += -left;
            after[i] += -left;
        }
    });
    // create a fx canvas
    var canvas = fx.canvas();
    // create the texture
    var glfxTexture = canvas.texture(texture);
    // apply the filter
    canvas.draw(glfxTexture).perspective(before, after).update();
    // show the result on the page
    document.body.appendChild(canvas);
}
#image {
    display: none;
}
<script src="https://evanw.github.io/glfx.js/glfx.js"></script>
<img id="image" crossOrigin="anonymous" src="https://images.unsplash.com/photo-1485207801406-48c5ac7286b2?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&s=9bb1a18da78ab0980d5e7870a236af88">
Notes and a warning
Note that the projection points (after array) do not always match the final corner points of the projected image. If this happens the final image may be clipped.
Note This method only works if the before points represent the extreme corners of the original image. If the (before) points are inside the image then this method may fail.
Warning There is no vetting of the resulting image size. Large images can cause the browser to become sluggish, and sometimes crash. For production code you should do your best to keep the image size within the limits of the device running your code. Clients seldom return to pages that are slow and/or crash.
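One way to guard against that is to check the computed output size before creating the texture; a rough sketch (MAX_SIDE is an assumed limit that is not part of the original answer, and real limits vary per device):
var MAX_SIDE = 4096; // assumed safe upper bound
var outW = Math.ceil(right - left);
var outH = Math.ceil(bottom - top);
if (Math.max(outW, outH) > MAX_SIDE) {
    // either bail out, or scale the whole job down by this factor:
    var scale = MAX_SIDE / Math.max(outW, outH);
    // multiply the canvas size, the drawImage transform and every
    // before/after coordinate by scale before applying the filter
}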

JavaScript canvas, manually cloning a canvas onto another generates a weird pattern

I'm trying to make a text effect similar to the effect found at the bottom of this article
My proposed approach is:
Make two canvases: one is visible, the other is invisible; I use the invisible one as a buffer.
Draw some text on the buffer canvas
Loop over getImageData pixels
If the pixel's alpha is not equal to zero (i.e. a pixel is drawn on the buffer canvas at that spot), then with a small chance, e.g. 2%, draw a randomly generated circle with cool effects at that pixel on the visible canvas.
I'm having trouble at step 4. With the code below, I'm trying to replicate the text on the second canvas, in full red. Instead I get this weird picture.
code
// create the canvas to replicate the buffer text on.
var draw = new Drawing(true);

var bufferText = function (size, textFont) {
    // set the font to Georgia if it isn't defined
    textFont = textFont || "Georgia";
    // create a new canvas buffer, true means that it's visible on the screen
    // Note, Drawing is a small library I wrote, it's just a wrapper over the canvas API
    // it creates a new canvas and adds some functions to the context
    // it doesn't change any of the original functions
    var buffer = new Drawing(true);
    // context is just a small wrapper library I wrote to make the canvas API a little more bearable.
    with (buffer.context) {
        font = util.format("{size}px {font}", {size: size, font: textFont});
        fillText("Hi there", 0, size);
    }
    // get the imagedata and store the actual pixels array in data
    var imageData = buffer.context.getImageData(0, 0, buffer.canvas.width, buffer.canvas.height);
    var data = imageData.data;
    var index, alpha, x, y;
    // loop over the pixels
    for (x = 0; x < imageData.width; x++) {
        for (y = 0; y < imageData.height; y++) {
            index = x * y * 4;
            alpha = data[index + 3];
            // if the alpha is not equal to 0, draw a red pixel at (x, y)
            if (alpha !== 0) {
                with (draw.context) {
                    dot(x/4, y/4, {fillColor: "red"})
                }
            }
        }
    }
};
bufferText(20);
Note that here, my buffer is actually visible to show where the red pixels are supposed to go compared to where they actually go.
I'm really confused by this problem.
If anybody knows an alternative approach, that's very welcome too.
replace this...
index = x * y * 4;
with...
index = (imageData.width * y + x) * 4;
the rest is good :)
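The reason: ImageData stores pixels row by row, four bytes (R, G, B, A) per pixel, so a pixel's offset comes from its row and column rather than their product. A small generic helper, as a sketch (not part of the original code):
function getAlphaAt(imageData, x, y) {
    // row-major layout: skip y full rows, then x pixels, 4 bytes each
    var index = (y * imageData.width + x) * 4;
    return imageData.data[index + 3]; // 0 = fully transparent, 255 = opaque
}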

JavaScript canvas image data manipulation

I want to resize an image using a very simple algorithm. I have something like this:
var offtx = document.createElement('canvas').getContext('2d');
offtx.drawImage(imageSource, offsetX, offsetY, width, height, 0, 0, width, height);
this.imageData = offtx.getImageData(0, 0, width, height).data;
offtx.clearRect(0, 0, width, height);
for (var x = 0; x < this.width; ++x)
{
    for (var y = 0; y < this.height; ++y)
    {
        var i = (y * this.width + x) * 4;
        var r = this.imageData[i];
        var g = this.imageData[i+1];
        var b = this.imageData[i+2];
        var a = this.imageData[i+3];
        offtx.fillStyle = "rgba(" + r + "," + g + "," + b + "," + (a/255) + ")";
        offtx.fillRect(0.5 + (x * this.zoomLevel) | 0, 0.5 + (y * this.zoomLevel) | 0, this.zoomLevel, this.zoomLevel);
    }
}
this.imageData = offtx.getImageData(0, 0, this.width * this.zoomLevel, this.height * this.zoomLevel);
However, the problem I have with this solution is that the image loses any transparency information along the way. I don't know if this happens somewhere in this algorithm, or maybe the putImageData that I use later to display the image is doing it, but I can't seem to preserve transparency.
Each time I do this I create a canvas, put the image on that canvas, and use getImageData to get the image back from that canvas, as you can see in the first lines of the code. Maybe there is no other way, so I might not mind that...
But the problem is that I use two for loops to draw the resized image and then use getImageData to store that image information. This is a weird way to do it. I would prefer to create empty image data and fill it with all the original image information, only resized. I can't grasp that with my mind; I can't imagine the loop structure for this. To show what I mean:
for (var x = 0; x < this.width; ++x)
{
    for (var y = 0; y < this.height; ++y)
    {
        var i = (y * this.width + x) * 4;
        var r = this.imageData[i];
        var g = this.imageData[i+1];
        var b = this.imageData[i+2];
        var a = this.imageData[i+3];
        // I WOULD LIKE MAGIC TO HAPPEN HERE THAT WILL
        // RESIZE THAT CURRENT PIXEL AND MOVE IT TO THE NEW IMAGE DATA RESIZED
        // SO EVERYTHING IS DONE NICE AND CLEAN IN THIS LOOP WITHOUT THE
        // GETIMAGEDATA LATER AND MAYBE SET TRANSPARENT PIXELS WHILE I'M AT IT
    }
}
I can't figure out the MAGIC part.
Thank you for reading!
Why not just use the built-in drawImage combined with image smoothing disabled? Doing this operation in a loop is not only relatively slow but also prone to errors (as you already discovered).
Doing it the following way will give you the "pixel art" look and will also preserve the alpha channel:
var factor = 4; /// will resize 4x
offtx.imageSmoothingEnabled = false; /// prefixed in some browsers
offtx.drawImage(imageSource, offsetX, offsetY, width, height,
                0, 0, width * factor, height * factor);
Here is an online demo.
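If you do want the manual loop from the question (for example to tweak pixels while scaling), here is a minimal nearest-neighbour sketch into a fresh ImageData, which also carries the alpha channel across (the zoomLevel name is borrowed from the question; the rest is an assumption, not the original code):
function scaleImageData(ctx, src, zoomLevel) {
    var dst = ctx.createImageData(src.width * zoomLevel, src.height * zoomLevel);
    for (var y = 0; y < dst.height; ++y) {
        for (var x = 0; x < dst.width; ++x) {
            // map the destination pixel back onto its source pixel
            var s = (((y / zoomLevel) | 0) * src.width + ((x / zoomLevel) | 0)) * 4;
            var d = (y * dst.width + x) * 4;
            dst.data[d]     = src.data[s];     // R
            dst.data[d + 1] = src.data[s + 1]; // G
            dst.data[d + 2] = src.data[s + 2]; // B
            dst.data[d + 3] = src.data[s + 3]; // A, transparency preserved
        }
    }
    return dst; // later: ctx.putImageData(dst, 0, 0)
}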
Try using this library I recently made, which can load an image and resize it to a fixed width and height or a percentage.
It does exactly what you need, and much more like converting canvas to base64, blob, etc...
var canvaWork = new CanvaWork();
canvaWork.canvasResizeAll(obj.canvas, function (canvases) {
    // "canvases" will be an array containing 3 canvases with different sizes depending on initial options
});
https://github.com/vnbenny/canvawork.js
Hope this helps you!

How to check if a specific pixel of an image is transparent?

Is there any way to check if a selected (x,y) point of a PNG image is transparent?
Building on Jeff's answer, your first step would be to create a canvas representation of your PNG. The following creates an off-screen canvas that is the same width and height as your image and has the image drawn on it.
var img = document.getElementById('my-image');
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0, img.width, img.height);
After that, when a user clicks, use event.offsetX and event.offsetY to get the position. This can then be used to acquire the pixel:
var pixelData = canvas.getContext('2d').getImageData(event.offsetX, event.offsetY, 1, 1).data;
Because you are only grabbing one pixel, pixelData is a four entry array containing the pixel's R, G, B, and A values. For alpha, anything less than 255 represents some level of transparency with 0 being fully transparent.
Here is a jsFiddle example: http://jsfiddle.net/thirtydot/9SEMf/869/ I used jQuery for convenience in all of this, but it is by no means required.
Note: getImageData falls under the browser's same-origin policy to prevent data leaks, meaning this technique will fail if you taint the canvas with an image from another domain or (I believe, but some browsers may have solved this) an SVG from any domain. This protects against cases where a site serves up a custom image asset for a logged-in user and an attacker wants to read the image to get information. You can solve the problem by either serving the image from the same server or implementing Cross-Origin Resource Sharing.
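Putting those pieces together, a minimal sketch of the click handler in plain DOM (the fiddle uses jQuery instead; img and canvas are the variables set up above):
var ctx = canvas.getContext('2d');
img.addEventListener('click', function (event) {
    var alpha = ctx.getImageData(event.offsetX, event.offsetY, 1, 1).data[3];
    console.log(alpha === 0 ? 'transparent' : 'alpha ' + alpha);
});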
Canvas would be a great way to do this, as #pst said above. Check out this answer for a good example:
getPixel from HTML Canvas?
Some code that would serve you specifically as well:
var imgd = context.getImageData(x, y, width, height);
var pix = imgd.data;
for (var i = 0, n = pix.length; i < n; i += 4) {
    console.log(pix[i + 3]);
}
This will go row by row, so you'd need to convert that into an x,y and either convert the for loop to a direct check or run a conditional inside.
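That conversion is just division and modulo on the pixel number; as a small sketch, with width being the width passed to getImageData:
var pixelNumber = i / 4;
var px = pixelNumber % width;             // column
var py = Math.floor(pixelNumber / width); // row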
Reading your question again, it looks like you want to be able to get the point that the person clicks on. This can be done pretty easily with jquery's click event. Just run the above code inside a click handler as such:
$('el').click(function(e) {
    console.log(e.clientX, e.clientY);
});
Those should grab your x and y values.
The two previous answers demonstrate how to use Canvas and ImageData. I would like to propose an answer with a runnable example, using an image processing framework, so you don't need to handle the pixel data manually.
MarvinJ provides the method image.getAlphaComponent(x,y) which simply returns the transparency value for the pixel in x,y coordinate. If this value is 0, pixel is totally transparent, values between 1 and 254 are transparency levels, finally 255 is opaque.
For demonstrating I've used the image below (300x300) with transparent background and two pixels at coordinates (0,0) and (150,150).
Console output:
(0,0): TRANSPARENT
(150,150): NOT_TRANSPARENT
image = new MarvinImage();
image.load("https://i.imgur.com/eLZVbQG.png", imageLoaded);

function imageLoaded() {
    console.log("(0,0): " + (image.getAlphaComponent(0, 0) > 0 ? "NOT_TRANSPARENT" : "TRANSPARENT"));
    console.log("(150,150): " + (image.getAlphaComponent(150, 150) > 0 ? "NOT_TRANSPARENT" : "TRANSPARENT"));
}
<script src="https://www.marvinj.org/releases/marvinj-0.7.js"></script>
Building on Brian Nickel's answer, only the single wanted pixel of the source image is drawn onto a 1x1 pixel canvas, which is more efficient than drawing the entire image just to read one pixel:
function getPixel(img, x, y) {
    let canvas = document.createElement('canvas');
    canvas.width = 1;
    canvas.height = 1;
    canvas.getContext('2d').drawImage(img, x, y, 1, 1, 0, 0, 1, 1);
    let pixelData = canvas.getContext('2d').getImageData(0, 0, 1, 1).data;
    return pixelData;
}
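For example, to test whether a given coordinate is transparent:
const [r, g, b, a] = getPixel(img, 10, 20);
if (a === 0) console.log('pixel (10, 20) is fully transparent');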
Another variation, stepping through the pixel indices with i << 2 (i.e. i * 4):
const pixels = context.getImageData(x, y, width, height).data;
for (let i = 0, dx = 0; dx < pixels.length; i++, dx = i << 2) {
    if (pixels[dx + 3] <= 8) { console.log("transparent pixel number " + i); }
}
Here's a consolidation of a few answers into a runnable snippet that lets you upload a file, hover to preview the RGB value of each pixel, then click to put the RGB in a div.
Pertinent to the original question, the last value (alpha) is the transparency. 0 is fully transparent and 255 is fully opaque.
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
const input = document.querySelector('input[type="file"]');

input.addEventListener("change", e => {
    const image = new Image();

    image.addEventListener("load", () => {
        const {width, height} = image;
        canvas.width = width;
        canvas.height = height;
        ctx.drawImage(image, 0, 0);
        const {data} = ctx.getImageData(0, 0, width, height);
        const rgb = (x, y) => {
            const i = (x + y * width) * 4;
            return data.slice(i, i + 4).join(", ");
        };

        canvas.addEventListener("mousemove", event => {
            const {offsetX: x, offsetY: y} = event;
            console.log(rgb(x, y));
        });

        canvas.addEventListener("click", event => {
            const {offsetX: x, offsetY: y} = event;
            document.querySelector("div").textContent = rgb(x, y);
        });
    });

    image.addEventListener("error", () => console.error("failed"));
    image.src = URL.createObjectURL(e.target.files[0]);
});
.as-console-wrapper {
    height: 21px !important;
}
<div>
Upload image and mouseover to preview RGB. Click to select a value.
</div>
<form>
<input type="file">
</form>
<canvas></canvas>
References:
HTML5 Canvas - How to get adjacent pixels position from the linearized imagedata Uint8ClampedArray?
How to upload image into HTML5 canvas
