After searching the web for just over an hour, I have not had any luck.
I am wondering if it is possible, and if so, how to make a transparent image on a JS canvas be treated not as a rectangle, but only as its visible area.
For example, if you click on a transparent spot of the PNG, the script should not consider that part to be part of the object.
Thank you :)
Yes, you can get info about every pixel on the canvas using context.getImageData.
A Demo: http://jsfiddle.net/m1erickson/tMmzc/
This code will get an array containing info about every pixel on the canvas:
var data=ctx.getImageData(0,0,canvas.width,canvas.height).data;
The data array is organized as 4 sequential elements representing the red, green, blue & alpha (opacity) information for one pixel.
The data array's elements #0-3 have the top-left pixel's r,g,b,a info.
The data array's elements #4-7 have the next rightward pixel's r,g,b,a info.
...and so on...
Therefore, given the mouse position on the canvas you can fetch that pixel's alpha info. If the alpha value is zero then that pixel is transparent.
This code will read the alpha value under the mouse and determine if it's transparent:
var isTransparent = data[(mouseY*canvas.width+mouseX)*4+3]===0;
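A minimal click handler built on that idea might look like the sketch below. The canvas/ctx setup and the mouse-offset math are assumptions about your page (and assume the canvas is not CSS-scaled); the 1x1 getImageData call is just a cheaper way to read the single pixel under the cursor:
// Sketch: treat clicks on transparent pixels as misses.
// Assumes the PNG is already drawn onto the canvas at its natural size.
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
canvas.addEventListener('click', function (e) {
  var rect = canvas.getBoundingClientRect();
  var x = Math.floor(e.clientX - rect.left);
  var y = Math.floor(e.clientY - rect.top);
  // read just the one pixel under the cursor; index 3 is its alpha byte
  var alpha = ctx.getImageData(x, y, 1, 1).data[3];
  if (alpha === 0) {
    // transparent: the click missed the visible part of the image
  } else {
    // opaque or semi-opaque: count this as a hit on the object
  }
});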
I am creating rotating planets by projecting a 200x100 bitmap onto a sphere. Since this projection
is costly for an animation, I hacked together an array of value pairs ["pixel address in the sphere-projection image", "which pixel of the original bitmap goes to that address"],
so I can just transfer them quickly with no math.
I end up with 16008 values, which represent the 8004 pixels we need to draw a circle (representing a sphere), on
a 100x100 canvas - planets always have radius 50 and I scale them later.
Now, to rotate a planet, all I have to do is "shift" the second item of the pair by 1 pixel for a very slow rotation, and
by higher values for the illusion of faster rotations. I end up with this bottleneck:
for (var i = 0; i < 16008; i += 2)
{
  // the planet object has a canvas holding its texture, and
  // the texturePixels array is the data of that texture after
  // a getImageData() of that texture canvas. finalPixels is
  // the pixel data for the final on-screen projected planet image.
  locationInTexture = valuePairs[i] + planet.angle; // angle increases in steps of 4 because each pixel is 4 RGBA values
  locationInProjection = valuePairs[i + 1];
  finalPixels[locationInProjection] = planet.texturePixels[locationInTexture];
  finalPixels[locationInProjection + 1] = planet.texturePixels[locationInTexture + 1];
  finalPixels[locationInProjection + 2] = planet.texturePixels[locationInTexture + 2];
  finalPixels[locationInProjection + 3] = 255; // alpha isn't relevant
}
I also made the variables global to accelerate things, but it is still slow. My problem might
be that I should minimize DOM access, but I am reading and writing the pixel data of 2 canvases thousands
of times, and my guess is these don't behave just like normal 'arrays', though I might
be wrong in this case of simple array element reads/writes. The alternative seems to be to do this:
1 - At load, get the texture pixel data into a normal array instead of the one obtained from the canvas (is this relevant?).
2 - Write the final pixel arrangement into a new normal array, instead of directly into the final planet display canvas.
3 - Create an image from that last array's data and drawImage it onto the planet display canvas, and done, assuming
the (create image + draw it on the final canvas) step would be faster (a rough sketch of this idea follows at the end of this question).
Or maybe we can even create a canvas with the sphere directly from the normal array data? Or would using images
be cheaper than canvases? How do I do this? Help, please.
Thanks in advance :)
P.S. A screen can have tens of planets + moons when showing a solar system. I decided to ask before embarking
on an "img instead of canvas" approach, only to find out later that that was not the real problem.
I have a fabricjs canvas with multiple SVG objects which may or may not overlap.
Some of the SVG objects have transparent areas, such that the transparent area of Object A may be positioned on top of a filled area of Object B, as in the diagram below:
In the above diagram, the black border illustrates the SVG Object's bounding box.
Points X & Y illustrate cursor locations during object selection.
I am interested in retrieving the RGBA value at the cursor position, such that the selected object is taken into account.
If the user clicks at either Point X or Point Y, then the selected object (with normal behaviour of fabricjs) is Object A, as Object A is the topmost object and the user has clicked within the bounding box.
What I want to do is retrieve the RGBA value (or just the alpha value will do) of Object A at the cursor position i.e. at both Point X and Point Y, Object A is transparent.
From what I can gather from the fabricjs documentation, I can only see a way to get the RGBA value at a cursor position from the canvas as a whole, not for specific objects. This means that Point X returns a transparent value, whilst Point Y returns blue with full opacity.
var pixelData = this.canvas.getContext().getImageData(pointer.x, pointer.y, 1, 1).data;
What I'm looking for is a way to query the pixel data for a single object at a specific cursor location, perhaps something like:
var pixelData = selectedObject.getImageData(pointer.x, pointer.y, 1, 1).data;
I understand that fabricjs may not directly support such a feature, but I'm wondering whether anybody has a nice way of achieving this so that I can accurately determine when the user has clicked on a transparent area of an object.
I'm supposing that, as I'm using SVG images, the solution may have something to do with figuring out the cursor position in relation to the SVG paths and determining whether the user has clicked on a filled section of the SVG. I'm just a bit stuck working out the best way to tackle the problem, so I'm really open to any suggestions!
If anybody has any pointers it'd be much appreciated!
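One route worth sketching, with the caveat that these API details are from memory and should be checked against the fabric.js docs for your version: fabric has a perPixelTargetFind flag that makes object selection skip transparent pixels, plus a canvas-level isTargetTransparent helper you can call yourself. Roughly:
// Sketch (verify against your fabric.js version): let fabric ignore
// transparent pixels when deciding which object a click hits.
canvas.perPixelTargetFind = true;  // apply to every object on this canvas
canvas.targetFindTolerance = 4;    // optional: a few pixels of slack
// ...or enable it per object:
objectA.perPixelTargetFind = true;

// If you want the raw alpha test yourself, something like:
canvas.on('mouse:down', function (opt) {
  var pointer = canvas.getPointer(opt.e);
  if (opt.target && canvas.isTargetTransparent(opt.target, pointer.x, pointer.y)) {
    // the cursor is over a transparent area of the topmost object (Points X and Y)
  }
});
With perPixelTargetFind enabled, a click at Point X or Point Y should fall through Object A's transparent area and select Object B instead, which may already be the behaviour you're after.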
I have a super simple need for OCR.
My application allows creating an image from text. It's very simple: people choose a font face, bold or not, and a size.
So they get outputs like this, ignoring the border:
I wanted to create a very simple OCR to read these. I thought of this approach:
In the same way that I generate an image for the message, I could generate an image for each character. Then I go through and try to match each character image against the black occurrences in the canvas. Is this the right approach?
The method I use to draw an element to an image is the copy-paste example here: MDN :: Drawing DOM objects into a canvas
Ok, another couple of tries...
Another method that's simpler than OCR: use Steganography to embed the text message as part of the image itself. Here's a script that uses the alpha channel of an image to store text: http://www.peter-eigenschink.at/projects/steganographyjs/index.html
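As a toy illustration of the idea only (not of how that library works): the message's character codes can be written into the alpha bytes of the first row of pixels and read back later. This sketch assumes ASCII text that is shorter than the canvas width, and the ctx parameter is whatever context holds your generated image:
// Toy sketch: stash each character code in the alpha byte of one pixel.
function embedMessage(ctx, text) {
  var img = ctx.getImageData(0, 0, text.length, 1);
  for (var i = 0; i < text.length; i++) {
    img.data[i * 4 + 3] = text.charCodeAt(i); // overwrite alpha with the char code
  }
  ctx.putImageData(img, 0, 0);
}

// Read the hidden text back out (the length must be known or stored elsewhere).
function extractMessage(ctx, length) {
  var img = ctx.getImageData(0, 0, length, 1);
  var out = '';
  for (var i = 0; i < length; i++) {
    out += String.fromCharCode(img.data[i * 4 + 3]);
  }
  return out;
}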
You can try this "home brewed" OCR solution...but I have doubts about its effectiveness.
Use the clipping form of context.drawImage to draw just the message-text area of your image on the canvas.
Use context.getImageData to grab the pixel information.
Examine each vertical column starting from the left until you find an opaque pixel (this is the left side of the first letter).
Continue examining each vertical column until you find a column with all transparent pixels (this is the right side of the first letter).
Resize a second canvas to exactly contain the discovered letter and drawImage just the first letter to a second canvas.
Set globalCompositeOperation='destination-out' so that any new drawing will erase any existing drawings where the new & old overlap.
fillText the letter "A" on the second canvas.
Use context.getImageData to grab the pixel information on the second canvas.
Count the opaque pixels on the second canvas.
If the opaque pixel count is high, they you probably haven't matched the letter A, so repeat steps 5-9 with the letter B.
If the opaque pixel count is low, then you may have found the letter A.
If the opaque pixel count is medium-low, you may have found the letter A but the 2 A's are not quite aligned. Repeat steps 5-9 but offset the A in step#7 by 1 pixel horizontally or vertically. Continue offsetting the A in 1 pixel offsets and see if the opaque pixel count becomes low.
If step#12 doesn't produce a low pixel count, continue with the letter B,C,etc and repeat steps 5-9.
When you're done discovering the first letter, go back to step#1 and only draw the message-text with an offset that excludes the first letter.
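Here is a rough sketch of the comparison in steps 6-9. It assumes letterCanvas already holds the cropped unknown letter (steps 3-5) and that the font string matches whatever generated the original image:
// Erase a candidate glyph out of the unknown letter and count what survives.
function residueAfterErasing(letterCanvas, candidate, font) {
  var can = document.createElement('canvas');
  can.width = letterCanvas.width;
  can.height = letterCanvas.height;
  var ctx = can.getContext('2d');
  ctx.drawImage(letterCanvas, 0, 0);                // the unknown letter (step 5)
  ctx.globalCompositeOperation = 'destination-out'; // new drawing erases old (step 6)
  ctx.font = font;
  ctx.textBaseline = 'top';
  ctx.fillText(candidate, 0, 0);                    // stamp the candidate glyph (step 7)
  var data = ctx.getImageData(0, 0, can.width, can.height).data; // step 8
  var opaque = 0;
  for (var i = 3; i < data.length; i += 4) {        // count surviving alpha (step 9)
    if (data[i] > 0) { opaque++; }
  }
  return opaque; // low count suggests a good match, high count a poor one
}
You would call this once per candidate letter (and again with small offsets, per step 12) and keep the candidate with the lowest residue.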
OCR is always complex and often inaccurate.
I hate to wave you off a solution, but don't use OCR for this purpose.
A simple and effective alternative:
Put your message in the image's file name.
Solution found - GOCR.js - https://github.com/antimatter15/gocr.js/tree/d820e0651cf819e9649a837d83125724a2c1cc37
Download gocr.js.
Decide whether you want to run it from a Web Worker or on the main thread.
worker
In the worker put this code:
importScripts('gocr.js');
GOCR(aImgData);
where aImgData is the image data you get by loading an image, drawing it onto a canvas, and then sending that data to the Web Worker (see the main-thread method below).
mainthread
<script src="gocr.js">
<script>
var img = new Image()
img.onerror = function() {
console.error('failed')
}
img.onload = function() {
var can = document.createElementNS('http://www.w3.org/1999/xhtml', 'canvas');
can.width = img.width;
can.height = img.height;
var ctx = can.getContext('2d')
ctx.drawImage(img, 0, 0)
// to use this in a worker, do ctx.getImageData(0, 0, img.width, img.height), then transfer the image data to the WebWorker
var text = GOCR(can);
}
</script>
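For the Web Worker route, the wiring might look roughly like the sketch below. The file names are hypothetical, and the assumption that GOCR accepts the posted ImageData (as the worker section above implies) should be verified against the gocr.js docs:
// main thread (sketch): grab the pixels and hand them to the worker
var worker = new Worker('ocr-worker.js'); // hypothetical worker file name
var img = new Image();
img.onload = function () {
  var can = document.createElement('canvas');
  can.width = img.width;
  can.height = img.height;
  var ctx = can.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // ImageData is structured-cloneable, so it can be posted to a worker as-is
  worker.postMessage(ctx.getImageData(0, 0, can.width, can.height));
};
img.src = 'message.png'; // placeholder path
worker.onmessage = function (e) {
  console.log('recognized text:', e.data);
};

// ocr-worker.js (sketch)
importScripts('gocr.js');
onmessage = function (e) {
  postMessage(GOCR(e.data)); // assumes GOCR accepts the posted ImageData
};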
I want to know whether a point is in the "black" area of some image like the one below.
For the time being I created a large array like this (generated outside JavaScript):
area = [ [0,200], [0,201], [0,202], ..., [1,199], [1,200], ...]
to indicate which coordinates are black. Since this gets very memory-heavy for larger areas (I'm talking about image sizes of around 2000x2000 pixels), what kind of algorithm would you choose that is fast and not too memory-hungry for finding out whether a specific coordinate is inside the black area?
You can draw the image to a canvas with the same width and height as the image and then retrieve the pixel color from the canvas at the specific point (x, y).
Here is a thread on how to retrieve the pixel color:
Get pixel color from an image
This is how I retrieve the pixel color at the mouse position and build a color code ('#rrggbb'); valToHex is a small helper that converts a 0-255 value to a two-digit hex string:
var pixelData = canvas.getContext('2d').getImageData(event.offsetX, event.offsetY, 1, 1).data;
var hex= '#' + valToHex(pixelData[0]) + valToHex(pixelData[1]) + valToHex(pixelData[2]);
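To address the memory question more directly: you don't need the coordinate list at all. The sketch below draws the mask image once, keeps its pixel data (about 4 bytes per pixel), and answers each query with one array lookup; the file name and the "darker than 128" threshold are assumptions, and the image must be same-origin so getImageData is allowed:
// Sketch: load the black/white area image once and keep its pixel data.
var maskCanvas = document.createElement('canvas');
var maskCtx = maskCanvas.getContext('2d');
var maskData = null;

var maskImg = new Image();
maskImg.onload = function () {
  maskCanvas.width = maskImg.width;
  maskCanvas.height = maskImg.height;
  maskCtx.drawImage(maskImg, 0, 0);
  maskData = maskCtx.getImageData(0, 0, maskCanvas.width, maskCanvas.height).data;
};
maskImg.src = 'mask.png'; // placeholder path to the area image

// Answer "is this point black?" with a single indexed read.
function isBlack(x, y) {
  if (!maskData) return false;             // image not loaded yet
  var i = (y * maskCanvas.width + x) * 4;  // 4 bytes (r,g,b,a) per pixel
  return maskData[i + 3] > 0 &&            // not transparent
         maskData[i] < 128 && maskData[i + 1] < 128 && maskData[i + 2] < 128;
}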
Let me explain what I want to achieve.
I'm building something just for fun and as a learning experiment similar to
www.milliondollarhomepage.com
Except I want to use canvas and/or Fabric.js to achieve the same thing. How would I go about manipulating each pixel in a canvas?
Scenario:
I want to offer a 1000 pixels up for grabs.
User can choose an image and where it should go on the canvas
The image can be resized according to how many pixels the user wants, and those pixels are deducted from the overall remaining total.
Any help on this would be appreciated!
The method in the HTML5 canvas api for manipulating individual pixels is
context.getImageData(x,y,width,height);
For example:
var map = context.getImageData(0,0,canvas.width,canvas.height);
This returns an ImageData object whose data property is a massive array containing, in repeating order:
[red,green,blue,alpha,red,green,blue,alpha...]
Every 4 numbers represent the red, green, blue, and alpha channels for every single pixel on the chosen area, left-to-right, top-to-bottom.
Each of these values is an integer ranging from 0 to 255.
To loop through every pixel and drop its red and blue channels, thus turning the image green, for example:
//assume map variable from earlier
for(var i = 0; i < map.data.length; i+=4){
map.data[i] = 0; // drop red to 0
map.data[i+2] = 0; // drop blue to 0
}
context.putImageData(map,0,0);
Note that this only works when the page is served from a web server (not opened from the local file system) and when no images from other domains have "tainted" the canvas. If these requirements are not met, getImageData will throw a security error DOM exception.
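If the user-supplied images live on another domain, one way around the tainting issue is to request them with CORS, which only works if the image host sends an appropriate Access-Control-Allow-Origin header. A sketch, reusing the canvas and context from the example above and a placeholder URL:
// Sketch: load a cross-origin image without tainting the canvas.
var img = new Image();
img.crossOrigin = 'anonymous'; // ask for the image with CORS
img.onload = function () {
  context.drawImage(img, 0, 0);
  // getImageData is still allowed because the canvas is not tainted
  var map = context.getImageData(0, 0, canvas.width, canvas.height);
};
img.src = 'https://example.com/user-image.png'; // placeholder URL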