I am trying to detect the reading of an analogue meter. I am currently using the Amazon Rekognition service to extract readings from a meter photo in a react-native app. The process did not work very well, so as part of trying to fix it I implemented cropping functionality in the app so we send only the relevant part of the image to the service. Then I ran into another problem: the analogue separators interspersed between the meter's digits are read as ones.
[uncropped meter image]
[cropped image from the mobile app]
What I have tried: I created a simple server application to try to remove these lines before we send the image to Rekognition.
Converted the image to greyscale.
Applied a Gaussian blur to remove some of the noise.
Applied the [Canny algorithm](https://en.wikipedia.org/wiki/Canny_edge_detector) to detect the edges.
Using OpenCV for Node:

const { img } = req.params; // Mat
const grayWithGaussianBlur = img
  .cvtColor(cv.COLOR_BGR2GRAY)                               // greyscale
  .gaussianBlur(new cv.Size(5, 5), 0, 0, cv.BORDER_DEFAULT)  // denoise
  .canny(30, 150);                                           // edge detection
The result looks like this.
[result]
The output is as I expected. I have been trying to figure out how to remove the interspersed edges while leaving the clearly defined digit edges.
I filtered the contours, keeping only those that meet specific criteria, such as an area greater than a certain threshold:
const contours = grayWithGaussianBlur.copy().findContours(cv.RETR_TREE, cv.CHAIN_APPROX_NONE);
const viable = contours.filter(contour => {
  const { width, height } = contour.boundingRect();
  return width > 5 && width <= height; // example criteria
});

const newImage = new cv.Mat(grayWithGaussianBlur.rows, grayWithGaussianBlur.cols, 0);
newImage.drawContours(viable, new cv.Vec3(255, 255, 255), -1);
I can't get this to work.
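For what it's worth, here is a hedged sketch of what I'd expect the working drawing step to look like. Two assumptions to verify against your installed opencv4nodejs version: the mask must be explicitly zero-filled on creation (the fourth constructor argument), and drawContours takes a contour index between the contour list and the colour, with -1 meaning "draw them all":

// Zero-filled single-channel mask, same size as the edge image.
const mask = new cv.Mat(grayWithGaussianBlur.rows, grayWithGaussianBlur.cols, cv.CV_8UC1, 0);
// Assumption: the signature in your version is (contours, contourIdx, color, thickness);
// check the typings, since older releases order these arguments differently.
mask.drawContours(viable, -1, new cv.Vec3(255, 255, 255), -1);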
My understanding of image processing concepts is very vague and I am unsure whether this is a good way to fix the problem. I also don't know much about what I am doing :).
Sorry, I don't have enough reputation to embed the images directly.
Can anyone help or suggest a better approach to removing the lines? Thanks in advance.
I implemented a CNN that I use in a web application via TensorFlow.js.
I need to preprocess my webcam photos to be accepted by my CNN model, so I want to use OpenCV.js in my .js file, but I can't figure out how to import the library into the JavaScript file where I turn my canvasElement into a tensor using TensorFlow.js's tf.browser.fromPixels() function.
The tutorials I have seen use OpenCV.js directly inside a <script> tag in the .html file, whereas I would like to use it in my JavaScript file.
I would especially like to use the cv.cvtColor() method. If that's not possible, do you have another solution to convert my canvasElement to grayscale?
The script tag will import OpenCV into the webpage (be sure to load it before you load the code that needs it; order matters in HTML). You should then be able to access the OpenCV object, call its functions with your canvas data to do your preprocessing, then write the result back out and convert it to a tensor in TF.js land.
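If you would rather not touch the HTML, here is a minimal sketch of loading OpenCV.js from inside your own .js file. The CDN URL is an assumption to pin to whichever build you actually want; waiting on onRuntimeInitialized is the standard OpenCV.js pattern, since the WASM runtime initialises asynchronously after the script loads:

function loadOpenCV(onReady) {
  var script = document.createElement('script');
  // Assumption: this hosted build matches the version you want.
  script.src = 'https://docs.opencv.org/4.x/opencv.js';
  script.async = true;
  script.onload = function () {
    // Wait for the WASM runtime before calling any cv.* functions.
    cv['onRuntimeInitialized'] = onReady;
  };
  document.head.appendChild(script);
}

loadOpenCV(function () {
  // cv.cvtColor() etc. are safe to call from here.
});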
If you want to quickly convert a canvas to greyscale there are many ways to do it; for example, how you average the colours will affect the greyscale image you get out.
Here is one method: http://www.vapidspace.com/coding/2012/02/26/converting-images-to-grayscale-using-the-canvas/
Here is the code from that site in case it gets removed:
function grayscale (input, output) {
    //Get the context for the loaded image
    var inputContext = input.getContext("2d");
    //Get the image data
    var imageData = inputContext.getImageData(0, 0, input.width, input.height);
    //Get the CanvasPixelArray
    var data = imageData.data;

    //Length of all pixels in the image; each pixel is made up of 4 elements: Red, Green, Blue and Alpha
    var arraylength = input.width * input.height * 4;

    //Go through each pixel from bottom right to top left and alter it to its gray equivalent
    //Common formula for converting to grayscale:
    //gray = 0.3*R + 0.59*G + 0.11*B
    for (var i = arraylength - 1; i > 0; i -= 4) {
        //R = i-3, G = i-2 and B = i-1
        //Get our gray shade using the formula
        var gray = 0.3 * data[i-3] + 0.59 * data[i-2] + 0.11 * data[i-1];
        //Set our 3 RGB channels to the computed gray
        data[i-3] = gray;
        data[i-2] = gray;
        data[i-1] = gray;
    }

    //Get the output context
    var outputContext = output.getContext("2d");
    //Display the output image
    outputContext.putImageData(imageData, 0, 0);
}
Notice how they use a formula to calculate the gray value. Depending on your needs you may want to use different ratios in the RGB mix to get the grayscale image.
Personally I would strongly recommend using vanilla JS here, as it's very easy to do and you don't need to include OpenCV just to do grayscale; including that file is a massive overhead for such a task. If you are using some of the more advanced features of OpenCV too, then maybe that is a reason to use it.
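For completeness, here is a minimal sketch of wiring the grayscale() helper above into TF.js; inputCanvas and outputCanvas are hypothetical canvas elements, and tf.browser.fromPixels accepts an optional channel count. Since R, G and B are equal after the conversion, one channel is enough:

grayscale(inputCanvas, outputCanvas);
// Shape becomes [height, width, 1]; normalise if your model expects floats in [0, 1].
var tensor = tf.browser.fromPixels(outputCanvas, 1).toFloat().div(255);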
I am trying to filter Sentinel-2 images by percentage of cloud cover (say, 20%) and then perform some image arithmetic on the output.
I am trying to implement what is found in this gis.stackexchange thread (https://gis.stackexchange.com/questions/303344/filter-landsat-images-cloud-cover). Unfortunately, the ee.Algorithms.Landsat.simpleCloudScore function does not work with Sentinel-2 images, which are required for what I am doing.
My code thus far is below.
var myCollection = ee.ImageCollection('COPERNICUS/S2');
var dataset2 = ee.ImageCollection(
  myCollection.filterBounds(point)            // keep only images that contain the POI
    .filterDate('2015-06-23', '2019-04-25')   // filter by date range
);
var ds2_cloudiness = dataset2.map(function(image) {
  var cloud = ee.Algorithms.Landsat.simpleCloudScore(image).select('cloud');
  var cloudiness = cloud.reduceRegion({
    reducer: 'median'
  });
  return image.set(cloudiness);
});
var filteredCollection = ds2_cloudiness.filter(ee.Filter.lt('cloud', 20));
Map.addLayer(filteredCollection, {min: -0.2, max: 0.2}, 'test');
This outputs an error: Landsat.simpleCloudScore: Image is not a Landsat scene or is missing SENSOR_ID metadata. Any nudge in the right direction would be appreciated.
I think there is a simpler approach if you just want to filter using cloud cover percentage. You can do this by filtering based on the image metadata.
var myCollection = ee.ImageCollection('COPERNICUS/S2');
print(myCollection.first())
If you inspect the first image in the Sentinel-2 ImageCollection you can actually see its metadata (only for that image). Since you are working with a homogeneous and well maintained image collection, you can expect the other images to have similar properties. From here, you can do the following:
myCollection = myCollection.filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE',20));
print(myCollection.first());
This particular code will filter the image collection down to images with cloud cover less than or equal to 20%. You can verify this by checking the first image again, or by checking the size of the collection, which should have shrunk.
However, if you are looking for a separate algorithm to calculate cloud cover over an image, you'll probably have to write one for Sentinel-2 yourself; there is no built-in equivalent of simpleCloudScore for it yet.
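If you end up needing per-pixel cloud masking rather than whole-scene filtering, one common approach for Sentinel-2 is the QA60 bitmask band, where bit 10 flags opaque clouds and bit 11 flags cirrus. A sketch:

function maskS2clouds(image) {
  var qa = image.select('QA60');
  var cloudBitMask = 1 << 10;   // opaque clouds
  var cirrusBitMask = 1 << 11;  // cirrus
  // Keep pixels where neither cloud bit is set.
  var mask = qa.bitwiseAnd(cloudBitMask).eq(0)
      .and(qa.bitwiseAnd(cirrusBitMask).eq(0));
  return image.updateMask(mask);
}

var maskedCollection = myCollection.map(maskS2clouds);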
I'm displaying a live video stream in an HTML5 canvas, which works fine.
Now, what I need to do is check for any "motion" in the camera feed that is being displayed in the HTML5 canvas.
During my research, I found out that this can be done by comparing the previous frame with the current frame displayed in the canvas.
So I tried this code within a setInterval function:
var c = document.querySelector('.mycanv');
var ctx = c.getContext("2d");
var imageData = ctx.getImageData(0, 0, 200, 200);
var data = imageData.data.length;
console.log(data);
However, when I look in the console, the number that the data variable outputs is always the same, 160000 (200 × 200 pixels × 4 array elements per pixel), and it won't change even if there is movement in front of the camera.
I'm not entirely sure if I am on the right track.
Could someone please advise on this issue and point me in the right direction?
Thanks in advance.
Here is a minimal, simple example:
https://jsfiddle.net/2648xwgz/
Basically, the code draws a frame of the video in the canvas.
Now, what I need to do is to check if the previous frame/image in the canvas is the same as the current frame/image.
Second EDIT:
Okay, so I've taken all the advice in the comments on board and tried to come up with something that doesn't check every random pixel in the images, as that is heavy and not good practice...
So, I tried something like this:
https://jsfiddle.net/qpxjcv3a/6/
The above code runs fine in Firefox with a local video file, but on JSFiddle you will get a cross-origin error.
Anyway, the key point in the code above is using ctx.globalCompositeOperation = 'difference';, I guess.
And then doing a "score" calculation taken from here to detect changes.
However, when I run my code, I always get console.log('we have motion'); in the console, even when the video is paused and there are no new frames.
So I did console.log(imageScore); and it keeps increasing by 10000, even when the video is paused or ended! I'm not sure why that is, or whether this calculation is correct at all, but that is where I am at the moment.
Any pointers and help appreciated.
As Daniel said, you need to check the pixels; the length will be the same for all iterations.
You should look into image hashing algorithms. At every interval you can calculate a hash and store it in a global variable to compare against at the next interval. This also gives you the option to set a threshold, so minor changes do not trigger motion detection.
This page explains image hashing in more detail: https://jenssegers.com/61/perceptual-image-hashes
You can start by implementing average hash. It is quite simple. You reduce your canvas size to 8x8 pixels.
ctx.drawImage(video, 0, 0, video.width, video.height, 0, 0, 8, 8);
var imageData = ctx.getImageData(0, 0, 8, 8);
var data = imageData.data;
Then you iterate over the image data and calculate the brightness.
var brightnessdata = [];
for (var i = 0; i < data.length; i += 4) {
  brightnessdata.push((data[i] + data[i+1] + data[i+2]) / 3);
}
The rest is simply calculating the average brightness and comparing each pixel brightness to the average brightness to calculate the hash.
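To complete the picture, here is a sketch of that last step. It assumes the brightnessdata array built above and a previousHash variable kept between setInterval ticks; the motion threshold of 5 differing bits out of 64 is just a starting guess to tune:

// Average brightness of the 64 downscaled pixels.
var total = 0;
for (var j = 0; j < brightnessdata.length; j++) total += brightnessdata[j];
var avg = total / brightnessdata.length;

// Hash: 1 where a pixel is brighter than average, 0 otherwise.
var hash = brightnessdata.map(function (b) { return b > avg ? 1 : 0; });

// Hamming distance: how many bits differ between two hashes.
function hammingDistance(a, b) {
  var d = 0;
  for (var k = 0; k < a.length; k++) { if (a[k] !== b[k]) d++; }
  return d;
}

if (previousHash && hammingDistance(hash, previousHash) > 5) {
  console.log('we have motion');
}
previousHash = hash;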
I have some image manipulation in Google Earth Engine, for example:
// Load a cloudy Landsat scene and display it.
var cloudy_scene = ee.Image('LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00');
Map.centerObject(cloudy_scene);
Map.addLayer(cloudy_scene, {bands: ['B4', 'B3', 'B2'], max: 0.4}, 'TOA', false);
// Add a cloud score band. It is automatically called 'cloud'.
var scored = ee.Algorithms.Landsat.simpleCloudScore(cloudy_scene);
// Create a mask from the cloud score and combine it with the image mask.
var mask = scored.select(['cloud']).lte(20);
// Apply the mask to the image.
var masked = cloudy_scene.updateMask(mask);
And now I want to export the result (masked) to Google Drive using the Export.image.toDrive method, but I don't know how to specify the region parameter so that it matches the original image LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00.
Please help me construct this region.
I think this is what you're looking for:
Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image',
  region: masked.geometry(),
  scale: mask.projection().nominalScale().getInfo()
});
In this case I'm using the image's footprint (with image.geometry()) to define my export region.
Note that I'm using mask.projection().nominalScale().getInfo() to derive the scale (resolution) of your export. This makes sure I'm using the native resolution of the image (in this case 30 m). You need to call getInfo() to actually retrieve the integer from the server.
You could also just specify 30 or any other desired resolution in meters.
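For example, a variant that skips the getInfo() round trip by passing the resolution directly; the 30 m value assumes your band is at Landsat's native scale:

Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image',
  region: masked.geometry().bounds(), // bounding rectangle of the footprint
  scale: 30                           // native Landsat 8 resolution in metres
});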
HTH
Edit:
Just a visual aid to what I've written in the comment below:
Three images:
Top left corner of the original LS image (downloaded from EarthExplorer); red indicates NoData.
LS image from GEE on top of the original image (the GEE image has reddish pixels). You can clearly see that the NoData part of the original image is missing in the GEE version. The thing I would be concerned about is that the pixels don't line up nicely.
The top right corner of both images: here you can see how far apart the two images are.
Following up from this question, Detecting mouse coordinates with precision, I have learnt quite a bit in the past few days. Here is what I picked as the best learning resources on this topic:
1. http://gamedev.tutsplus.com/tutorials/implementation/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space/
2. http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/quadtrees-r1303
3. http://jsfiddle.net/2dchA/2/
The code in (3) works in JSFiddle but breaks at this section in my testing environment (VS2012):
var myTree = new Quadtree({
  x: 0,
  y: 0,
  width: 400,
  height: 300
});
with the message Quadtree is undefined in IE. FF & Chrome just gloss over it and display an empty page. I couldn't sort it out. Question 1: Can someone help out with that?
My main question:
I have a region (parcels of land, like a map) with about 1500 parcels drawn in HTML5 canvas, not jpg or png images. It takes a lot of lines of code to draw, but the rendering is great, so I am keeping it that way. I intend to have a mouseover event tell me which parcel I am standing on when the mouse stops. As you will see in the question referred to above, my previous attempts were not impressive. Based on the learning I have been doing, and thanks to Ken J's answer/comments, I would like to go with this new approach of slicing up my canvas into, say, 15 quads of 100 objects each. However, I would like some guidance before I take another wild dive the wrong way.
Question 2: Should I slice it up at creation, or should the slicing happen when the mouse is over a region, i.e. trail the mouse? The latter sounds better to me, but I think I could do with some advice and, if possible, some starter code. The quadtree concept is completely new to me. Thanks.
Can't help with question 1.
You should definitely build the tree as early as possible, given that the objective is to get the page to respond as quickly as possible once the user clicks somewhere.
Keep the tree for as long as the user interacts with the 2D area. Updating a quadtree shouldn't be too hard, so even if the area changes contents, you should be able to reuse the existing tree (just update it).
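As a sketch of that lifecycle, assuming the quadtree implementation from the fiddle (which, as far as I can tell, exposes insert() and retrieve() taking bounds objects; verify against the implementation you actually use):

// Build once, up front.
var tree = new Quadtree({ x: 0, y: 0, width: 400, height: 300 });
for (var i = 0; i < parcels.length; i++) {
  tree.insert(parcels[i]); // each parcel carries { x, y, width, height }
}

// On interaction, query a 1x1 box around the cursor
// (mouseX/mouseY are wherever your mouse coordinates come from).
var candidates = tree.retrieve({ x: mouseX, y: mouseY, width: 1, height: 1 });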
Given that your draw area is well known, I see no advantage of a quadtree over a spatial hash function. This function gives you an integer for any (x, y) point:
var blocWidth = 20;
var blocHeight = 20;
// worldWidth is the total width of your draw area.
var blocsPerLine = (0 | (worldWidth / blocWidth)) + 1;

function hashPoint(x, y) {
  return (0 | (x / blocWidth)) + blocsPerLine * (0 | (y / blocHeight));
}
Once you have built that, hash all your parcels into an array:
var parcelHash = [];

function addHash(i, p) {
  if (!parcelHash[i]) { parcelHash[i] = [p]; return; }
  if (parcelHash[i].indexOf(p) != -1) return;
  parcelHash[i].push(p);
}

function hashParcel(p) {
  var thisHash = hashPoint(p.x, p.y);                  // upper left
  addHash(thisHash, p);
  thisHash = hashPoint(p.x + p.width, p.y);            // upper right
  addHash(thisHash, p);
  thisHash = hashPoint(p.x, p.y + p.height);           // lower left
  addHash(thisHash, p);
  thisHash = hashPoint(p.x + p.width, p.y + p.height); // lower right
  addHash(thisHash, p);
}

for (var i = 0; i < allParcels.length; i++) { hashParcel(allParcels[i]); }
Now, if you have a mouse position, you can retrieve all the parcels in the same block with:
function getParcels(x, y) {
  var thisHash = hashPoint(x, y);
  return parcelHash[thisHash];
}
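A usage sketch, assuming the hash built above and a hypothetical canvas with id "map":

var canvas = document.getElementById('map');
canvas.addEventListener('mousemove', function (e) {
  // Convert page coordinates to canvas-local coordinates.
  var rect = canvas.getBoundingClientRect();
  var candidates = getParcels(e.clientX - rect.left, e.clientY - rect.top);
  if (candidates) {
    // candidates is the short list to test precisely, e.g. point-in-polygon.
  }
});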
I'll just give you a few tips in addition to what others have said.
... have a mouseover event tell me which parcel I am standing on ...
From your other messages I conclude that parcels will have irregular shapes. Quadtrees in general work with rectangles, so you'd have to calculate the bounding rectangle around the shape of each parcel and insert that rectangle into the quadtree. Then, when you want to determine whether the mouse is over a parcel, you query the quadtree, which gives you a set of parcels that might be under the mouse; you then have to do a more precise check on your own to see if it indeed is.
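A standard ray-casting test works for that precise check; this sketch assumes each parcel's shape is available as an array of {x, y} vertices:

function pointInPolygon(px, py, vertices) {
  var inside = false;
  for (var i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {
    var xi = vertices[i].x, yi = vertices[i].y;
    var xj = vertices[j].x, yj = vertices[j].y;
    // Does a horizontal ray from (px, py) cross the edge (j, i)?
    var crosses = ((yi > py) !== (yj > py)) &&
        (px < (xj - xi) * (py - yi) / (yj - yi) + xi);
    if (crosses) inside = !inside;
  }
  return inside;
}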
... when the mouse stops.
From your other questions I saw that you try to detect when the mouse has "stopped". Maybe you should look at it this way: the mouse cursor is never moving, it teleports around the screen from the previous point to the next. It's always stopped, never moving. This might seem a bit philosophical, but it'll keep your code simpler. You should definitely be able to achieve what you intend without any setTimeout checks.
... slicing up my canvas into say 15 quads of 100 objects each.
... Should I slice it up at creation or should the slicing happen when the mouse is over a region
You won't (and can't) do the slicing yourself; the quadtree implementation does that automatically (that's its purpose) when you insert or remove items (note that moving an item is actually removing and then re-inserting it).
I didn't look into the implementation of quadtree that you're using, but here are two MX-CIF quadtree implementations in case that one doesn't work out for you:
https://github.com/pdehn/jsQuad
https://github.com/bjornharrtell/jsts/tree/master/src/jsts/index/quadtree
The problem in question 1 probably happens because the jsfiddle (http) page is trying to access quadtree.js, which is on https.