While trying to detect the straight lines in an image, I'm unable to get the Hough transform to work. I first converted the image to grayscale, but I still can't find the straight lines.
Can you help me turn the detected contours into clean straight lines/rectangles, so that I can then easily pick out the horizontal lines (I will do that later) in the image?
import math
import cv2

def hou():
    img = cv2.imread('image5GR.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    # minLineLength and maxLineGap must be keyword arguments: passed
    # positionally they land in the unrelated `lines` output parameter.
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 100,
                            minLineLength=20, maxLineGap=10)
    # HoughLinesP returns an (N, 1, 4) array; iterating over lines[0]
    # draws only the first segment, hence the single green line.
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line[0]
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    while True:
        cv2.imshow('houghlines', img)
        if cv2.waitKey(20) & 0xFF == 27:
            break
    cv2.destroyAllWindows()

hou()
Finally, the image I am getting isn't right (it only shows a small green line at the bottom),
whereas I am expecting it to look like this:
Thanks
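Since the stated next step is picking out the horizontal lines, here is a hedged sketch of that filtering step in plain Python. It assumes the segments have already been unwrapped from the (N, 1, 4) array that cv2.HoughLinesP returns, and the 5-degree tolerance is an arbitrary choice to tune:

```python
import math

def horizontal_segments(segments, max_angle_deg=5.0):
    """Keep only segments within max_angle_deg of horizontal.

    `segments` is a list of (x1, y1, x2, y2) tuples.
    """
    keep = []
    for x1, y1, x2, y2 in segments:
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        # atan2 maps leftward-pointing segments near 180 degrees;
        # fold those back towards 0 before comparing.
        if min(angle, 180.0 - angle) <= max_angle_deg:
            keep.append((x1, y1, x2, y2))
    return keep
```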
This live working demo shows an OCR reader. However, this Tesseract OCR reader only reads horizontal text properly, not vertical text. My solution for reading all the numbers (12, 32, 98, 36) would be to rotate the image by 90 degrees at a time and read and store any horizontal text found at each step. For example, by default it would read the horizontal numbers (12, 32); after one 90-degree rotation it would read (36); after another it would read nothing; and the final 90-degree rotation would read (98). Is this crude approach the one I must take, and can anyone show a CodePen demo which can read all the text and display it? Any help is greatly appreciated :)
const worker = Tesseract.create();
const defaultImage = 'https://i.ibb.co/jLhP6d2/all-deides-num.jpg';
const ocr = document.querySelector('#ocr');
const input = ocr.querySelector('#ocr__input');
const img = ocr.querySelector('#ocr__img');
const output = ocr.querySelector('#ocr__output');
const form = ocr.querySelector('#ocr__form');
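The rotate-and-merge idea described above can be sketched independently of Tesseract. Here is a minimal Python version, where `ocr` is assumed to be any callable that returns the tokens it can read at a given orientation (in practice that call would be tesseract.js or pytesseract; the helper names are mine, not from any library):

```python
def rot90(grid):
    """Rotate a list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def read_all_orientations(img, ocr):
    """OCR the image at 0/90/180/270 degrees and merge the results,
    dropping tokens already seen at an earlier orientation."""
    found = []
    for _ in range(4):
        for token in ocr(img):
            if token not in found:
                found.append(token)
        img = rot90(img)
    return found
```

The dedup step matters because text readable at 0 degrees is often read again (upside down or mirrored) at 180 degrees.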
I am trying to detect the reading of an analogue meter. I am currently using the Amazon Rekognition service to extract readings from a meter in a react-native app. The process did not work very well, so as part of trying to fix it I implemented cropping functionality in the app, so that we send only the relevant part of the image to the service. Then I ran into another problem: the analogue separators on the meter are interspersed with the digits in such a way that they are read as ones.
uncropped meter image
cropped image from the mobile app
What I have tried: I created a simple server application to try to remove these lines before we send the image to Rekognition.
Converted the image to greyscale.
Applied a Gaussian blur to remove some of the noise.
Applied the [Canny algorithm](https://en.wikipedia.org/wiki/Canny_edge_detector) to detect the edges.
Using OpenCV for Node:
const { img } = req.params; // Mat
const grayWithGaussianBlur = img
  .cvtColor(cv.COLOR_BGR2GRAY)
  .gaussianBlur(new cv.Size(5, 5), 0, 0, cv.BORDER_DEFAULT)
  .canny(30, 150);
The result looks like this:
result
The output is as I expect, but I have been trying to figure out how to remove the interspersed edges while leaving the clearly defined ones.
I filtered the contours, keeping only those that meet specific criteria, such as an area greater than a certain threshold:
const contours = grayWithGaussianBlur.copy().findContours(cv.RETR_TREE, cv.CHAIN_APPROX_NONE);
const viable = contours.filter(contour => {
  const { width, height } = contour.boundingRect();
  return width > 5 && width <= height; // example criteria
});
const newImage = new cv.Mat(grayWithGaussianBlur.rows, grayWithGaussianBlur.cols, 0);
newImage.drawContours(viable, new cv.Vec3(255, 255, 255), -1);
I can't get this to work.
My understanding of image-processing concepts is very vague, and I am unsure whether this is a good way to fix the problem. I also don't know much about what I am doing :).
Sorry, I don't have enough reputation to embed the images directly.
Can anyone help or suggest a better approach to removing the lines? Thanks in advance.
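One alternative worth trying, assuming the separators are markedly taller than the digit strokes: a morphological opening with a tall, 1-pixel-wide kernel extracts only the long vertical strokes, which can then be subtracted from the edge image. In OpenCV this is `morphologyEx` with a vertical structuring element; as a language-neutral illustration, here is a tiny pure-Python version of that opening on a binary image (the kernel height of 5 is an assumption you would tune):

```python
def erode_vertical(img, k):
    """Binary erosion with a k-tall, 1-wide kernel: a pixel survives
    only if every pixel in the column window centred on it is set."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[y + d][x] if 0 <= y + d < h else 0
                      for d in range(-r, r + 1)]
            out[y][x] = 1 if all(window) else 0
    return out

def dilate_vertical(img, k):
    """Binary dilation with the same kernel: a pixel is set if any
    pixel in the column window is set."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[y + d][x] if 0 <= y + d < h else 0
                      for d in range(-r, r + 1)]
            out[y][x] = 1 if any(window) else 0
    return out

def open_vertical(img, k):
    """Opening = erosion then dilation: keeps only strokes at least
    k pixels tall, i.e. the long separators to subtract away."""
    return dilate_vertical(erode_vertical(img, k), k)
```

Subtracting `open_vertical(edges, k)` from the edge image would leave the shorter digit strokes untouched while removing the full-height separator lines.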
I do some processing in Google Earth Engine, for example:
// Load a cloudy Landsat scene and display it.
var cloudy_scene = ee.Image('LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00');
Map.centerObject(cloudy_scene);
Map.addLayer(cloudy_scene, {bands: ['B4', 'B3', 'B2'], max: 0.4}, 'TOA', false);
// Add a cloud score band. It is automatically called 'cloud'.
var scored = ee.Algorithms.Landsat.simpleCloudScore(cloudy_scene);
// Create a mask from the cloud score and combine it with the image mask.
var mask = scored.select(['cloud']).lte(20);
// Apply the mask to the image.
var masked = cloudy_scene.updateMask(mask);
Now I want to export the result (masked) to Google Drive using the Export.image.toDrive method, but I don't know how to specify the region parameter so that it matches the footprint of the original image LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00.
Please help me construct this region.
I think this is what you're looking for:
Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image',
  region: masked.geometry(),
  scale: mask.projection().nominalScale().getInfo()
})
In this case I'm using the image's footprint (with masked.geometry()) to define my export region.
Note that I'm using mask.projection().nominalScale().getInfo() to derive the scale (resolution) of your export. This makes sure I'm using the native resolution of the image (in this case 30 m). You need to call getInfo() to actually retrieve the number from the server.
You could also just specify 30, or any other desired resolution in meters.
HTH
Edit:
Just a visual aid to what I've written in the comment below:
Three images:
Top-left corner of the original LS image (downloaded from EarthExplorer); red indicates NoData.
The LS image from GEE on top of the original image (the GEE image has reddish pixels); you can clearly see that the NoData part of the original image is missing in the GEE version. The thing I would be concerned about is that the pixels don't line up nicely.
The top-right corner of both images: here you can see how far apart the two images are.
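On the pixel-alignment point: rather than scale, Export.image.toDrive also accepts crs and crsTransform, which pin the exported grid to the source image's own grid. An untested sketch in the same Code Editor JavaScript (band choice and description are assumptions):

```javascript
// Take the projection of the band being exported (getInfo pulls it client-side).
var proj = cloudy_scene.select('B3').projection().getInfo();

Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image_Aligned',
  region: masked.geometry(),
  crs: proj.crs,
  crsTransform: proj.transform
});
```

This should make the exported pixels line up with the original Landsat pixels instead of being resampled onto a new grid.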
I am plotting two lines on a graph in Matlab and converting it to plot.ly using the Matlab library. When I use the 'strip' = false JSON property, it preserves the Matlab layout. However, it removes the nice feature whereby you get the data for all lines when you hover over any one of them: with 'strip' = false, you only get data pertaining to the line you hover over.
Does anyone know how to use 'strip' = false and yet retain all the hover-overs?
Sample code in Matlab:
X = linspace(0,2*pi,50)';
Y = [cos(X), 0.5*sin(X)];
figure
plot(X,Y)
Then generate two plot.ly plots:
fig2plotly(gcf, 'strip', 0);
fig2plotly(gcf, 'strip', 1);
These can be respectively found at:
https://plot.ly/~alexdp/0
https://plot.ly/~alexdp/2
Note the difference in the hover over behaviour.
When you convert a Matlab figure to a Plotly figure with strip = false, the hovermode layout attribute is set to 'closest' by default, hence it only shows data pertaining to the nearest curve on hovering. To override this behaviour:
X = linspace(0,2*pi,50)';
Y = [cos(X), 0.5*sin(X)];
figure
plot(X,Y)
% Convert the chart..
plotly_fig = fig2plotly(gcf, 'strip', 0)
% Set hovermode to blank (basically disable the attribute)
plotly_fig.layout.hovermode=''
% Send the updated figure to plotly:
resp = plotly(plotly_fig)
url = resp.url
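For reference, the override is just a layout attribute in the figure JSON that gets sent to Plotly. Here is a small Python sketch that builds an equivalent figure payload by hand; the trace structure is an assumption mirroring what fig2plotly emits, and only layout.hovermode matters for this question:

```python
import json
import math

# Reproduce the two traces from the Matlab sample.
x = [2 * math.pi * i / 49 for i in range(50)]
fig = {
    "data": [
        {"x": x, "y": [math.cos(v) for v in x], "name": "cos(x)"},
        {"x": x, "y": [0.5 * math.sin(v) for v in x], "name": "0.5*sin(x)"},
    ],
    # An empty hovermode disables the 'closest' default, so hovering
    # compares data across all traces again.
    "layout": {"hovermode": ""},
}

payload = json.dumps(fig)  # what would be sent to the Plotly API
```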
I am working on a plugin to allow "natural looking" signatures to be drawn using mouse or touch. When confirmed by the user, the result will be a stored SVG that can then be displayed in place of the "Click to sign" button.
The attached JSFiddle http://jsfiddle.net/TrueBlueAussie/67haj4nt/3/ shows a testbed for what I am trying to do. The generated SVG image should look close to the original canvas paths.
The first div contains a canvas, in which I draw some multiple-segment lines (i.e. paths). Using quadraticCurveTo, with the segment midpoints as end points and the original points as control points, I draw the lines with smooth curves. This works just fine.
The key part of the curved line drawing is:
$.each(lines, function () {
    if (this.length > 0) {
        var lastPoint = this[0];
        ctx.moveTo(lastPoint[0], lastPoint[1]);
        for (var i = 1; i < this.length; i++) {
            var point = this[i];
            var midPoint = [(lastPoint[0] + point[0]) / 2, (lastPoint[1] + point[1]) / 2];
            ctx.quadraticCurveTo(lastPoint[0], lastPoint[1], midPoint[0], midPoint[1]);
            lastPoint = point;
        }
        // Draw the last line straight
        ctx.lineTo(lastPoint[0], lastPoint[1]);
    }
});
I have tried multiple options for generating SVG with the same output, but I am stumped on how to convert the same sets of points to equivalent curved lines. Quadratic Béziers require "proper" control points, but I would prefer to use the far simpler midpoints if possible.
Any ideas? Is this possible, or will I have to convert both to use Béziers with calculated control points? Is there a simple way to calculate control points that will do the same job?
jQuery or raw JavaScript solutions are fine, but please demonstrate in the JSFiddle provided :)
It's just a bug in your code: you are not updating lastPoint in your SVG version.
http://jsfiddle.net/67haj4nt/4/
And if you update the SVG version to match the canvas version, you get identical curves.
http://jsfiddle.net/67haj4nt/5/
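The midpoint construction itself translates directly to an SVG path string: each canvas quadraticCurveTo(last, mid) becomes a `Q last mid` command in the `d` attribute. As a language-neutral illustration of the fixed logic (including the lastPoint update the fiddle was missing), here is a small Python helper; the function name is mine:

```python
def smooth_path(points):
    """Build an SVG path 'd' string from a polyline, using each previous
    point as the quadratic control point and the segment midpoint as the
    curve's end point, mirroring the canvas drawing code."""
    if len(points) < 2:
        return ""
    d = f"M {points[0][0]} {points[0][1]}"
    last = points[0]
    for p in points[1:]:
        mid = ((last[0] + p[0]) / 2, (last[1] + p[1]) / 2)
        d += f" Q {last[0]} {last[1]} {mid[0]} {mid[1]}"
        last = p  # the step the buggy SVG version forgot
    # Draw the last segment straight, as the canvas version does
    d += f" L {last[0]} {last[1]}"
    return d
```

Setting the result as the `d` attribute of an SVG `<path>` element should reproduce the canvas strokes.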