Read Image Text (Vertical & Horizontal) Using OCR Tesseract - JavaScript

This live working demo shows an OCR reader. However, this Tesseract OCR reader only reads horizontal text properly, not vertical text. Thus, my solution to read all the numbers (12, 32, 98, 36) would be to rotate the image by 90 degrees repeatedly and store all the horizontal text read at each step. For example, by default it would read the horizontal numbers (12, 32); after rotating the image by 90 degrees it would read (36); after another 90 degrees it would read nothing; and the final 90-degree rotation would read (98). Is this crude approach the one I must take, and can anyone show a CodePen demo which can read all the text and display it? Any help is greatly appreciated :)
// Tesseract.js worker (older Tesseract.create() API) plus the demo's DOM elements.
const worker = Tesseract.create();
const defaultImage = 'https://i.ibb.co/jLhP6d2/all-deides-num.jpg';
const ocr = document.querySelector('#ocr');
const input = ocr.querySelector('#ocr__input');   // file input
const img = ocr.querySelector('#ocr__img');       // preview image
const output = ocr.querySelector('#ocr__output'); // recognized text goes here
const form = ocr.querySelector('#ocr__form');
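The rotate-and-recognize idea from the question can be expressed directly with a canvas. Below is a minimal sketch, not the demo's actual code: it assumes the Tesseract.js Tesseract.recognize() API (v2+, which differs from the Tesseract.create() worker style above), and rotateToCanvas is a hypothetical helper introduced only for illustration.

// Minimal sketch: rotate the image on a canvas, OCR each orientation, merge results.
function rotateToCanvas(img, degrees) {          // hypothetical helper
  const canvas = document.createElement('canvas');
  const swap = degrees % 180 !== 0;
  canvas.width = swap ? img.naturalHeight : img.naturalWidth;
  canvas.height = swap ? img.naturalWidth : img.naturalHeight;
  const ctx = canvas.getContext('2d');
  ctx.translate(canvas.width / 2, canvas.height / 2);
  ctx.rotate(degrees * Math.PI / 180);
  ctx.drawImage(img, -img.naturalWidth / 2, -img.naturalHeight / 2);
  return canvas;
}

async function readAllOrientations(img) {
  const pieces = [];
  for (const degrees of [0, 90, 180, 270]) {
    const canvas = rotateToCanvas(img, degrees);
    // Tesseract.recognize accepts a canvas as input in recent Tesseract.js versions.
    const { data } = await Tesseract.recognize(canvas, 'eng');
    if (data.text.trim()) pieces.push(data.text.trim());
  }
  return pieces.join('\n');
}

The text recognized at each orientation is simply concatenated; any de-duplication or position tracking would be up to the caller.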

Related

How to draw tensorflow js results to canvas? Problems with tf.browser.toPixels - shows old results only

I am new to JS. I have an ML model which does image segmentation. I provide the model with one image, for which it predicts an output. I have verified that I get the correct output array.
But when I try to draw this output to a canvas using tf.browser.toPixels, I always get the previous image.
Meaning, if I provide input as:
Input: img1 -> img2 -> img3...
Output: <random shape> -> img1_prediction -> img2_prediction...
The random shape is always the same, no matter what the input is.
I've checked, and this is not the output of my model.
Here's the code for this:
async function detect_custom(imgTag, canvas) {
  let tensor = tf.browser.fromPixels(imgTag).toFloat(); // imgTag is the img element in the HTML (input image)
  tensor = tensor.expandDims(0);
  const res = model.predict(tensor).squeeze();
  tf.dispose(tensor);
  // Everything is working correctly up to this point.
  await tf.browser.toPixels(res, canvas); // Problematic (I think)
}
I found the await tf.browser.toPixels(res, canvas) call in this answer. Can anyone help me figure out what I am doing wrong?
Regarding "I've checked, this is not the output of my model": what is the output of the model?
And are you sure it takes 0..255 as the input range? If the input is float32, the model is likely expecting values in the range 0..1, and it does seem like the output is blown up.
Try tensor = tensor.div(255.0) before inference to normalize to 0..1,
or tensor = tensor.div(127.5).sub(1) to normalize to -1..1.
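Putting that suggestion into the question's function, a minimal sketch could look like the following (assuming the model was trained on inputs in 0..1 and produces outputs in 0..1; model, imgTag and canvas are as in the question):

async function detect_custom(imgTag, canvas) {
  // Normalize the uint8 pixel values (0..255) to floats in 0..1 before inference.
  let tensor = tf.browser.fromPixels(imgTag).toFloat().div(255.0);
  tensor = tensor.expandDims(0);

  const res = model.predict(tensor).squeeze();
  tf.dispose(tensor);

  // tf.browser.toPixels expects float tensors in 0..1 (or int32 in 0..255);
  // clip to that range to be safe before drawing.
  const drawable = res.clipByValue(0, 1);
  await tf.browser.toPixels(drawable, canvas);
  tf.dispose([res, drawable]);
}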

OpenCV remove unwanted part of an image

I am trying to detect the reading of an analogue meter. I am currently using the Amazon Rekognition service to extract readings from a meter in a React Native app. The process did not work very well, so as part of trying to fix this I implemented cropping functionality in the app so we send only the relevant part of the image to the service. I then ran into another problem: the analogue separators on the meter are interspersed with the digits such that they are read as ones.
[uncropped meter image]
[cropped image from the mobile app]
What I have tried: I created a simple server application to try to remove these lines before we send the image to Rekognition. I:
Converted the image to greyscale.
Applied a Gaussian blur to remove some of the noise.
Applied the Canny algorithm (https://en.wikipedia.org/wiki/Canny_edge_detector) to detect the edges.
Using OpenCV for Node:
const { img } = req.params; // Mat
const grayWithGaussianBlur = img
  .cvtColor(cv.COLOR_BGR2GRAY)
  .gaussianBlur(new cv.Size(5, 5), 0, 0, cv.BORDER_DEFAULT)
  .canny(30, 150);
The result looks like this:
[result image]
The output is as I expect. I have been trying to figure out how to remove the interspersed edges while leaving the clearly defined edges.
I filtered the contours, keeping only those that meet specific criteria, such as an area greater than a certain threshold:
const contours = grayWithGaussianBlur.copy().findContours(cv.RETR_TREE, cv.CHAIN_APPROX_NONE);
const viable = contours.filter(contour => {
  const { width, height } = contour.boundingRect();
  return width > 5 && width <= height; // example criteria
});
const newImage = new cv.Mat(grayWithGaussianBlur.rows, grayWithGaussianBlur.cols, 0);
newImage.drawContours(viable, new cv.Vec3(255, 255, 255), -1);
I can't get this to work.
My understanding of image processing concepts is very vague, and I am unsure whether this is a good way to fix the problem. I also don't know much about what I am doing :).
Sorry, I don't have enough reputation to embed the images directly.
Can anyone help or suggest a better approach to removing the lines? Thanks in advance.
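No answer is recorded in this extract, but one direction worth trying (my own rough sketch, not from the thread; it assumes opencv4nodejs, a hypothetical input path, and a kernel height that would need tuning to the actual separator length) is to isolate the long vertical separator lines with a tall, narrow morphological opening and subtract them from the edge map:

const cv = require('opencv4nodejs');

// Same preprocessing as in the question.
const edges = cv.imread('cropped-meter.jpg') // hypothetical path
  .cvtColor(cv.COLOR_BGR2GRAY)
  .gaussianBlur(new cv.Size(5, 5), 0)
  .canny(30, 150);

// Opening with a 1x40 kernel keeps only edge structures that form an unbroken
// vertical run of at least ~40 pixels, i.e. the separator lines.
const verticalKernel = cv.getStructuringElement(cv.MORPH_RECT, new cv.Size(1, 40));
const separators = edges.erode(verticalKernel).dilate(verticalKernel);

// Subtract the isolated separators from the edge map, leaving the digit edges.
const cleaned = edges.sub(separators);
cv.imwrite('cleaned.png', cleaned);

If the digits themselves contain long vertical strokes (a "1", for example), the kernel height has to be larger than those strokes so that only the separators survive the opening.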

Finding absolute straight lines/rectangles for contours in OpenCV

While trying to detect the straight lines in an image, I'm unable to get the Hough transform of the image to work.
I have first converted the image to grayscale, but I am still unable to find the straight lines.
Thus, can you help me turn the detected contours into pure straight lines/rectangles, so that I can easily find the horizontal lines (I will do that later) in the image?
import math
import cv2

def hou():
    img = cv2.imread('image5GR.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)

    minLineLength = 20
    maxLineGap = 10
    # Pass the length/gap limits as keyword arguments; passed positionally they
    # land in the wrong parameters of HoughLinesP.
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 100,
                            minLineLength=minLineLength, maxLineGap=maxLineGap)

    # Iterate over every detected line, not just lines[0].
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

    while True:
        cv2.imshow('houghlines', img)
        if cv2.waitKey(20) & 0xFF == 27:  # Esc to quit
            break
    cv2.destroyAllWindows()

hou()
Finally, the image I am getting isn't appropriate (it only shows a small green line at the bottom), whereas I am expecting all of the straight lines in the image to be drawn.
Thanks

Google Earth Engine: Region of Landsat Image

I do some manipulation in Google Earth Engine, for example:
// Load a cloudy Landsat scene and display it.
var cloudy_scene = ee.Image('LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00');
Map.centerObject(cloudy_scene);
Map.addLayer(cloudy_scene, {bands: ['B4', 'B3', 'B2'], max: 0.4}, 'TOA', false);
// Add a cloud score band. It is automatically called 'cloud'.
var scored = ee.Algorithms.Landsat.simpleCloudScore(cloudy_scene);
// Create a mask from the cloud score and combine it with the image mask.
var mask = scored.select(['cloud']).lte(20);
// Apply the mask to the image.
var masked = cloudy_scene.updateMask(mask);
And now I want to export the result (masked) to Google Drive using the method Export.image.toDrive, but I don't know how to specify the region parameter so that it matches the footprint of the original image LANDSAT/LC8_L1T_TOA/LC80440342014269LGN00.
Please help me construct this region.
I think that's what you're looking for:
Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image',
  region: masked.geometry(),
  scale: mask.projection().nominalScale().getInfo()
})
In this case I'm using the image's footprint (with image.geometry()) to define my export region.
Note that I'm using mask.projection().nominalScale().getInfo() to derive the scale (resolution) of your export. This makes sure I'm using the native resolution of the image (in this case 30 m). You need to append getInfo() to actually retrieve the integer from the server.
You could also just specify 30 or any other desired resolution in meters.
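For example, a minimal variant with an explicit scale (the maxPixels value is my own assumption, added only in case the export exceeds the default pixel limit):

Export.image.toDrive({
  image: masked.select('B3'),
  description: 'Masked_Landsat_Image',
  region: masked.geometry(),
  scale: 30,         // native Landsat 8 resolution in meters
  maxPixels: 1e9     // assumption: raise if the export is rejected as too large
})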
HTH
Edit:
Just a visual aid to what I've written in the comment below:
3 Images:
Top left corner of the original LS image (downloaded from EarthExplorer) - red indicates NoData.
LS image from GEE on top of the original image (the GEE image has reddish pixels) - you can clearly see that there is still a NoData part of the original image which is missing in the GEE version. The thing I would be concerned about is that the pixels don't line up nicely.
The top right corner of both images: here you can see how far apart the two images are.

Save canvas to image using C# and jQuery

I'm trying to correctly save the part of an image which is highlighted with Jcrop as a circular image.
I have a canvas element which previews the selected area and how the image will look; please check the screenshot below:
I also have a hidden field which stores the value (example: "data:image/png;base64") that is displayed in the canvas.
I'm able to save an image from the hidden field value with this code:
if (hfImageData.Value != string.Empty)
{
    string value = hfImageData.Value;
    if (value.Contains("jpeg"))
    {
        value = value.Replace("data:image/jpeg;base64,", "");
    }
    else if (value.Contains("png"))
    {
        value = value.Replace("data:image/png;base64,", "");
    }

    string path = Server.MapPath("/cropimages/");
    string fileNameWitPath = path + DateTime.Now.ToString().Replace("/", "-").Replace(" ", "- ").Replace(":", "") + ".png";

    using (FileStream fs = new FileStream(fileNameWitPath, FileMode.Create))
    {
        using (BinaryWriter bw = new BinaryWriter(fs))
        {
            byte[] data = Convert.FromBase64String(value);
            bw.Write(data);
            bw.Close();
        }
    }
}
This is the end result of that code:
What I really want to save is the image in a circular format, as highlighted in the Jcrop selection, using jQuery/C#.
What do I need to modify in the existing code to make the image crop work as expected?
In general, computer images are always stored as rectangular blocks of data. A "non-rectangular" image is a rectangular image with a non-rectangular mask or opacity ("alpha") layer associated with it.
Per the Jcrop online docs, Jcrop doesn't do non-rectangular cropping:
Cropping Irregular Selections
If you actually want to crop a circle or an ellipse, you're on your own. Jcrop will provide the rectangular coordinates for these crops, and further processing can be done to extract the circle or ellipse from the image.
If you're aiming to do the image manipulation on the client, then you would need to be working in an image format that supports an alpha channel (probably 32 bit: 8 bits each for R, G, B and alpha). You would need to apply the mask to the alpha channel in a canvas element. I think alpha support is fairly recent stuff in HTML5, so browser support is probably patchy.
You would then need to communicate that back to the host in a file format that supports alpha. JPEG doesn't, PNG (in 32 bit per pixel format) does.
Alternatively, if your server-side code "knows" the shape of the selection crop mask, you can ship the full (rectangular) image back to the server and have your server-side code apply the correct mask shape, using something like GD in PHP.
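To make the client-side option concrete, here is a minimal sketch (my own illustration, not part of the original answer) that clips a circular region out of the selection on a canvas and exports it as a PNG with an alpha channel; x, y, w and h stand for the rectangular selection coordinates reported by Jcrop:

// Minimal sketch: crop a circular region out of img using the rectangle
// (x, y, w, h) reported by Jcrop, and return a PNG data URL with transparency.
function cropToCircle(img, x, y, w, h) {
  const size = Math.min(w, h);
  const canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;

  const ctx = canvas.getContext('2d');
  ctx.beginPath();
  ctx.arc(size / 2, size / 2, size / 2, 0, Math.PI * 2); // circular clip path
  ctx.clip();

  // Draw only the selected rectangle into the clipped canvas.
  ctx.drawImage(img, x, y, size, size, 0, 0, size, size);

  // PNG keeps the alpha channel; this string can go into the hidden field.
  return canvas.toDataURL('image/png');
}

The resulting data URL can be written into the hidden field exactly like the rectangular version, and the existing C# code will then persist a PNG whose corners are transparent.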
