tensorflow js RGB to Lab from input img - javascript

I have been searching and debugging but I cannot find anything that works for me. I'm building a web application that colorizes black and white images: I load an image through an input and run inference on it (currently with an image-to-image model).
I want to convert the image from RGB to Lab as a preprocessing step before it enters the network, because that is how I intend to train it. My code is as follows:
var myInput = document.getElementById('myFileInput');

function processPic() {
    if (myInput.files && myInput.files[0]) {
        var reader = new FileReader();
        reader.onload = function (e) {
            $('#prev_img_id').attr('src', e.target.result);
            // Initiate the JavaScript Image object.
            var image = new Image();
            // Set the Base64 string returned from FileReader as source.
            image.src = e.target.result;
            image.onload = function () {
                //alert(this.height)
                const webcamImage = tf.fromPixels(this);
                const batchedImage = webcamImage.expandDims(0);
                predict(batchedImage.toFloat().div(tf.scalar(127)).sub(tf.scalar(1)));
            }
        }
        reader.readAsDataURL(myInput.files[0]);
    }
}
myInput.addEventListener('change', processPic, false);

function predict(the_img) {
    // get predictions
    let pred = mobilenet.predict(the_img);
    // retrieve the highest probability class label
    let cls = pred.argMax().buffer().values[0];
    alert(IMAGENET_CLASSES[cls]);
}

I wrote some code to do this using TensorFlow.js operations. The code could be optimized further with matrix multiplications, but it works and should put you on the right path if this is still relevant:
RGB2LAB TFJS
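Not the linked code, but as a rough illustration, here is a minimal sketch of an RGB -> LAB conversion built from TensorFlow.js ops. It assumes a recent tfjs version whose chained ops accept plain numbers, an input tensor of shape [height, width, 3] with values in [0, 255], and the equations from easyrgb.com; the function name rgbToLab is my own.

function rgbToLab(rgb) {
    return tf.tidy(() => {
        // Normalise to [0, 1] and undo the sRGB companding.
        const srgb = rgb.div(255);
        const linear = tf.where(
            srgb.greater(0.04045),
            srgb.add(0.055).div(1.055).pow(2.4),
            srgb.div(12.92)
        );

        // Linear RGB -> XYZ, scaled by the D65 reference white.
        const [r, g, b] = tf.split(linear, 3, 2);
        const x = r.mul(0.4124).add(g.mul(0.3576)).add(b.mul(0.1805)).div(0.95047);
        const y = r.mul(0.2126).add(g.mul(0.7152)).add(b.mul(0.0722));
        const z = r.mul(0.0193).add(g.mul(0.1192)).add(b.mul(0.9505)).div(1.08883);

        // XYZ -> LAB non-linearity.
        const f = (t) => tf.where(
            t.greater(0.008856),
            t.pow(1 / 3),
            t.mul(7.787).add(16 / 116)
        );
        const fx = f(x);
        const fy = f(y);
        const fz = f(z);

        const l = fy.mul(116).sub(16);         // L in [0, 100]
        const a = fx.sub(fy).mul(500);
        const bChannel = fy.sub(fz).mul(200);
        return tf.concat([l, a, bChannel], 2); // shape [height, width, 3]
    });
}

Called as rgbToLab(tf.fromPixels(this).toFloat()) inside image.onload, this yields a LAB tensor that can be fed to the network (tf.fromPixels is the API in the tfjs version used in the question; newer versions use tf.browser.fromPixels).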

There are some resources around regarding the RGB to LAB conversion, e.g. http://www.easyrgb.com/en/math.php.
You could also give this JS implementation a try (https://github.com/antimatter15/rgb-lab, which actually uses the equations from the easyrgb website), calling its rgb2lab() function inside your image.onload.
To access the pixel data that rgb2lab() needs, you can have a look at this SO thread (How do I access/change pixels in a javascript image object?), i.e. using an intermediary canvas.
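For illustration, a minimal sketch of that approach, assuming the rgb2lab() helper from the repository above is loaded on the page; it is a variant of the asker's image.onload that reads pixels through a temporary canvas:

image.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = this.width;
    canvas.height = this.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(this, 0, 0);

    // data holds RGBA bytes; rgb2lab expects one [r, g, b] array per pixel.
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var labPixels = [];
    for (var i = 0; i < data.length; i += 4) {
        labPixels.push(rgb2lab([data[i], data[i + 1], data[i + 2]]));
    }
    // labPixels now holds one [L, a, b] triple per pixel, ready to feed the model.
};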

Related

I want to load images within a folder and get all of them as base64 in my JS for future use

I am facing an issue where I always get the last image in my image array, because of how the FileReader onloadend callback works.
How can I get base64 for all images in my folder?
<input id="file-input" multiple webkitdirectory type="file" />
var input = document.getElementById('file-input');
var file_names = "";
var entries_length = 0;
var entries_count = 0;
var image = new Array();
var obj = {};
var j = 0;

input.onchange = function (e) {
    var files = e.target.files; // FileList
    entries_length = files.length;
    console.log(files);
    for (var i = 0, f; f = files[i]; ++i) {
        console.log("i:" + i);
        entries_count = entries_count + 1;
        //console.debug(files[i].webkitRelativePath);
        if (files[i].type == "image/jpeg") {
            var string = files[i].webkitRelativePath;
            var name = string.split("/")[3]; //this is because my image is in the 3rd dir in the folder
            var reader = new FileReader();
            reader.onloadend = function () {
                obj.name = string.split("/")[3];
                obj.image = reader.result;
                image[j] = obj;
                j = j + 1;
            }
            reader.readAsDataURL(files[i]);
        }
    }
    console.log(image);
}
The issue is caused by the asynchronous loading of files. You iterate over the array and set the onloadend handler for the reader each time, then start loading by calling readAsDataURL.
One problem is that by the time your first image loads, it is possible the for loop has completed, and i is already at the last index of the array.
At this point, obtaining the path from files[i].webkitRelativePath will give you the last filename, and not the one you are expecting.
Check the example for readAsDataURL on MDN to see one possible solution - each load is performed in a separate function, which preserves its scope, along with file.name. Do not be put off by the construction they are using: [].forEach.call(files, readAndPreview). This is a way to map over the files, which are a FileList and not a regular array (so the list does not have a forEach method of its own).
So, it should be sufficient to wrap the loading logic in a function which takes the file object as a parameter:
var images = [];

function loadFile(f) {
    var reader = new FileReader();
    reader.onloadend = function () {
        images.push({
            name: f.name, // use whatever naming magic you prefer here
            image: reader.result
        });
    };
    reader.readAsDataURL(f);
}

for (var i = 0; i < files.length; i++) {
    loadFile(files[i]);
}
Each call of the function 'remembers' the file object it was called with, and prevents the filename from getting messed up. If you are interested, read up on closures.
This also has the nice effect of isolating your reader objects, because I have a sneaking suspicion that, although you create a new 'local' reader each iteration, javascript scoping rules are weird and readers could also be interfering with each other (what happens if one reader is loading, but in the same scope you create a new reader with the same variable name? Not sure).
Now, you do not know how long it would take for all images to be loaded, so if you want to take an action right after that, you would have to perform a check each time an onloadend gets called. This is the essence of asynchronous behavior.
As an aside, I should note that it is pointless to manually keep track of the last index of images, which is j. You should just use images.push({ name: "bla", image: "base64..." }). Keeping indices manually opens up possibilities for bugs.
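For illustration, one way to do the "check each time an onloadend gets called" mentioned above is to wrap each read in a Promise and let Promise.all fire once everything has loaded. This is a sketch, not part of the answer itself, and the readFileAsDataURL name is my own:

function readFileAsDataURL(f) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onload = function () {
            resolve({ name: f.name, image: reader.result });
        };
        reader.onerror = reject;
        reader.readAsDataURL(f);
    });
}

Promise.all(Array.prototype.map.call(files, readFileAsDataURL))
    .then(function (images) {
        console.log(images); // every image is loaded here, in the original order
    });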

Saving current work from multiple canvas elements

Here is my problem: I have created an image collage function in javascript. (I started off with some code from this post, btw: dragging and resizing an image on html5 canvas.)
I have 10 canvas elements stacked on top of each other, and all parameters, including the 2D context, image data, positions, etc. for each canvas are held in instances of the function 'collage'.
This is working fine; I can manipulate each canvas separately (drag, resize, add frames, etc). But now I want the user to be able to save the current work.
So I figure that maybe it would be possible to create a blob, that contains all the object instances, and then save the blob as a file on disk.
This is the function collage (I also push each instance to the array collage.instances, to be able to have numbered indexes)
function collage() {
    this.canvas_board = '';
    this.canvas = '';
    this.ctx = '';
    this.canvasOffset = '';
    this.offsetX = '';
    this.offsetY = '';
    this.startX = '';
    this.startY = '';
    this.imageX = '';
    this.imageY = '';
    this.mouseX = '';
    this.mouseY = '';
    this.imageWidth = '';
    this.imageHeight = '';
    this.imageRight = '';
    this.imageBottom = '';
    this.imgframe = '';
    this.frame = 'noframe';
    this.img = '';
    collage.instances.push(this);
}
collage.instances = [];
I tried with something like this:
var oMyBlob = new Blob(collage.instances, {type: 'multipart/form-data'});
But that doesn't work (only contains about 300 bits of data).
Anyone who can help? Or maybe suggest an alternative way to save the current collage work. It must of course be possible to open the blob and repopulate the object instances.
Or maybe I am making this a bit more complicated than it has to be... but I am stuck right now, so I would appreciate any hints.
You can extract each layer's image data to DataURLs and save the result as a json object.
Here's a quick demo: http://codepen.io/gunderson/pen/PqWZwW
The process literally takes each canvas and saves out its data for later import.
The use of jquery here is for convenience:
$(".save-button").click(function () {
    var imgData = JSON.stringify({
        layers: getLayerData()
    });
    save(imgData, "myfile.json");
});

function save(filecontents, filename) {
    try {
        var $a = $("<a>").attr({
            href: "data:application/json;," + filecontents,
            download: filename
        })[0].click();
        return filecontents;
    } catch (err) {
        console.error(err);
        return null;
    }
}

function getLayerData() {
    var imgData = [];
    $(".layer").each(function (i, el) {
        imgData.push(el.toDataURL("image/png"));
    });
    return imgData;
}
To restore, you can use a FileReader to read the contents of the JSON back into the browser, then create an <img> for each layer, set each img.src to the corresponding dataURL in your JSON, and in the images' onload handlers draw them back onto the canvases.
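A minimal sketch of that restore path, assuming the same .layer canvases as above and a hypothetical .file-input element for picking the saved JSON file:

$(".file-input").on("change", function () {
    var reader = new FileReader();
    reader.onload = function () {
        var layers = JSON.parse(reader.result).layers;
        layers.forEach(function (dataUrl, i) {
            var img = new Image();
            img.onload = function () {
                // Draw the restored layer back onto the matching canvas.
                $(".layer").eq(i)[0].getContext("2d").drawImage(img, 0, 0);
            };
            img.src = dataUrl;
        });
    };
    reader.readAsText(this.files[0]);
});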
Add a reference (src URL) for the image to the instance, then serialize the instance array as JSON and use f.ex. localStorage.
localStorage.setItem("currentwork", JSON.stringify(collage.instances));
Then to restore you would need to do:
var tmp = localStorage.getItem("currentwork");
collage.instances = tmp ? JSON.parse(tmp) : [];
You then need to iterate through the array and reload the images using proper onload handling. Finally re-render everything.
Can you store image data on the client? Yes, but it is not recommended. It takes a lot of space, and if there is too much you will not be able to save all the data, the user may refuse to allow more storage space, etc.
Keeping a link to the image on a server is a better approach for these things IMO. But if you disagree, look into IndexedDB (or WebSQL, although it is deprecated) for local storage whose available space can be expanded. localStorage can only hold between 2.5 and 5 MB, i.e. no image data, only strings. Each char takes two bytes, and data-URIs add 33% on top, so it will run empty pretty fast...
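A minimal sketch of the restore loop described above, assuming each instance was saved with a src URL as suggested; renderInstance() is a hypothetical stand-in for your own drawing code:

collage.instances.forEach(function (instance) {
    var img = new Image();
    img.onload = function () {
        instance.img = img;
        renderInstance(instance); // hypothetical: redraw this instance's canvas
    };
    img.src = instance.src; // the image reference saved with the instance
});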

Load file into IMAGE object using Phantom.js

I'm trying to load an image and put its data into an HTML Image element, but without success.
var fs = require("fs");
var content = fs.read('logo.png');
After reading the content of the file I have to convert it somehow to an Image, or just draw it to a canvas. I was trying to convert the binary data to a Base64 data URL with code I found on Stack Overflow.
function base64encode(binary) {
return btoa(unescape(encodeURIComponent(binary)));
}
var base64Data = 'data:image/png;base64,' +base64encode(content);
console.log(base64Data);
The returned Base64 is not a valid data URL. I tried a few more approaches but without success. Do you know the best (shortest) way to achieve this?
This is a rather ridiculous workaround, but it works. Keep in mind that PhantomJS' (1.x?) canvas is a bit broken, so the canvas.toDataURL function returns largely inflated encodings. The smallest that I found was, ironically, image/bmp.
function decodeImage(imagePath, type, callback) {
    var page = require('webpage').create();
    var htmlFile = imagePath + "_temp.html";
    fs.write(htmlFile, '<html><body><img src="' + imagePath + '"></body></html>');
    var possibleCallback = type;
    type = callback ? type : "image/bmp";
    callback = callback || possibleCallback;
    page.open(htmlFile, function () {
        page.evaluate(function (imagePath, type) {
            var img = document.querySelector("img");
            // the following is copied from http://stackoverflow.com/a/934925
            var canvas = document.createElement("canvas");
            canvas.width = img.width;
            canvas.height = img.height;
            // Copy the image contents to the canvas
            var ctx = canvas.getContext("2d");
            ctx.drawImage(img, 0, 0);
            // Get the data-URL formatted image
            // Firefox supports PNG and JPEG. You could check img.src to
            // guess the original format, but be aware that using "image/jpg"
            // will re-encode the image.
            window.dataURL = canvas.toDataURL(type);
        }, imagePath, type);
        fs.remove(htmlFile);
        var dataUrl = page.evaluate(function () {
            return window.dataURL;
        });
        page.close();
        callback(dataUrl, type);
    });
}
You can call it like this:
decodeImage('logo.png', 'image/png', function(imgB64Data, type){
//console.log(imgB64Data);
console.log(imgB64Data.length);
phantom.exit();
});
or this
decodeImage('logo.png', function(imgB64Data, type){
//console.log(imgB64Data);
console.log(imgB64Data.length);
phantom.exit();
});
I tried several things. I couldn't figure out the encoding of the file as returned by fs.read. I also tried to dynamically load the file into the about:blank DOM through file://-URLs, but that didn't work. I therefore opted to write a local html file to the disk and open it immediately.

Save captured png as arraybuffer

I'm trying to save an image to Dropbox, and I'm having trouble getting the conversion correct. I have an img (captured using this sample) and I want to store it to Dropbox, which accepts an ArrayBuffer (sample here).
This is the code I found that should do the two conversions: first to base64, then into an ArrayBuffer.
function getBase64Image(img) {
    // Create an empty canvas element
    var canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    // Copy the image contents to the canvas
    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);
    // Get the data-URL formatted image
    // Firefox supports PNG and JPEG. You could check img.src to
    // guess the original format, but be aware that using "image/jpg"
    // will re-encode the image.
    var dataURL = canvas.toDataURL("image/png");
    return dataURL.replace(/^data:image\/(png|jpg);base64,/, "");
}

function base64ToArrayBuffer(string_base64) {
    var binary_string = window.atob(string_base64);
    var len = binary_string.length;
    var bytes = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        var ascii = binary_string.charCodeAt(i);
        bytes[i] = ascii;
    }
    return bytes.buffer;
}
Saving is started like this
var img = $('#show-picture')[0];
var data = base64ToArrayBuffer(getBase64Image(img));
dropbox.client.writeFile(moment().format('YYYYMMDD-HH-mm-ss') + '.png', data, function (error, stat) {
    if (error) {
        return dropbox.handleError(error);
    }
    // The image has been successfully written.
});
The problem is that the saved file is corrupted, and I am a bit confused about what's wrong.
EDIT:
Here's the link to the original file
https://www.dropbox.com/s/ekyhvu2t6d8ldh3/original.PNG and here to the corrupted. https://www.dropbox.com/s/f0oevj1z33brpur/20131219-22-23-14.png
I'm using this version of the dropbox.js: //cdnjs.cloudflare.com/ajax/libs/dropbox.js/0.10.2/dropbox.min.js
As you can see, the corrupted one is slightly bigger: 23.3 KB vs 32.6 KB.
Thanks for any help
Larsi
Moving my comment to an answer, since it seems that this works in the latest Datastore JS SDK but perhaps not in dropbox.js 0.10.2.
What browser and what version of the Dropbox library? And what's wrong with the image that's saved? (I assume by "corrupted" you mean that it won't open in whatever tool you're using... any more hints? Is the file size reasonable?) I just did a very similar test (toDataURL, atob, and Uint8Array) with Chrome on OS X and dropbox.com/static/api/dropbox-datastores-1.0-latest.js, and it seems to work.

Get Base64 encode file-data from Input Form

I've got a basic HTML form from which I can grab a bit of information that I'm examining in Firebug.
My only issue is that I'm trying to base64 encode the file data before it's sent to the server, where it's required to be in that form to be saved to the database.
<input type="file" id="fileupload" />
And in Javascript+jQuery:
var file = $('#fileupload').attr("files")[0];
I have some operations based on available javascript: .getAsBinary(), .getAsText(), .getAsTextURL
However, none of these return usable text that can be inserted, as they contain unusable 'characters'. I don't want a 'postback' to occur during my file upload, and I need multiple forms targeting specific objects, so it's important that I get the file and use Javascript this way.
How should I get the file in such a way that I can use one of the Javascript base64 encoders that are widely available!?
Thanks
Update - Starting bounty here, need cross-browser support!!!
Here is where I'm at:
<input type="file" id="fileuploadform" />
<script type="text/javascript">
var uploadformid = 'fileuploadform';
var uploadform = document.getElementById(uploadformid);
/* method to fetch and encode specific file here based on different browsers */
</script>
A couple of issues with cross-browser support:
var file = $j(fileUpload.toString()).attr('files')[0];
fileBody = file.getAsDataURL(); // only works in Firefox
Also, IE doesn't support:
var file = $j(fileUpload.toString()).attr('files')[0];
So I have to replace with:
var element = 'id';
var element = document.getElementById(id);
For IE Support.
This works in Firefox, Chrome, and Safari (but doesn't properly encode the file, or at least the file doesn't come out right after it's been posted):
var file = $j(fileUpload.toString()).attr('files')[0];
var encoded = Btoa(file);
Also,
file.readAsArrayBuffer()
Seems to be only supported in HTML5?
Lots of people suggested: http://www.webtoolkit.info/javascript-base64.html
But this only returns an error on the UTF_8 method before it base64 encodes? (or an empty string)
var encoded = Base64.encode(file);
It's entirely possible in browser-side javascript.
The easy way:
The readAsDataURL() method might already encode it as base64 for you. You'll probably need to strip out the beginning stuff (up to the first ,), but that's no biggie. This would take all the fun out though.
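A minimal sketch of the easy way, assuming the #fileupload input from the question:

var file = document.getElementById('fileupload').files[0];
var reader = new FileReader();
reader.onload = function () {
    // reader.result looks like "data:<mime>;base64,<payload>";
    // keep only the payload after the first comma.
    var base64 = reader.result.substring(reader.result.indexOf(',') + 1);
    console.log(base64);
};
reader.readAsDataURL(file);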
The hard way:
If you want to try it the hard way (or it doesn't work), look at readAsArrayBuffer(). This will give you a Uint8Array and you can use the method specified. This is probably only useful if you want to mess with the data itself, such as manipulating image data or doing other voodoo magic before you upload.
There are two methods:
1. Convert to string and use the built-in btoa or similar. I haven't tested all cases, but it works for me; just get the char codes.
2. Convert directly from a Uint8Array to base64. I recently implemented tar in the browser. As part of that process, I made my own direct Uint8Array -> base64 implementation. I don't think you'll need that, but it's here if you want to take a look; it's pretty neat.
What I do now:
The code for converting to string from a Uint8Array is pretty simple (where buf is a Uint8Array):
function uint8ToString(buf) {
    var i, length, out = '';
    for (i = 0, length = buf.length; i < length; i += 1) {
        out += String.fromCharCode(buf[i]);
    }
    return out;
}
From there, just do:
var base64 = btoa(uint8ToString(yourUint8Array));
Base64 will now be a base64-encoded string, and it should upload just peachy. Try this if you want to double check before pushing:
window.open("data:application/octet-stream;base64," + base64);
This will download it as a file.
Other info:
To get the data as a Uint8Array, look at the MDN docs:
https://developer.mozilla.org/en/DOM/FileReader
My solution was to use readAsBinaryString() and btoa() on its result.
uploadFileToServer(event) {
    var file = event.srcElement.files[0];
    console.log(file);
    var reader = new FileReader();
    reader.readAsBinaryString(file);
    reader.onload = function () {
        console.log(btoa(reader.result));
    };
    reader.onerror = function () {
        console.log('there are some problems');
    };
}
I used FileReader to display an image when the file upload button is clicked, without making any Ajax requests. Following is the code; hope it helps someone.
$(document).ready(function ($) {
    $.extend(true, jQuery.fn, {
        imagePreview: function (options) {
            var defaults = {};
            if (options) {
                $.extend(true, defaults, options);
            }
            $.each(this, function () {
                var $this = $(this);
                $this.bind('change', function (evt) {
                    var files = evt.target.files; // FileList object
                    // Loop through the FileList and render image files as thumbnails.
                    for (var i = 0, f; f = files[i]; i++) {
                        // Only process image files.
                        if (!f.type.match('image.*')) {
                            continue;
                        }
                        var reader = new FileReader();
                        // Closure to capture the file information.
                        reader.onload = (function (theFile) {
                            return function (e) {
                                // Render thumbnail.
                                $('#imageURL').attr('src', e.target.result);
                            };
                        })(f);
                        // Read in the image file as a data URL.
                        reader.readAsDataURL(f);
                    }
                });
            });
        }
    });
    $('#fileinput').imagePreview();
});
Inspired by #Josef's answer:
const fileToBase64 = async (file) =>
    new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.readAsDataURL(file);
        reader.onload = () => resolve(reader.result);
        reader.onerror = (e) => reject(e);
    });

const file = event.srcElement.files[0];
const imageStr = await fileToBase64(file);
Complete example
Html file input
<style>
    .upload-button {
        background-color: grey;
    }
    .upload-button input {
        display: none;
    }
</style>

<label for="upload-photo" class="upload-button">
    Upload file
    <input type="file" id="upload-photo" />
</label>
JS Handler
document.getElementById("upload-photo").addEventListener("change", async function ({ target }) {
    if (target.files && target.files.length) {
        try {
            const uploadedImageBase64 = await convertFileToBase64(target.files[0]);
            // do something with the above data string
        } catch (err) {
            // handle error
        }
    }
});

function convertFileToBase64(file) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.readAsDataURL(file);
        reader.onload = () => resolve(reader.result);
        // Typescript users: use the following line instead
        // reader.onload = () => resolve(reader.result as string);
        reader.onerror = reject;
    });
}
After struggling with this myself, I've come to implement FileReader for browsers that support it (Chrome, Firefox and the as-yet unreleased Safari 6), and a PHP script that echoes back POSTed file data as Base64-encoded data for the other browsers.
So why don't you agree with the user of the system to select an image from a known folder? Or they can set their preferred folder for images.
Most browsers won't give you the full path, but you can get the filename, e.g. "image.png".
Using PHP inbuilt function to encode:
#$picture_base64 = base64_encode( file_get_contents($image_file_name) );
The # sign will suppress the error if the path is not found, but the result will be null for the variable $picture_base64, so I guess you're OK with null like I am; otherwise, do a check for null before proceeding.
In HTML you can select an image filename for the input, e.g. "image.png" (but not the full path):
<input type="file" name="image" id="image" >
Then in PHP you can do:
$path = "C:\\users\\john\\Desktop\\images\\";
#$picture_base64 = base64_encode( file_get_contents( $path . $_POST['image'] ) );
Then $picture_base64 will be something like
"AQAAAAMAAAAHAAAADwAAAB8AAAA/AAAAfwAAAP8AAAD/AQAA/w"
I've started to think that using the 'iframe' for Ajax style upload might be a much better choice for my situation until HTML5 comes full circle and I don't have to support legacy browsers in my app!
