I am working on an app that uses sharp for processing photos.
Currently, when we resize an image and write it to a buffer with sharp's resize and toBuffer, the EXIF data is wiped by default. We want to remove all metadata except orientation (if it exists).
I've read sharp's documentation, and withMetadata seems to be the way to achieve this; the problem is that withMetadata preserves all metadata, and I only want the orientation of the original image.
The original line of code is
await this.sharpInstance.resize(maxDimension, maxDimension).max().toBuffer()
I think that what I want is something like
await this.sharpInstance.withMetadata().resize(maxDimension, maxDimension).max().withMetadata().toBuffer()
but only for orientation metadata.
I would really appreciate some help solving this. Thanks very much!
Have you tried await this.sharpInstance.resize(maxDimension, maxDimension).max().withMetadata().toBuffer(), as described in the Sharp docs for withMetadata?
Edited:
I got it. Since withMetadata preserves everything, we first need to save the orientation metadata, and then assign it to the output buffer afterwards:
// First, save the orientation for later use
const { orientation } = await this.sharpInstance.metadata();
// Then output to a Buffer, which strips all metadata,
// create another sharp instance from that Buffer,
// and write back only the saved orientation.
// Note: toBuffer() returns a Promise, so it must be awaited
// before being passed to sharp()
const buffer = await this.sharpInstance.toBuffer();
await sharp(buffer).withMetadata({ orientation }).toBuffer();
A workaround for those not specifically interested in keeping the original file's rotation metadata: rotate the image so that the file has no metadata, but the pixels are already correctly oriented.
To do this, it is not necessary to read the metadata: if you call the rotate() method without parameters, it will look up the orientation in the metadata and perform the appropriate rotation.
I am trying to generate bmp images (graphs) in nodeJS for use in C to display on a low-res display.
Creating the graphs (with d3.js) and converting the svg-code to a bmp works fine (via svg2img and Jimp) and everything appears correctly in the bmp-file.
When I try to display it on the low-res screen, the C code reports that the image height is negative and fails. I've read that BMPs can be stored top-to-bottom or bottom-to-top (here, for example).
In which direction does Jimp work and how could it be reversed?
I converted the bmp I generated with Jimp again (using XnConvert) and tried the resulting bmp, which did display successfully on the low-res screen.
In node:
svg2img(body.select('.container').html(), function (error, buffer) {
  // returns a Buffer
  fs.writeFile('foo1.png', buffer, function () {
    Jimp.read('./foo1.png')
      .then(image => {
        // Do stuff with the image.
        image.background(0xffffffff).resize(1200, 500);
        image.dither565();
        image.write("./public/test.bmp"); // save
      })
      .catch(err => {
        // Handle an exception.
        res.send("error1");
      });
  });
});
In the C-script logs:
BMP_cfSize:1800054
BMP_cfoffBits:54
BMP_ciSize:40
BMP_ciWidth:1200
BMP_ciHeight:-500 <---------------------
//etc.
*****************************************
total_length:1800000,1800000
bytesPerLine = 3600
imageSize = -1800000
Is there a way to revert the order in Jimp? Or am I missing something else?
Or would it be easier to try to revert the order in the C-library (I'm not very good with C)?
As pointed out in the comments, making the negative value positive in the bmp-js library that Jimp uses (in this line) flipped the image, but it also solved the issue with the C library that required that order.
Using image.flip(false, true) in Jimp, I could keep the correct orientation in the final result.
Issue reported in the bmp-js GitHub.
I am using JSZip to make a program that generates the image data from a canvas element and puts the image into a zip file.
Right now, it turns the canvas image into a data URL. Then I strip the part of the resulting string that says data:image/png;base64,, leaving nothing but the base64 data. I then use atob to decode it to a binary string.
It seems like writing the remaining string into an image file should work, but the generated output is not correct. Many parts of it are right, but something is off.
Here is my code:
//screen is the name of the canvas.
var imgData = screen.toDataURL();
imgData = imgData.substr(22);
imgData = atob(imgData);
console.log(imgData);
Here is an image of the resulting png file (in notepad):
incorrect text http://upurs.us/image/71280.png
And here is what is should look like:
correct text http://upurs.us/image/71281.png
As you can see, there are slight differences, and I have no idea why. I know absolutely nothing about the PNG file type or ASCII, so I don't know where to go from here.
If you want to see all my work, here's the project:
http://s000.tinyupload.com/download.php?file_id=09682586927694868772&t=0968258692769486877226111
EDIT: My end goal is to have a program that exports every single frame of a canvas animation so that I can use them to make a video. If anyone knows a program that does that, please post it!
When you use zip.file("hello.png", imgData) where imgData is a string, you tell JSZip to save a (Unicode) string. Since the content isn't textual, you get a corrupted file. To fix that, you can do:
zip.file("hello.png", imgData, {binary: true})
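If you do stay with the string route, note that hard-coding substr(22) is fragile, because the data URL prefix length varies with the MIME type. A sketch of a safer conversion from data URL to raw bytes, which can then be handed to zip.file directly:

```javascript
// Convert a canvas data URL into a Uint8Array of raw bytes.
// Splitting at the first comma avoids hard-coding the prefix length,
// which differs between e.g. image/png and image/jpeg data URLs.
function dataUrlToBytes(dataUrl) {
  const base64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  const binary = atob(base64); // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// usage sketch: zip.file("hello.png", dataUrlToBytes(screen.toDataURL()));
```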
As dandavis suggested, using a blob will be more efficient here. You can convert a canvas to a blob with canvas.toBlob:
screen.toBlob(function (blob) {
  zip.file("hello.png", blob);
});
The only caveat is that toBlob is asynchronous: you should disable the download button during that time (otherwise, if a user is quick enough, or the browser slow enough, zip.file won't have executed yet and you will hand the user an empty zip).
document.getElementById("download_button").disabled = true;
screen.toBlob(function (blob) {
  zip.file("hello.png", blob);
  document.getElementById("download_button").disabled = false;
});
I'm trying to determine the width of an image stored in GridFS on meteor so that I can re-size a modal dialog.
I have a helper for the template
Template.projectImageModalInner.image = function () {
  var imageId = Session.get("selectedImageId");
  //console.log("projectImageModalInner imageId: " + imageId);
  var image = imageFS.findOne({_id: imageId});
  var url = image.fileHandler.default1.url;
  console.log(url);
  console.log(Imagemagick.identify(url));
  return imageFS.findOne({_id: imageId});
};
which returns the correct image to the dialog to display, but I'm really having problems getting the size. The call to Imagemagick.identify blows up with an error saying "cannot call method identify on undefined" yet the line above prints the correct url.
The console log for the image url shows
/cfs/images/i5mSRED6mYgo2vK84_default1.jpg
which is the correct URL of the image being displayed in the template.
I want to set a session variable eventually with the image width so that the dialog can be dynamically sized.
I have tried getting the size from the HTML (with no joy) and from other helpers, but so far nothing is working.
Can anyone either point out what I'm doing wrong here, OR, suggest another way?
I think you have to use the graphicsmagick / imagemagick library on the server side, after the upload, to get the image size.
First, use the transformWrite function on the server collection to get the image dimensions with gm.size(), then attach those values to fileObj.metadata.
Check those two how-to's explaining all the relevant parts:
https://github.com/CollectionFS/Meteor-CollectionFS/wiki/How-to:-Update-existing-file's-metadata
https://github.com/CollectionFS/Meteor-CollectionFS/wiki/File-Manipulation
I would like to know if there is any framework that allows me to store canvas-drawn objects, then load and manipulate them; or, if there isn't, how to do such a process (if possible).
My objective is to proceed with these steps:
Draw on canvas with mouse/touch on mobile devices
Store the drawn object in a way I can manipulate later (not as an image file)
(by "store" I mean saving it remotely to any kind of source)
Load the drawn object onto the canvas, and be able to manipulate it (bending a line, for example)
You can use base64 + localStorage:
var canvas = document.getElementsByTagName('canvas')[0];
var pngBase64 = canvas.toDataURL();
localStorage.setItem('myCanvasData', pngBase64);
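Since the question asks for a representation that can be manipulated later (not pixels), another option is to record the drawing commands themselves and serialize those. A minimal sketch, assuming strokes are captured as arrays of [x, y] points; the JSON can be sent to any backend, reloaded, and edited point by point before replaying:

```javascript
// Serialize a drawing as data (an array of strokes, each a list of points)
// instead of as an image, so it stays editable after a round trip.
function serializeStrokes(strokes) {
  return JSON.stringify(strokes);
}

// Replay serialized strokes onto a 2D canvas context.
function replayStrokes(ctx, json) {
  const strokes = JSON.parse(json);
  for (const stroke of strokes) {
    ctx.beginPath();
    stroke.forEach(([x, y], i) => (i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y)));
    ctx.stroke();
  }
  return strokes;
}
```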
I have ZigJS running in the browser and everything is working well, but I want to record the Kinect webcam images in order to play them back as a recorded video. I've looked through the documentation at http://zigfu.com/apidoc/ but cannot find anything related to the RGB information.
However, this SO answer leads me to believe this is possible:
We also support serialization of the depth and RGB image into canvas objects in the browser
Is it possible to capture the RGB image data from ZigJS and if so how?
Assuming you have plugin version 0.9.7, something along the lines of:
var plugin = document.getElementById("ZigPlugin"); // the <object> element
plugin.requestStreams(false, true, false); // tell the plugin to update the RGB image
plugin.addEventListener("NewFrame", function () { // triggered on every new Kinect frame
  var rgbImage = Base64.decode(plugin.imageMap);
  // plugin.imageMapResolution stores the resolution, right now hard-coded
  // to QQVGA (160x120, for CPU-usage reasons)
  // do stuff with the image
}); // the original snippet was missing this closing parenthesis
Also, I recommend the base64 decoder I wrote (see, for example, http://motionos.com/webgl), because it's an order of magnitude faster than the assorted JavaScript decoders I found via Google.
If you have version 0.9.8 of the plugin, there was an API change, so you should call:
plugin.requestStreams({updateImage:true});