I have a very basic question: how can I read an image file in JavaScript and get access to its pixel data as arrays? I am writing a local script to be run as node myscript.js, so there is no need for any web stuff.
Basically I need a JavaScript equivalent of the following Python two-liner, preferably with as few external dependencies as possible:
import skimage.io
image = skimage.io.imread('someimage.file',as_gray=False).astype('float64')
# do stuff to image
In the browser, you can read a file/image using a FileReader() object in JavaScript.
From the documentation:
The FileReader object lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user's computer, using File or Blob objects to specify the file or data to read.
And after reading an image, you can use a third-party library to manipulate it. Here are a few good libraries:
Caman JS
glfx.js
Grafi.js
Jimp
Since you're running under Node, a basic example of reading a file with the built-in fs module:
var fs = require('fs');
fs.readFile('image.jpg', function (err, data) {
  if (err) throw err;
  console.log(data); // data is a Buffer holding the raw (still encoded) file bytes
});
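If pixel access like skimage's imread is the goal, one option is Jimp from the list above. A minimal sketch, assuming the classic jimp package (npm install jimp): it decodes the image and copies the RGBA bytes into a float64 array, roughly mirroring the Python snippet.
const Jimp = require('jimp');

Jimp.read('someimage.file').then((image) => {
  const { width, height, data } = image.bitmap; // data: Buffer of RGBA bytes, 4 per pixel
  const pixels = new Float64Array(data);        // byte values copied as float64
  console.log(width, height, pixels.length);    // length = width * height * 4
}).catch(console.error);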
Related
I have an HTML 5 game. I'm using webpack/typescript for development.
There is some data I have which I was including by using require like the following
const dataJson = require('scripts/data/data.json');
I would like to do the equivalent, except with bson. I tried the naive approach of doing something like this
const dataJson = require('scripts/data/data.bson');
but this of course fails since there is no loader (it won't compile, with the error currently no loaders are configured to process this file).
I'd like to then include the file locally, load the file and then deserialize the bson. Or I'd like to embed the bson like when using require. This is some tool generated data, so it will be in a data file.
I haven't been able to figure it out. I've tried the following, but the result contains either the bits passed to File or what looks like the content type (if read with readAsDataURL).
What I have tried
const file = new File(['data'], 'assets/data.bson', { type: 'application/bson' });
const reader = new FileReader();
reader.onload = (theFile) => {
  if (theFile.target) {
    console.log(theFile.target.result);
  }
};
reader.readAsDataURL(file);
// reader.readAsBinaryString(file);
What is the correct method to load a local binary file? Presumably, once I have the data, I can just call deserialize from the bson package.
Okay, I'm adding some corrections here.
My method to read the files is wrong. I now understand that File will actually create a file, so when it is passed to FileReader it just reads back the value of the bits passed in.
I have since discovered I can get the local file in two ways: with XMLHttpRequest, or with the raw-loader webpack loader.
However, once I do this, I cannot seem to convert the contents into a JavaScript object using bson. Every variant of deserialize, serialize, parse, or stringify has some issue.
Does anyone happen to have the correct method to convert the BSON contents into a JavaScript object?
Note that the BSON is generated from python using pymongo. My code to generate the BSON is the following.
with open(args.output_bson, 'wb') as fp:
    encoded = bson.encode(data_meta.to_dict())
    fp.write(encoded)
to_dict returns a dictionary. I output both the JSON (using the json module) and the BSON.
I also tested the file with bsondump and it does indeed convert to JSON. So it does appear the file I've loaded is valid.
You can use the bson package,
then call:
BSON.deserialize(theFile.target.result);
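One detail worth noting: deserialize expects raw bytes (a Buffer or Uint8Array), not the base64 string that readAsDataURL produces. A sketch that fetches the file as an ArrayBuffer first; the asset path here is a placeholder:
import { deserialize } from 'bson';

fetch('assets/data.bson')
  .then((res) => res.arrayBuffer())
  .then((buf) => {
    // deserialize wants bytes, not a data URL or a UTF-8 string
    const doc = deserialize(new Uint8Array(buf));
    console.log(doc);
  });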
Is there a way I can read a local JSON file, created by the server side, into an object in JS, so that I can append that data into HTML tags and build a table from all that content?
I tried searching a lot, and all I found was AJAX and jQuery, which I haven't learnt yet.
From the comments: since you mentioned you already have a file system module on a Node.js app that saves the said file, I'm simply making some assumptions here and giving you an idea of how to read that file. Assuming it is a JSON file, using the file system module:
var fs = require('fs');
var file = '{yourfolder}/sample.json';
var json = JSON.parse(fs.readFileSync(file, 'utf-8'));
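And since the end goal is a table: once json is parsed, you can loop over it and build the markup. A rough sketch, assuming the file holds an array of flat objects:
// Build table rows from the parsed array (assumes an array of flat objects)
var rows = json.map(function (row) {
  var cells = Object.keys(row).map(function (key) {
    return '<td>' + row[key] + '</td>';
  }).join('');
  return '<tr>' + cells + '</tr>';
});
var tableHtml = '<table>' + rows.join('') + '</table>';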
I'm trying to find a way to extract IPTC data from an image file buffer. There are existing libraries available on npm that allow you to open and read a file from the local filesystem, but I am storing files on AWS S3 and would prefer to use buffers instead of creating unnecessary disk writes.
Not sure where to start; maybe start by looking through how this module works:
https://www.npmjs.com/package/extract-iptc
And create my own module? Or is there an easier way that I've missed?
I was able to extract IPTC data by using
var iptc = require('node-iptc');
var iptc_data = iptc(imageData);
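Since the files live on S3, the buffer can come straight from getObject with no temp file needed. A sketch with the AWS SDK v2; the bucket and key names are placeholders:
var AWS = require('aws-sdk');
var iptc = require('node-iptc');

var s3 = new AWS.S3();
// data.Body arrives as a Buffer, so it can be handed to node-iptc
// without touching the disk
s3.getObject({ Bucket: 'my-bucket', Key: 'photo.jpg' }, function (err, data) {
  if (err) throw err;
  console.log(iptc(data.Body));
});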
Also, there's an isomorphic library, exifr, that works in both Node.js and the browser. It works with the new HEIC image format as well.
exifr.parse(input, {iptc: true}).then(output => {
console.log('IPTC', output)
})
It parses multiple formats (TIFF/EXIF, ICC, IPTC, XMP, JFIF), but IPTC isn't enabled by default, so you need to enable it in the options, as seen in the example.
I have a reference to a JavaScript File object (image) which was provided by the user from an "open file" dialog. How do I load this image file into a CSS background-image without having to read all the data into a base64 string first?
The examples I have found use a FileReader to read the data and then load that into the CSS tag, but this seems like a rather inefficient use of memory. Since I have the File reference, it would be nice if I could pass that into the CSS tag somehow and let the image be streamed into memory instead. The url() wrapper for background-image supports local filenames, but for security reasons the full path of the File is not available to my script, so I can't use that.
Any suggestions?
Let's say you have your File object in a variable called file.
var url = URL.createObjectURL(file);
yourElement.style.background = `url(${url})`;
https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL
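One caveat, since memory was the concern: the object URL keeps a reference to the File alive until it is revoked, so release it once you're done with the image:
// Release the reference once the image is no longer needed
URL.revokeObjectURL(url);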
I'm writing a backup script that will pull a full copy of every file in a specific blob container in our Windows Azure blob storage. These files are not uploaded by me, I'm just writing a script that traverses the blob storage and downloads the files.
To speed up this process and skip unnecessary downloads, I'd like to request MD5s for the files before downloading them, and compare them with the already local files.
My problem: I can't find documentation anywhere detailing how to do this. I'm pretty sure the API supports it, I'm finding docs and answered questions related to other languages everywhere, but not for the Node.js Azure SDK.
My question: Is it possible, and if yes, how, to request an MD5 for the remote file blob through the Azure Node.js SDK, before downloading it? And is it faster than just downloading the file?
It is certainly possible to get a blob's MD5 hash. When you list blobs, you'll get the MD5 in the blob's properties. See the sample code below:
var azure = require('azure');
var blobService = azure.createBlobService("accountname", "accountkey");
blobService.listBlobs("containername", function (error, blobs) {
  if (!error) {
    for (var index in blobs) {
      console.log(blobs[index].name);
      // the MD5 comes back as part of the blob's properties when listing,
      // so no download is needed to read it
      console.log(blobs[index].properties['content-md5']);
    }
  }
});
Obviously the catch is that the blob must have this property set. If it is not set, an empty string is returned.
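To compare against the local copies, note that Content-MD5 is a base64-encoded MD5 digest, so the local hash has to be computed in the same encoding. A minimal sketch with Node's built-in crypto module; the file name is a placeholder and remoteContentMd5 stands for the property value read in the listing above:
var crypto = require('crypto');
var fs = require('fs');

// Content-MD5 is a base64-encoded digest, so hash the local file the same way
var localMd5 = crypto.createHash('md5')
  .update(fs.readFileSync('localcopy.jpg'))
  .digest('base64');

// Skip the download when the hashes match
var upToDate = (localMd5 === remoteContentMd5);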