Convert Buffer (image) to file - javascript

I'm looking for the best way to send image files to my server using Apollo Server (Express) and Node.
Getting the data there doesn't seem to be an issue: I convert the file to a base64 string on the client, but I can't work out how to convert it back into a regular file object on the server to store away.
What I have so far;
JS - let buffer = await toBase64(file);
Then, through the Apollo server:
Node - let buffer = Buffer.from(args.image, 'base64');
This gives me a Buffer. I'm unsure how to proceed with NodeJS to convert this back to a file object.
Thanks

I hope this will be helpful. Note that File and Blob come from the browser's File API, so this runs client-side:

const file = new File(
  [new Blob(["decoded_base64_String"])],
  "output_file_name"
);

You can use one of the various write or writeFile methods which accept a Buffer.
const fs = require("fs");
let buffer = Buffer.from(
"iVBORw0KGgoAAAANSUhEUgAAAAgAAAAGCAIAAABxZ0isAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAQSURBVBhXY/iPAwygxP//AAjcj3EdtT3BAAAAAElFTkSuQmCC",
"base64"
);
fs.writeFile("pic.png", buffer, (err) => {
if (err) throw err;
console.log("The file has been saved!");
});
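If you prefer promises, the same write works with fs.promises (a minimal sketch, reusing args.image and the pic.png name from above):

const { writeFile } = require("fs").promises;

async function savePng(base64String, filename) {
  // Decode the base64 payload into raw bytes and write them verbatim.
  await writeFile(filename, Buffer.from(base64String, "base64"));
}

savePng(args.image, "pic.png").catch(console.error);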

Related

How to save big object to file nodejs?

I have a big object that I need to send from server (nodejs) to client.
But every time I try to send it I get an "invalid string length" error.
And that's expected, because the object really is big. That's why I'd like to save it to a file and then send the file to the client.
I don't know the depth of the object. The object itself is an octree.
I don't have any code for saving the object to a file, because every approach I think of leads to stringifying the object, which in turn leads to "invalid string length".
Here is a screenshot of the object. Every q(n) key has the same recursive structure as the result key.
Thanks!
First, try saving it to a JSON file and sending the file directly to the client:

const fs = require('fs');
const path = require('path');

fs.writeFileSync("file_name.json", JSON.stringify(data));
res.header("Content-Type", 'application/json');
res.sendFile(path.join(__dirname, 'file_name.json'));
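One caveat: fs.writeFileSync needs a string or buffer, so the object still has to pass through JSON.stringify, and on a truly huge object that call can hit the same "invalid string length" limit. If it does, streaming (below) is the way out.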
A good solution for handling large data transfers between server and client is to stream the data.
Pipe the result to the client like so:

const fs = require('fs');

const fileStream = fs.createReadStream("path_to_your_file.json");
res.writeHead(200, {'Content-Type': 'application/json'});
fileStream.pipe(res);

Or follow this blog, which stringifies one element at a time and concatenates the results; a sketch of that idea follows below.
I'd suggest streaming it, though.
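Here is a minimal sketch of the element-at-a-time idea, assuming bigObject is a plain object whose individual values each stringify within the engine's string limit (for a deep octree you may need to apply the same trick another level down):

const fs = require('fs');

function streamObjectToFile(bigObject, filePath) {
  return new Promise((resolve, reject) => {
    const out = fs.createWriteStream(filePath);
    out.on('error', reject);
    out.on('finish', resolve);

    const keys = Object.keys(bigObject);
    out.write('{');
    keys.forEach((key, i) => {
      // Stringify one value at a time so no single string hits the limit.
      out.write(JSON.stringify(key) + ':' + JSON.stringify(bigObject[key]));
      if (i < keys.length - 1) out.write(',');
    });
    out.end('}');
  });
}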
You could try something like this, though I'm unsure whether Express would hit the same length error, since res.json stringifies the body internally.
// Assuming you're using Express and data is valid JSON.
res.json(data);

How can I read a file from a path, in JavaScript?

There are a lot of solutions based on the fetch API or XMLHttpRequest, but they return CORS or same-origin-policy errors.
The File/FileReader API works out of the box, but only for files chosen by the user via an <input type="file"> element (because that is the only way to get them as a File object).
Is there a way to do something simple and minimal like
const myfile = new File('relative/path/to/file') //just use a path
const fr = new FileReader();
fr.readAsText(myfile);
Thanks
Try the following; it uses fs to read the file and, if it exists, turns it into a string and logs it to the console. You can adapt it however you'd like.

var fs = require('fs');

fs.readFile('test.txt', 'utf8', function(err, data) {
  if (err) {
    return console.log(err);
  }
  console.log(data);
});
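If you're on a Node version with the promise API, the same read is a bit cleaner (a small sketch, assuming the same test.txt):

const fs = require('fs').promises;

async function readText(path) {
  // 'utf8' makes readFile resolve with a string instead of a Buffer.
  return fs.readFile(path, 'utf8');
}

readText('test.txt').then(console.log).catch(console.error);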

How to store byte array into local file in JavaScript?

I was working with an OpenSSL certificate.
What I want to achieve: I download the certificate from my API, which returns byte arrays. For example:
0�
g0�
' *�H��
��
�
0�
I try to write these into a file, then use another function to read it and convert it to a PEM file so I can access it with another API. (The initial format of this cert is PFX.)
I've checked and if I download the certificate via Postman, via the 'Send and Download' button I get something like this:
0‚
g0‚
' *†H†÷
 ‚
‚
0‚
It is slightly different from what I wrote directly into the file. How can I convert from the former to the latter? The next step fails because this is not a valid PFX file; the error from the other API reads as below:
DecodeToAsn:
ASN.1 length cannot be more than 4 bytes in definite long-form.
This error typically occurs when trying to decode data that is not ASN.1
A common cause is when decrypting ASN.1 data with an invalid password,
which results in garbage data. An attempt is made to decode the garbage bytes
as ASN.1, and this error occurs...
--DecodeToAsn
Failed to decode PFX ASN.1 for integrity verification.
So how can I store byte arrays into a local file correctly? Or is there a way to convert the byte arrays to what I get via Postman?
As of now I only write the byte array directly to the file; below is the code:
async function downloadCertificate() {
  try {
    let pfx = new chilkat.Pfx();
    let downloadedPfxByteArray = await Api.downloadCertificate(id);
    let pfxFileLocation = `${process.cwd()}\\media\\CERTFILE.pfx`;
    fs.writeFileSync(pfxFileLocation, downloadedPfxByteArray);
    pfx.LoadPfxFile(pfxFileLocation, 'password');
    let strPem = pfx.ToPem();
    console.log(strPem);
    return strPem;
  } catch (error) {
    console.log(`READ PFX FAIL ! ${error}`);
  }
}
Thanks for reading and appreciates if anyone could help!
#!/usr/bin/env node
const fetch = require('node-fetch');
const fs = require('fs').promises;

async function main() {
  await fs.writeFile(
    '/tmp/so-58603015.pfx',
    // arrayBuffer() yields the raw response bytes; Buffer.from() writes them verbatim.
    Buffer.from(await (await fetch('…url…')).arrayBuffer())
  );
}

main();
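The point is that the response body is never decoded as text: arrayBuffer() hands back the raw bytes and Buffer.from() writes them to disk untouched. Reading the body as a string first is exactly what mangles binary data like a PFX certificate.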

How to use GraphicsMagick with node to send a new image file to AWS S3?

I am trying to upload an image to S3, but I keep finding that photos taken on phones come out rotated, and I learned this is due to EXIF data. I found a library called graphicsmagick which purports to be able to strip EXIF data, and I also decided I wanted to resize the images to 500px wide with the height scaled to match.
The issue I can't figure out is how to grab the file after it's been changed. It seems that all of graphicsmagick examples show writing the image to a file on disk, but I want to grab the file data and upload it to AWS S3.
So far I have:
let file_extension = file.name.split('.').pop(); // grab the file extension for saving in the db
let key = `${user_id}/${UUID()}.${file_extension}`; // create a unique key to save in S3 based on the user's id
let params = {Bucket: S3_name, Key: key, Body: file.data};

// resize image
let new_image = gm(file.data)
  .resize(500)
  .noProfile()
  .write() // <-- this is as far as I got.

// upload
let result = new Promise(resolve => {
  s3.upload(params, function(err, result) {
    if (err) {
      throw new Error('Could not upload photo');
    } else {
      resolve(result);
    }
  });
});
result = await result;
From the gm docs:
Image output
write - writes the processed image data to the specified filename
stream - provides a ReadableStream with the processed image data
toBuffer - returns the image as a Buffer instead of a stream
So in your case instead of .write() you can use:
.toBuffer('png', function (err, buffer) {
  if (err) throw err;
  console.log('done!');
})

Now you have a buffer that can be used as the Body in your S3 upload logic, as sketched below.
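Wired into the question's upload code, that might look like the following (a sketch; file, key, S3_name, and s3 are assumed to come from the surrounding code in the question):

// Resize, strip EXIF via noProfile(), and upload straight from memory.
gm(file.data)
  .resize(500)
  .noProfile()
  .toBuffer('png', (err, buffer) => {
    if (err) throw new Error('Could not process photo');
    const params = {Bucket: S3_name, Key: key, Body: buffer};
    s3.upload(params, (err, result) => {
      if (err) throw new Error('Could not upload photo');
      console.log('Uploaded to', result.Location);
    });
  });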

Check if file is corrupted with node.js

Is there a way to check whether a file is corrupted with Node.js?
I tried many File System methods, like fs.readFile, fs.open and fs.access, but they all return an OK status, and I'm sure the file in my tests is corrupted.
To be clearer, my objective is to check whether a PDF is readable (not only whether it can be generated), i.e. whether it can be opened. I damaged the file here to test.
You could try to parse it with a tool like this and confirm whether parsing succeeds.
To expand on that a little, here's some example code lifted from the link:
let fs = require('fs'),
    PDFParser = require("pdf2json");

let pdfParser = new PDFParser();

pdfParser.on("pdfParser_dataError", errData => console.error(errData.parserError));
pdfParser.on("pdfParser_dataReady", pdfData => {
  // Write the parsed output; the callback is required by modern Node versions.
  fs.writeFile("./pdf2json/test/F1040EZ.json", JSON.stringify(pdfData), err => {
    if (err) console.error(err);
  });
});

pdfParser.loadPDF("./pdf2json/test/pdf/fd/form/F1040EZ.pdf");
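To turn that into a simple yes/no readability check, you could wrap the two parser events in a promise (a sketch using the same pdf2json events as above; the file path is just an example):

const PDFParser = require("pdf2json");

function isPdfReadable(filePath) {
  return new Promise(resolve => {
    const parser = new PDFParser();
    // A parse error is a strong signal the file is damaged or not a PDF at all.
    parser.on("pdfParser_dataError", () => resolve(false));
    parser.on("pdfParser_dataReady", () => resolve(true));
    parser.loadPDF(filePath);
  });
}

isPdfReadable("./damaged.pdf").then(ok => console.log(ok ? "readable" : "corrupted"));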
Alternatively, you can use another npm package to check a PDF for corruption:
npm i pdf-parse

const fs = require('fs');
const pdfParser = require('pdf-parse');

try {
  let bufferData = fs.readFileSync(`${target}/corrupted.pdf`);
  pdfParser(bufferData).then((data) => {
    // do something with data
  }).catch((error) => {
    console.log(error.message);
  });
} catch (error) {
  console.log(error);
}

For a corrupted file the error might look like:
Warning: Indexing all PDF objects
Invalid PDF structure
