Saving a png file to disk in Node.js - javascript

I have the data of what I believe is a png file in a variable, which I'm inspecting like this:
console.log(name, file.value);
How do I properly save that to the filesystem? I've tried all imaginable combinations to save it, but they all produce something wrong, so the image cannot be opened:
import fsp from 'node:fs/promises';
await fsp.writeFile(name, file.value, "binary");
await fsp.writeFile(name, file.value, "utf8");
await fsp.writeFile(name, Buffer.from([file.value], name), "utf8");
await fsp.writeFile(name, Buffer.from([file.value], name), "binary");
// ...
Many of them create the file, but it seems to be wrong and cannot be opened. To explain how the data gets here: the original file lives at /logo.png, so on the front-end I'm doing:
fetch("/logo.png")
.then((res) => res.blob())
.then((blob) => {
const form = new FormData();
form.append("hello", "world");
// ...
form.append("icon.png", new File([blob], "icon.png"));
Then to get the data on the back-end (the Node.js server) I'm doing:
const getBody = async (req) => {
  return await new Promise((done) => {
    const buffers = [];
    req.on("data", (chunk) => {
      buffers.push(chunk);
    });
    req.on("end", () => {
      done(Buffer.concat(buffers).toString("utf8"));
    });
  });
};
const rawData = await getBody(req);
// Parse the rawData to get the fields etc
So, my two questions are:
What is this data/format called? "Binary"? And how can I convert this data format ("binary"? something else?) into something I can save as a valid png file?
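For reference, a minimal sketch (mine, not from the question) of keeping the body binary end to end, since a utf8 round-trip like the one in getBody above corrupts binary data. It assumes fileBuffer is a hypothetical Buffer holding only the PNG bytes after the multipart fields have been split out of the raw body:
import fsp from 'node:fs/promises';

const getRawBody = (req) =>
  new Promise((resolve) => {
    const chunks = [];
    req.on("data", (chunk) => chunks.push(chunk));
    // resolve with a Buffer instead of a utf8 string so the binary bytes stay intact
    req.on("end", () => resolve(Buffer.concat(chunks)));
  });

// fileBuffer: hypothetical Buffer containing only the PNG bytes
await fsp.writeFile(name, fileBuffer);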

Related

Downloading an mp3 file from S3 and manipulating it results in bad file

I wrote a script that downloads an MP3 file from my S3 bucket and then manipulates it before download (adding ID3 tags).
It's working and the tags are injected properly, but the file seems to get corrupted and is unplayable.
I can still see my tags through MP3Tag, so it has data in it, but no audio plays from the file.
Here's my code; I'm trying to figure out what went wrong:
const downloadFileWithID3 = async (filename, downloadName, injectedEmail) => {
  try {
    const data = await s3Client.send(
      new GetObjectCommand({
        Bucket: "BUCKETNAME",
        Key: filename,
      })
    );
    const fileStream = streamSaver.createWriteStream(downloadName);
    const writer = fileStream.getWriter();
    const reader = data.Body.getReader();
    const pump = () =>
      reader.read().then(({ value, done }) => {
        if (done) writer.close();
        else {
          const arrayBuffer = value;
          const writerID3 = new browserId3Writer(arrayBuffer);
          const titleAndArtist = downloadName.split("-");
          const [artist, title] = titleAndArtist;
          writerID3.setFrame("TIT2", title.slice(0, -4));
          writerID3.setFrame("TPE1", [artist]);
          writerID3.setFrame("TCOM", [injectedEmail]);
          writerID3.addTag();
          let taggedFile = new Uint8Array(writerID3.arrayBuffer);
          writer.write(taggedFile).then(pump);
        }
      });
    await pump()
      .then(() => console.log("Closed the stream, Done writing"))
      .catch((err) => console.log(err));
  } catch (err) {
    console.log(err);
  }
};
Hope you can help me solve this weird bug.
Thanks in advance!
OK, so I've figured it out: instead of working on the chunks of the stream itself, I used getSignedUrl from the S3 bucket, and it works.
Thanks everyone for trying to help out!
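For reference, a rough sketch of that approach (mine, not the original poster's code): presign a URL with @aws-sdk/s3-request-presigner, fetch the whole MP3, and tag it once rather than tagging each stream chunk. The bucket, key, and expiry values are placeholders, and title/artist are derived from downloadName as in the question:
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// presign a temporary download URL instead of piping the S3 stream manually
const command = new GetObjectCommand({ Bucket: "BUCKETNAME", Key: filename });
const signedUrl = await getSignedUrl(s3Client, command, { expiresIn: 3600 });

// fetch the complete file, then tag it in one pass
const wholeFile = await (await fetch(signedUrl)).arrayBuffer();
const writerID3 = new browserId3Writer(wholeFile);
writerID3.setFrame("TIT2", title);
writerID3.setFrame("TPE1", [artist]);
writerID3.addTag();
const taggedFile = new Uint8Array(writerID3.arrayBuffer);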

How to decompress a gzipped AWS API Gateway lambda response in React?

I followed this answer and successfully used gzip to compress the data and avoid the AWS Lambda 6 MB response limitation. But I can't figure out how to decompress it and convert it to a string after the response is received in my front-end React app. My file is a log file.
This is what I tried:
// this is what my "response.json()" looks like
const baseData = {
  "data": "H4sIAAAAA....."
};
// decode the base64 encoded data
const gzipedData = Buffer.from(baseData.data, "base64");
const ungzip = async (input) => {
  return new Promise((resolve, reject) =>
    zlib.gzip(input, (err, data) => {
      if (err) {
        reject(err);
      }
      resolve(data);
    })
  );
};
// unzip and return a Buffer
const ungzipedData = await ungzip(gzipedData);
// convert Buffer to string
const buf = Buffer.from(ungzipedData, "utf8");
console.log(buf.toString());
The result was something like this:
g#����r��.{�/)fx^�R�d�J%��y�c��P��...
I figured it out: just use zlib.unzip (instead of zlib.gzip) and wrap it with util.promisify to return the final value as a promise.
If anyone knows a better solution (with pako maybe), please share. Thank you!
import { Buffer } from "buffer";
import zlib from "react-zlib-js";
import util from "util";

const getLog = async (itemName) => {
  const response = await fetch(
    "url?" +
      new URLSearchParams({
        some_param: "some_value",
      })
  );
  if (!response.ok) {
    throw new Error("Fail ....!");
  }
  const responseJson = await response.json();
  const buffer = Buffer.from(responseJson.data, "base64");
  const responseData = (await util.promisify(zlib.unzip)(buffer)).toString();
  return responseData;
};
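As a possible alternative (an untested sketch of mine, not part of the original answer), pako can do the same decompression in the browser without the Buffer and zlib shims, assuming the response is still the base64-encoded gzip payload shown above:
import pako from "pako";

const getLogWithPako = async () => {
  const response = await fetch("url?" + new URLSearchParams({ some_param: "some_value" }));
  if (!response.ok) throw new Error("Fail ....!");
  const { data } = await response.json();
  // base64 -> bytes without Node's Buffer
  const bytes = Uint8Array.from(atob(data), (c) => c.charCodeAt(0));
  // gunzip straight to a string
  return pako.ungzip(bytes, { to: "string" });
};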

How to generate an xlsx file through Express with ExcelJS and send it to the client?

I want to separate controllers and services in my Express app. Currently I have a service that generates an XLSX file using ExcelJS. I want to reuse the service, so I don't want to pass the response object into it. Is there a way I can return the file from the service to the controller?
Right now I have the following:
const generateXLSX = (res, data) => {
  let baseFile = "path/to/template.xlsx";
  let wb = new Excel.Workbook();
  wb.xlsx
    .readFile(baseFile)
    .then(async () => {
      // add data to template
      await wb.xlsx.write(res);
      res.end();
    })
    .catch((err) => {
      console.log(err);
      res.status(500);
    });
};
In this function I'm using the response object inside the service. I want to know if there's a way to return the file without using write(res), and send it from the controller instead.
Your generateXLSX function could return a "pass-through" readable stream which you then pipe into the res object. Something like
const { PassThrough } = require("stream");

function generateXLSX(data) {
  let baseFile = "path/to/template.xlsx";
  let wb = new Excel.Workbook();
  let readable = new PassThrough();
  wb.xlsx
    .readFile(baseFile)
    .then(async function () {
      // add data to template
      await wb.xlsx.write(readable);
      readable.end();
    })
    .catch((err) => {
      readable.destroy(err);
    });
  return readable;
}
app.use("/path", function(req, res) {
generateXLSX(req.query.data).pipe(res);
});
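In the controller, one might also set the spreadsheet headers before piping so the browser treats the stream as a downloadable .xlsx file (a sketch; the filename is a placeholder):
app.use("/path", function (req, res) {
  res.setHeader(
    "Content-Type",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
  );
  res.setHeader("Content-Disposition", 'attachment; filename="report.xlsx"');
  generateXLSX(req.query.data).pipe(res);
});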

Upload .vhd as Page-Blob to azure-blob-storage from Url

I have a bunch of VHD files stored on a private server, which are accessible through a URL.
I am trying to upload these VHD files directly to my Azure storage account using the Azure JavaScript npm libraries. The VHDs have to be uploaded as page blobs. I tried using the uploadPagesFromURL() method of the PageBlobClient, but with no success. My code looks roughly like this:
async function uploadVHD(accessToken, srcUrl) {
  try {
    // Get credentials from accessToken
    const creds = new StorageSharedKeyCredential(storageAccount.name, storageAccount.key);
    // Get blobServiceClient
    const blobServiceClient = new BlobServiceClient(
      `https://${storageAccount.name}.blob.core.windows.net`,
      creds
    );
    // Create Container
    const containerClient = blobServiceClient.getContainerClient("vhd-images");
    await containerClient.createIfNotExists();
    const src = srcUrl.replace("https://", "https://username:password@");
    // Upload to blob storage
    const pageBlobClient = containerClient.getPageBlobClient("Test.vhd");
    // Get fileSize of vhd
    const fileSize = (await axiosRequest(src, { method: "HEAD" })).headers["content-length"];
    const uploadResponse = await pageBlobClient.uploadPagesFromURL(src, 0, 0, fileSize);
    return uploadResponse;
  } catch (error) {
    return error;
  }
}
It is not possible to upload the page blob from your URL directly. You need to read the data from the URL yourself, then upload it using the uploadPages method.
axios
  .get(URL, {
    responseType: "arraybuffer",
  })
  .then((response) => {
    console.log(response.data);
    console.log(response.data.length);
    // upload page blob...
  })
  .catch((error) => {
    // handle error
  });

// uploadPages method
const uploadResponse = pageBlobClient.uploadPages(data, 0, dataLength);
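One caveat worth noting (a sketch under my own assumptions, not part of the original answer): a page blob must be created with a fixed size before pages are written, and both the blob size and each uploadPages range have to be 512-byte aligned, so the flow could look roughly like this:
// assumed flow: download the VHD, create the page blob at its full size, then write pages
const response = await axios.get(srcUrl, { responseType: "arraybuffer" });
const data = Buffer.from(response.data);

// page blobs require a total size that is a multiple of 512 bytes
if (data.length % 512 !== 0) throw new Error("VHD size must be 512-byte aligned");

await pageBlobClient.create(data.length);
// note: a single uploadPages call has a size limit, so very large VHDs
// would need to be written in 512-byte-aligned chunks instead of one call
await pageBlobClient.uploadPages(data, 0, data.length);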

How to zip PDF files from storage in NodeJS

I need to create a zip file with the PDFs I receive from AWS Storage, and I am trying to do this with ADM-zip in NodeJS, but I can't read the final file.zip.
Here is the code:
var zip = new AdmZip();

// add file directly
var content = data.Body.buffer;
zip.addFile("test.pdf", content, "entry comment goes here");
// console.log(content)

// add local file
zip.addLocalFile(`./tmp/boletos/doc.pdf`);

// get everything as a buffer
var willSendthis = zip.toBuffer();
console.log(willSendthis);

// or write everything to disk
zip.writeZip("test.zip", `../../../tmp/boletos/${datastring}.zip`);
As it is, this only creates a separate .zip for each file.
I was also facing this issue. I looked through a lot of SO posts. This is how I was able to create a zip with multiple files from download URLs. Please keep in mind, I'm not sure this is best practice, or whether it will blow up memory.
Create a zip from a list of IDs of resources requested by the client:
const zip = new AdmZip();
await Promise.all(
  sheetIds.map(async (sheetId) => {
    const downloadUrl = await this.downloadSds({ sheetId, userId, memberId });
    if (downloadUrl) {
      await new Promise((resolve) =>
        https.get(downloadUrl, (res) => {
          const data = [];
          res
            .on("data", (chunk) => {
              data.push(chunk);
            })
            .on("end", () => {
              const buffer = Buffer.concat(data);
              resolve(zip.addFile(`${sheetId}.pdf`, buffer));
            });
        })
      );
    } else {
      console.log("could not download");
    }
  })
);
const zipFile = zip.toBuffer();
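The answer stops at the buffer; a minimal sketch (mine, with a placeholder route and a hypothetical helper wrapping the code above) of how an Express handler could send it to the client:
app.post("/bulk-download", async (req, res) => {
  // buildZipBuffer is a hypothetical wrapper around the zip-building code above
  const zipFile = await buildZipBuffer(req.body.sheetIds);
  res.setHeader("Content-Type", "application/zip");
  res.setHeader("Content-Disposition", 'attachment; filename="sheets.zip"');
  res.send(zipFile);
});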
I then used downloadjs in my React.js client to download.
const result = await sds.bulkDownloadSds(payload);
if (result.status > 399) return store.rejectWithValue({ errorMessage: result?.message || 'Error', redirect: result.redirect });
const filename = 'test1.zip';
const document = await result.blob();
download(document, filename, 'zip');
