I need to create a zip file containing PDFs that I receive from AWS S3 storage. I'm trying to do this with ADM-Zip in Node.js, but I can't read the final file.zip.
Here is the code:
var zip = new AdmZip();
// add a file directly from a buffer
var content = data.Body.buffer;
zip.addFile("test.pdf", content, "entry comment goes here");
// add a local file
zip.addLocalFile("./tmp/boletos/doc.pdf");
// get everything as a buffer
var willSendthis = zip.toBuffer();
console.log(willSendthis);
// or write everything to disk (writeZip takes a single target path)
zip.writeZip(`../../../tmp/boletos/${datastring}.zip`);
As it is, this only creates a separate .zip for each file.
I was also facing this issue. I looked through a lot of SO posts, and this is how I was able to create a zip with multiple files from download URLs. Keep in mind that I'm unsure whether this is best practice, or whether it will blow up memory.
Create a zip from a list of ids of resources requested by the client:
const zip = new AdmZip();
await Promise.all(sheetIds.map(async (sheetId) => {
  const downloadUrl = await this.downloadSds({ sheetId, userId, memberId });
  if (downloadUrl) {
    await new Promise((resolve) => https.get(downloadUrl, (res) => {
      const data = [];
      res.on('data', (chunk) => {
        data.push(chunk);
      }).on('end', () => {
        const buffer = Buffer.concat(data);
        resolve(zip.addFile(`${sheetId}.pdf`, buffer));
      });
    }));
  } else {
    console.log('could not download');
  }
}));
const zipFile = zip.toBuffer();
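The chunk-collecting pattern inside https.get can also be factored into a small reusable helper using async iteration over the stream. A minimal sketch using only Node's built-in stream module (the Readable here just stands in for an HTTP response):

```javascript
import { Readable } from 'node:stream';

// Collect any readable stream into a single Buffer.
async function streamToBuffer(readable) {
  const chunks = [];
  for await (const chunk of readable) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}

// Usage: any Readable works, including an https.get response.
const fake = Readable.from([Buffer.from('%PDF-'), Buffer.from('1.4')]);
const buffer = await streamToBuffer(fake);
console.log(buffer.toString()); // "%PDF-1.4"
```

With a helper like this, the body of the inner Promise collapses to something like zip.addFile(`${sheetId}.pdf`, await streamToBuffer(res)).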
I then used downloadjs in my React.js client to download the file:
const result = await sds.bulkDownloadSds(payload);
if (result.status > 399) return store.rejectWithValue({ errorMessage: result?.message || 'Error', redirect: result.redirect });
const filename = 'test1.zip';
const document = await result.blob();
download(document, filename, 'zip');
Apologies if this is straight-forward, I'm very much not a software developer!
I have a web app using Node (Node 16.17.0, npm 8.13.2). I have HTML files that I upload to an Azure Blob Storage container.
I would like to serve the files directly from the storage container to a user.
In the past, I've typically used something like this:
app.get(
  '/analysis/example', async (req, res) => {
    const a = path.join(__dirname + '/app/analysis/example_file.html');
    res.sendFile(a);
  }
);
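A small aside on the path.join(__dirname + '/...') idiom above: it works, but only because the separator was typed by hand inside the concatenated string. path.join is normally given separate segments so it can insert separators itself. A quick illustration (using path.posix so the output is the same on any OS):

```javascript
import path from 'node:path';

// String concatenation can silently drop a separator...
const concatenated = path.posix.join('/srv' + 'app');
console.log(concatenated); // "/srvapp"

// ...while passing segments lets join insert it for you.
const joined = path.posix.join('/srv', 'app', 'analysis', 'example_file.html');
console.log(joined); // "/srv/app/analysis/example_file.html"
```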
However, I'm struggling a bit finding the appropriate documentation or examples to serve a file from blob storage.
I have managed to print the list of files, and the file itself, to the console, so I know for sure that I've gained access properly (the Azure documentation is quite good). I'm just not sure how to get the file into an appropriate state to be served back.
I've tried this:
// GAIN ACCESS TO THE APPROPRIATE STORAGE ACCOUNT
const blobServiceClient = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING
);
const containerClient = blobServiceClient.getContainerClient(
  process.env.AZURE_STORAGE_CONTAINER
);
const blobClient = containerClient.getBlobClient(
  process.env.AZURE_STORAGE_BLOB
);
// A FUNCTION TO HELP TURN THE STREAM INTO TEXT (TURN THIS INTO SOMETHING ELSE?)
async function streamToText(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  return data;
}
// SERVE THE HTML FILE
app.get(
  '/analytics/example', async (req, res) => {
    // THIS CHUNK SUCCESSFULLY LISTS AVAILABLE FILES IN THE CONSOLE
    // for await (const blob of containerClient.listBlobsFlat()) {
    //   console.log("\t", blob.name);
    // };
    const blobDownload = await blobClient.download(0);
    const blob = await streamToText(blobDownload.readableStreamBody);
    res.sendFile(blob);
  }
);
I've also tried the version below (an online resource mentioned that DOMParser doesn't work with Node):
// SERVE THE HTML FILE
app.get(
  '/analytics/example', async (req, res) => {
    const blobDownload = await blobClient.download(0);
    const blob = await streamToText(blobDownload.readableStreamBody);
    var DOMParser = require('xmldom').DOMParser;
    let parser = new DOMParser();
    let doc = parser.parseFromString(blob, 'text/html');
    res.sendFile(doc.body);
  }
);
Any help much appreciated.
I've just worked it out: it was simply the res.sendFile part, which should have been res.send.
Below is the correct working code to read the file from Azure Storage and serve it back to the app.
// GAIN ACCESS TO THE APPROPRIATE STORAGE ACCOUNT
const blobServiceClient = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING
);
const containerClient = blobServiceClient.getContainerClient(
  process.env.AZURE_STORAGE_CONTAINER
);
const blobClient = containerClient.getBlobClient(
  process.env.AZURE_STORAGE_BLOB
);
// A FUNCTION TO HELP TURN THE STREAM INTO TEXT (TURN THIS INTO SOMETHING ELSE?)
async function streamToText(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  return data;
}
// SERVE THE HTML FILE
app.get(
  '/analytics/example', async (req, res) => {
    const blobDownload = await blobClient.download(0);
    const blob = await streamToText(blobDownload.readableStreamBody);
    res.send(blob);
  }
);
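As a side note, buffering the whole blob into a string isn't strictly necessary: a readable stream can be piped straight into the response, which avoids holding large files in memory. A minimal sketch with Node's built-in http module standing in for Express, and a plain Readable standing in for blobDownload.readableStreamBody (which behaves like any Node readable stream):

```javascript
import http from 'node:http';
import { once } from 'node:events';
import { Readable } from 'node:stream';

const server = http.createServer((req, res) => {
  // In the Azure case this would be blobDownload.readableStreamBody.
  const body = Readable.from(['<html><body>hello</body></html>']);
  res.setHeader('Content-Type', 'text/html');
  body.pipe(res); // stream straight through, no intermediate string
});

server.listen(0);
await once(server, 'listening');
const { port } = server.address();

// Fetch the page back to show the stream arrives intact.
const html = await new Promise((resolve, reject) => {
  http.get({ port, path: '/analytics/example' }, (res) => {
    let data = '';
    res.on('data', (c) => { data += c; });
    res.on('end', () => resolve(data));
  }).on('error', reject);
});
server.close();
console.log(html); // "<html><body>hello</body></html>"
```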
I have the data from what I believe is a PNG file in a variable, like this:
console.log(name, file.value);
How do I properly save it to the filesystem? I've tried every combination I can imagine, but they all produce something wrong, so the image cannot be opened:
import fsp from 'node:fs/promises';
await fsp.write(name, file.value, "binary");
await fsp.write(name, file.value, "utf8");
await fsp.write(name, Buffer.from([file.value], name), "utf8");
await fsp.write(name, Buffer.from([file.value], name), "binary");
// ...
Many of them create the file, but it seems to be wrong and cannot be opened. To produce this data, the original file lives at /logo.png, so on the front-end I'm doing:
fetch("/logo.png")
  .then((res) => res.blob())
  .then((blob) => {
    const form = new FormData();
    form.append("hello", "world");
    // ...
    form.append("icon.png", new File([blob], "icon.png"));
Then to read the data on the back-end I'm doing:
const getBody = async (req) => {
  return await new Promise((done) => {
    const buffers = [];
    req.on("data", (chunk) => {
      buffers.push(chunk);
    });
    req.on("end", () => {
      done(Buffer.concat(buffers).toString("utf8"));
    });
  });
};
const rawData = await getBody(req);
// Parse the rawData to get the fields etc
So, my two questions are:
What is this data/format called? Binary?
How can I convert this data so it can be saved as a valid image file?
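A likely culprit in setups like this is converting the raw body to a UTF-8 string, as getBody does with toString("utf8"): that corrupts binary data, because bytes that aren't valid UTF-8 get replaced. A minimal sketch (the multipart parsing is assumed to be handled separately) showing that keeping the body as a Buffer preserves the bytes, while a UTF-8 round trip does not, and that fsp.writeFile accepts a Buffer directly:

```javascript
import fsp from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';

// PNG files start with these eight magic bytes.
const pngMagic = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

// Collecting chunks and keeping them as a Buffer preserves the bytes...
const kept = Buffer.concat([pngMagic]);
console.log(kept.equals(pngMagic)); // true

// ...while round-tripping through a UTF-8 string corrupts them,
// because 0x89 is not valid UTF-8 and gets replaced.
const corrupted = Buffer.from(pngMagic.toString('utf8'), 'utf8');
console.log(corrupted.equals(pngMagic)); // false

// Saving binary data: pass the Buffer itself to writeFile.
const out = path.join(os.tmpdir(), 'icon.png');
await fsp.writeFile(out, kept);
```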
What I need to do:
Loop over multiple tracks fetched from the database
Get the audio file path/name of each track, get a read stream from AWS, and save the file to a temporary directory on my server
Make some changes to the audio with FFMPEG
Re-upload the changed audio file to AWS
Delete the file from the temporary directory on my server
So first I get all the tracks, loop over them, and call the function that does the audio processing on each track:
exports.updateAllTracksWithNewTag = async (req, res) => {
  try {
    const allTracks = await Tracks.findAll()
    await Promise.all(
      allTracks.map(async (track) => {
        return await this.updateOne(track)
      })
    )
    return res.status(200).json({ message: 'All tracks are done' })
  } catch (error) {
    return ResponseErrorFormated.responseError(error, res)
  }
}
This is the function that does the audio processing on each track:
exports.updateOne = async (track) => {
  const filesToDeleteFromTemp = []
  try {
    const fileAudioURL = track.MP3_audio_url
    const originalFileReadstream = await AWSS3Utils.getReadStream(
      URLConfig.URL_ASSETS.PRIVATE.USERS_TRACKS_AUDIOS.path.slice(1) + fileAudioURL
    )
    const tempPathToSaveStreamGetted = 'assets/temp/audios/'
    // It seems it just loops over all the tracks up to this line, then continues the process
    console.log('SAVING FILE for track.id=', track.id) /////////////////////
    await FilesUtils.saveReadStreamAsFile(originalFileReadstream, tempPathToSaveStreamGetted, fileAudioURL)
    console.log('FILE SAVED for track.id=', track.id) /////////////////////
    filesToDeleteFromTemp.push(tempPathToSaveStreamGetted + fileAudioURL)
    const fileInfosForTag = {
      path: tempPathToSaveStreamGetted + fileAudioURL,
      filename: fileAudioURL,
    }
    console.log('CREATING TAGGED for track.id=', track.id) /////////////////////
    const resultTaggedMP3 = await FilesFormater.createTaggedAudioFromMP3(fileInfosForTag)
    console.log('TAGGED CREATED for track.id=', track.id) /////////////////////
    const readStreamTaggedMP3 = resultTaggedMP3.readStream
    const finalFilePathTaggedMP3 = resultTaggedMP3.finalFilePath
    const finalFileNameTaggedMP3 = resultTaggedMP3.finalFileNameTaggedMP3
    const newFileKeyMP3 =
      URLConfig.URL_ASSETS.PUBLIC.USERS_TRACKS_TAGGED.path.slice(1) + finalFileNameTaggedMP3
    filesToDeleteFromTemp.push(finalFilePathTaggedMP3)
    await AWSS3Utils.uploadFileFromReadstream(readStreamTaggedMP3, newFileKeyMP3)
    await FilesUtils.unlinkFiles(filesToDeleteFromTemp)
  } catch (error) {
    await FilesUtils.unlinkFiles(filesToDeleteFromTemp)
    throw error
  }
}
Expected result:
SAVING FILE for track.id=120
FILE SAVED for track.id=120
CREATING TAGGED for track.id=120
TAGGED CREATED for track.id=120
SAVING FILE for track.id=121
FILE SAVED for track.id=121
CREATING TAGGED for track.id=121
TAGGED CREATED for track.id=121
The actual result:
SAVING FILE for track.id=120
SAVING FILE for track.id=121
SAVING FILE for track.id=122
SAVING FILE for track.id=123
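The interleaving above is expected: Promise.all starts every updateOne call concurrently, so every track reaches its first console.log before any track finishes the slower steps. If one-track-at-a-time processing is wanted, a plain for...of loop with await runs the tracks sequentially. A minimal sketch with a dummy async task standing in for updateOne:

```javascript
// A stand-in for updateOne: finishes after a short, varying delay.
const log = [];
async function updateOne(track) {
  log.push(`SAVING FILE for track.id=${track.id}`);
  await new Promise((r) => setTimeout(r, Math.random() * 20));
  log.push(`FILE SAVED for track.id=${track.id}`);
}

const allTracks = [{ id: 120 }, { id: 121 }];

// Promise.all: all tracks start before any finishes (the observed interleaving).
// await Promise.all(allTracks.map((t) => updateOne(t)));

// Sequential: each track fully completes before the next one starts.
for (const track of allTracks) {
  await updateOne(track);
}
console.log(log);
// [ 'SAVING FILE for track.id=120', 'FILE SAVED for track.id=120',
//   'SAVING FILE for track.id=121', 'FILE SAVED for track.id=121' ]
```

The trade-off is throughput: Promise.all is faster overall, but a sequential loop keeps memory and temp-directory usage bounded to one track at a time.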
I am looking for a way to download a binary PDF file, which I get through an API. I have this file in the local app file system, but I would like to download it to my phone.
const pdf = Buffer.from(res.data, "binary").toString("base64");
const fileUri =
  FileSystem.documentDirectory + `${encodeURI("generated")}.pdf`;
FileSystem.writeAsStringAsync(fileUri, pdf, {
  encoding: FileSystem.EncodingType.Base64
}).then((respond) => {
  downloadPdf(fileUri);
  // Sharing.shareAsync(fileUri); // Here I can share the file with other apps
});
const downloadPdf = async (uri) => {
  // Linking.openURL(uri) // approach 1
  // MediaLibrary.saveToLibraryAsync(uri).then(res => { // approach 2
  //   console.log(res)
  // }).catch(err => {
  //   console.log(err)
  // })
  const permissions = await MediaLibrary.getPermissionsAsync();
  try {
    const asset = await MediaLibrary.createAssetAsync(uri); // approach 3
    const album = await MediaLibrary.getAlbumAsync("Downloads");
    if (album == null) {
      await MediaLibrary.createAlbumAsync("Downloads", asset, false);
    } else {
      await MediaLibrary.addAssetsToAlbumAsync([asset], album, false);
    }
  } catch (e) {
    console.log(e)
  }
};
I have tried different ways to do this, but I've stopped here with expo-media-library, which gives me an "Unable to copy file into external storage" error.
Is this a good direction to take, or do you have better solutions?
I have a bunch of VHD files stored on a private server, which are accessible through a URL.
I am trying to upload these VHD files directly to my Azure storage account using the Azure JavaScript npm libraries. The VHDs have to be uploaded as page blobs. I tried the uploadPagesFromURL() method of the PageBlobClient, but with no success. My code looks roughly like this:
async function uploadVHD(accessToken, srcUrl) {
  try {
    // Get credentials from accessToken
    const creds = new StorageSharedKeyCredential(storageAccount.name, storageAccount.key);
    // Get blobServiceClient
    const blobServiceClient = new BlobServiceClient(`https://${storageAccount.name}.blob.core.windows.net`, creds);
    // Create container
    const containerClient = blobServiceClient.getContainerClient("vhd-images");
    await containerClient.createIfNotExists();
    const src = srcUrl.replace('https://', 'https://username:password@');
    // Upload to blob storage
    const pageBlobClient = containerClient.getPageBlobClient("Test.vhd");
    // Get the file size of the vhd
    const fileSize = (await axiosRequest(src, { method: "HEAD" })).headers["content-length"];
    const uploadResponse = await pageBlobClient.uploadPagesFromURL(src, 0, 0, fileSize);
    return uploadResponse;
  } catch (error) {
    return error;
  }
}
It is not possible to upload the page blob directly from your URL. You need to read the data from the URL first, then upload it using the uploadPages method:
axios.get(URL, {
  responseType: 'arraybuffer'
})
  .then((response) => {
    console.log(response.data)
    console.log(response.data.length)
    // upload page blob...
  }).catch((error) => {
    // handle error
  });
// uploadPages method
const uploadResponse = await pageBlobClient.uploadPages(data, 0, dataLength);
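One detail worth knowing when wiring this up: page blobs operate on 512-byte pages, so the blob size and every uploadPages range must be a multiple of 512 (VHD files are normally already aligned), and the page blob has to be created at its full size before any pages are written, e.g. with pageBlobClient.create(size). Since the upload itself needs live credentials, here is just a small pure helper for the alignment part:

```javascript
const PAGE_SIZE = 512;

// Round a byte length up to the next 512-byte page boundary.
function alignToPage(length) {
  return Math.ceil(length / PAGE_SIZE) * PAGE_SIZE;
}

console.log(alignToPage(512)); // 512  (already aligned)
console.log(alignToPage(513)); // 1024 (rounded up one full page)
console.log(alignToPage(0));   // 0
```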