Downloading an image locally from a GitHub raw link using fs.writeFileSync() - javascript

Currently trying to download an image from GitHub locally. Everything seems to work: the fetch goes through with a 200 OK response. However, I don't understand how to store the image itself:
const rawGitLink = "https://raw.githubusercontent.com/cardano-foundation/CIPs/master/CIP-0001/CIP_Flow.png"
const folder = "/Folder"
const imageName = "/Test"
const imageResponse = await axios.get(rawGitLink)
fs.writeFileSync(___dirname + folder + imageName, imageResponse, (err) => {
    // Error handling
})

Four problems had to be fixed:
The image name must include the .png extension in this case
The response must be requested as a buffer, the correct format for an image
You must write the response data, not the response object itself
__dirname only needs two underscores
const rawGitLink = "https://raw.githubusercontent.com/cardano-foundation/CIPs/master/CIP-0001/CIP_Flow.png"
const folder = "/Folder"
const imageName = "/Test.png"
const imageResponse = await axios.get(rawGitLink, { responseType: 'arraybuffer' });
fs.writeFileSync(__dirname + folder + imageName, imageResponse.data)

Axios returns a special object: https://github.com/axios/axios#response-schema
let {data} = await axios.get(...)
await fs.writeFile(filename, data) // you can use fs.promises instead of sync
As @Leau said, you should include the extension in the filename
Another suggestion is to use the path module to build the filename:
filename = path.join(__dirname, "/Folder", "Test.png")
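Putting both answers together, a minimal untested sketch using fs.promises and path.join (the folder and file names are just the ones from the question):
const axios = require('axios');
const fs = require('fs/promises');
const path = require('path');

async function downloadImage() {
    const rawGitLink = "https://raw.githubusercontent.com/cardano-foundation/CIPs/master/CIP-0001/CIP_Flow.png";
    // Ask axios for raw bytes instead of a decoded string.
    const { data } = await axios.get(rawGitLink, { responseType: 'arraybuffer' });
    const filename = path.join(__dirname, "Folder", "Test.png");
    await fs.writeFile(filename, data);
}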

Related

How to read a large csv as a stream

I am using the @aws-sdk/client-s3 to read a JSON file from S3, take the contents and dump it into DynamoDB. This all currently works fine using:
const data = await (await new S3Client(region).send(new GetObjectCommand(bucketParams)));
And then deserialising the response body etc.
However, I'm looking to migrate to the jsonlines format, effectively CSV, in the sense that it needs to be streamed in line by line or in chunks of lines and processed. I can't seem to find a way of doing this that doesn't load the entire file into memory (using response.text() etc.).
Ideally, I would like to pipe the response into a createReadStream, and go from there.
I found this example with createReadStream() from the fs module in Node.js:
import fs from 'fs';
function read() {
    let data = '';
    const readStream = fs.createReadStream('business_data.csv', 'utf-8');
    readStream.on('error', (error) => console.log(error.message));
    readStream.on('data', (chunk) => data += chunk);
    readStream.on('end', () => console.log('Reading complete'));
}
read();
You can modify it for your use. Hope this helps.
You can connect to your S3 like this:
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');
s3.getObject(params).createReadStream().pipe(file);
see here
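Since the question uses the v3 SDK (@aws-sdk/client-s3), here is a minimal, untested sketch of reading the object line by line with readline instead of buffering it all; the bucket, key and region are placeholders:
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const readline = require('readline');

async function processLines() {
    const s3 = new S3Client({ region: 'us-east-1' });
    const { Body } = await s3.send(new GetObjectCommand({ Bucket: 'myBucket', Key: 'data.jsonl' }));
    // In Node.js the response Body is a Readable stream, so it can feed readline directly.
    const rl = readline.createInterface({ input: Body, crlfDelay: Infinity });
    for await (const line of rl) {
        const record = JSON.parse(line); // one JSON object per line (jsonlines)
        // ...write `record` to DynamoDB here...
    }
}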

Node JS with Axios. How to get extension of the image from url

I am trying to download an image from a URL and save it on my server. So, for example, I make a POST request with the URL of the image, download the image, and save it on my server. The problem comes when I need to figure out the extension of the image. Right now it works statically only for jpg files, but it should work for png as well. How can I find out the extension of the file before saving it?
One way would be to get the extension from the url itself, but not all urls will have the extension , for example: https://media.istockphoto.com/photos/winter-in-the-sequoias-picture-id1292624259
This is the code that I have right now. It works; however, as I said, it's static and only works for jpg:
var config = {
    responseType: 'stream'
};
async function getImage(url) {
    let time = Math.floor(Date.now() / 1000)
    let resp = await axios.get(url, config)
    resp.data.pipe(fs.createWriteStream(time + '.jpg')) // here I need to get the image extension instead of the static '.jpg'
}
You can use response headers for that. The Content-Type header should tell you the type of the file and with Content-Disposition you can get the filename with extension.
In your code you can access these headers like this
resp.headers['content-type'];
resp.headers['content-disposition'];
I'd suggest using a module such as mime to get the extension from the content-type.
Complete example:
const axios = require('axios');
const fs = require('fs');
const mime = require('mime');
var config = {
    responseType: 'stream'
};
async function getImage(url) {
    let time = Math.floor(Date.now() / 1000)
    let resp = await axios.get(url, config)
    const contentLength = resp.headers['content-length'];
    const contentType = resp.headers['content-type'];
    const extension = mime.extension(contentType);
    console.log(`Content type: ${contentType}`);
    console.log(`Extension: ${extension}`);
    const fileName = time + "." + extension;
    console.log(`Writing ${contentLength} bytes to file ${fileName}`);
    resp.data.pipe(fs.createWriteStream(fileName));
}
const url = 'https://media.istockphoto.com/photos/winter-in-the-sequoias-picture-id1292624259';
getImage(url)
This will give an output somewhat like:
Content type: image/jpeg
Extension: jpeg
Writing 544353 bytes to file 1638867349.jpeg
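If the server also sends a Content-Disposition header, the extension can be taken from the filename in it. A rough, untested sketch assuming the common attachment; filename="..." form (not every server sends this header, so keep the content-type/mime fallback):
function extensionFromDisposition(resp) {
    const disposition = resp.headers['content-disposition'] || '';
    const match = disposition.match(/filename="?([^";]+)"?/i);
    // Returns e.g. "png", or null when the header is missing or has no filename.
    return match ? match[1].split('.').pop() : null;
}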

Javascript - Open PDF blob in browser with a nice looking url

I am using Node to grab a PDF from the server and send it to my React frontend. Then I want to display that PDF in the browser in a new tab. It's working fairly well, except that the URL of the new tab with the PDF is not ideal. The URL of the new tab looks like blob:http://localhost:3000/71659, but I would like it to look like http://localhost:3000/71659.pdf: no 'blob' and with a pdf extension, like when you click on a pdf on the web, just like the examples here: https://file-examples.com/index.php/sample-documents-download/sample-pdf-download/
My current code that handles the saving of the blob and opening it is this:
.then((res) => {
    console.log(res);
    const file = new Blob([res.data], {
        type: 'application/pdf'
    });
    // Build a URL from the file
    const fileURL = URL.createObjectURL(file);
    window.open(fileURL, '_blank');
});
And this is my Node route that sends the stream:
router.get('/openPDFFile', async (req, res) => {
    console.log('we got to the server!! with: ', req.query);
    const pdfFilename = req.query.pdf;
    const pdfFilepath = `./path/to/pdf/${pdfFilename}`;
    const src = fs.createReadStream(pdfFilepath);
    res.writeHead(200, {
        'Content-Type': 'application/pdf',
        'Content-Disposition': 'inline; filename=sample.pdf'
    });
    src.pipe(res);
});
Now I'm wondering, instead of sending the stream over the wire and converting it to a blob, whether I can simply create a route to that PDF from Node. Something like /PDF/${pdfFilename}, and then my React app would just open that URL in a new tab?
Update - Here is my latest Node route based on x00's answer:
router.get('/openPDFFile', async (req, res) => {
    console.log('we got to the server!! with: ', req.query);
    const pretty_PDF_name = req.query.pdf;
    const pdfFilename = (await SDS.getPDFFileName({ pretty_PDF_name }))
        .dataValues.sheet_file_name;
    console.log('pdfFilename: ', pdfFilename);
    const cleanPDFName =
        pretty_PDF_name
            .substring(0, pretty_PDF_name.length - 4)
            .replace(/[ ,.]/g, '') + '.pdf';
    const pdfFilepath = '\\path\\to\\file\\' + pdfFilename;
    const fullFilePath = path.join(__dirname + '/../' + pdfFilepath);
    console.log(cleanPDFName, fullFilePath);
    router.get('/pdf/' + cleanPDFName, async (req, res) => {
        res.sendFile(fullFilePath);
    });
    // router.get('/pdf/' + cleanPDFName, express.static(fullFilePath));
    // const src = fs.createReadStream(pdfFilepath);
    //
    // res.writeHead(200, {
    //     'Content-Type': 'application/pdf',
    //     'Content-Disposition': 'inline; filename=sample.pdf'
    // });
    //
    // src.pipe(res);
    // return res.json({ fileuploaded: cleanPDFName });
});
I had seen the express.static way as well and was trying that too.
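For reference, a minimal sketch of that express.static approach (the pdfs directory name here is just an assumption; it serves every file in that directory under the /pdf prefix):
const express = require('express');
const path = require('path');
const app = express();
// GET /pdf/71659.pdf will now map to ./pdfs/71659.pdf on disk.
app.use('/pdf', express.static(path.join(__dirname, 'pdfs')));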
As I understood from the comments, you don't have any special requirements (at least you didn't mention any when answering my comment). So you can just do this:
client
window.open(`/pdf/${pdfFilepath}`, '_blank');
// no need for
// fetch('/openPDFFile', ... pdf: pdfFilepath ... })
// .then(res => ... Blob ... )
// or whatever you where doing
server
router.get('/pdf/:pdfFilename', async (req, res) => {
    ...
    res.sendFile(__dirname + `/path/to/pdf/${req.params.pdfFilename}`)
})
As a result you'll get a URL in the form of http://localhost:3000/pdf/71659.pdf. You can also get the URL without the /pdf/ part, but I don't see any reason for that.
Update
About the colon: see the "Route parameters" section here: https://expressjs.com/en/guide/routing.html
Full working example:
<!DOCTYPE html>
<html lang="en">
<head></head>
<body>
    <div id="get_pdf">Get PDF</div>
    <script>
        // here can be your business logic
        // for example the name of the pdf can be entered by the user through an <input>
        const pdfFile = "1"
        // a click will open a new window with url = `http://localhost:3000/pdf/1.pdf`
        document
            .getElementById("get_pdf")
            .addEventListener("click", () => {
                window.open(`http://localhost:3000/pdf/${pdfFile}.pdf`, '_blank')
                // if you want the ".pdf" extension on the url - you must add it yourself
            })
    </script>
</body>
</html>
const express = require('express')
const app = express()

app.get('/pdf/:pdf', async (req, res) => {
    const requested_pdf = req.params.pdf // === "1.pdf"
    console.log(requested_pdf)
    // here can be your business logic for mapping
    // requested_pdf from the request to the filepath of the pdf
    // or maybe even to a generated pdf with no file underneath
    // but I'll simply map to some static path
    const map_to_pdf_path = name => __dirname + `/path/to/pdf/${name}`
    res.sendFile(map_to_pdf_path(requested_pdf))
})

const listener = app.listen(process.env.PORT || 3000, err => {
    if (err) return console.error(err)
    console.log(`Find the server at: http://localhost:${listener.address().port}`)
})
You can get a pretty filename if you hijack a bit of DOM for your purposes as indicated in this older solution, but you'll hit a number of issues in different browsers. The FileSaver.js project is probably your best bet for a near-universal support for what you're trying to accomplish. It handles blob downloads with names in a cross-browser way, and even offers some fallback options if you need IE <10 support.
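For illustration, a minimal FileSaver.js sketch (the file name is just an example); it downloads the blob under a chosen name instead of opening it in a tab:
import { saveAs } from 'file-saver';
// `res.data` is the PDF payload from the fetch above.
const file = new Blob([res.data], { type: 'application/pdf' });
saveAs(file, '71659.pdf');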
This is an easy way to do it, though by no means am I saying it is a good way to do it.
You can change the name of the URL after it has loaded using:
window.history.pushState("","","/71659.pdf");
Assuming you can already load the PDF by going to that URL, this is all you would have to do (you wouldn't want people sharing that URL to be sharing a broken URL). Otherwise, you would need to make a new route that accepts your desired URL.
If you want, you could add some error checking to see if the loaded URL is the one that you want to change, using:
window.location.href

NodeJS + ldapsj-client: problem saving thumbnailPhoto

Using the ldapsj-client module, I'm trying to save the thumbnailPhoto into a file:
const auth = async () => {
    const client = new LdapClient({ url: 'myaddomain' })
    await client.bind('someemail@domain.com.br', 'password')
    const opts = {
        filter: `(sAMAccountName=credential)`,
        scope: "sub"
    }
    const s = await client.search(myBaseDN, opts)
    console.log('thumbnailPhoto', s[0].thumbnailPhoto)
}
The console.log() outputs something like '����JFIF��C...'
I cannot figure out how to save this binary data into a file. When I try several approaches, as explained here, they do not work. It seems the data from AD is not in the same "format".
I tried to convert it into a Buffer and then to base64:
const buffer = Buffer.from(s[0].thumbnailPhoto, 'binary')
var src = "data:image/png;base64," + Buffer.from(s[0].thumbnailPhoto).toString('base64')
But the output is not valid base64.
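One thing that may be worth trying (a sketch only, not verified against ldapsj-client): write the value out as raw bytes instead of building a base64 data URL, and check whether the attribute is already a Buffer. If the client has already decoded the photo as a UTF-8 string, the bytes are usually corrupted before you ever touch them, and no re-encoding will recover them.
const fs = require('fs');

const photo = s[0].thumbnailPhoto;
// If the attribute is already a Buffer, use it as-is;
// if it is a binary/latin1 string, convert it without re-encoding.
const buffer = Buffer.isBuffer(photo) ? photo : Buffer.from(photo, 'binary');
fs.writeFileSync('thumbnail.jpg', buffer);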

Multiple file stream instead of download to disk and then zip?

I have an API method that, when called and passed an array of file keys, downloads them from S3. I'd like to stream them rather than download them to disk, then zip the files and return that to the client.
This is what my current code looks like:
reports.get('/xxx/:filenames ', async (req, res) => {
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();
    var str_array = filenames.split(',');
    for (var i = 0; i < str_array.length; i++) {
        var filename = str_array[i].trim();
        localFileName = './' + filename;
        var params = {
            Bucket: config.reportBucket,
            Key: filename
        }
        s3.getObject(params, (err, data) => {
            if (err) console.error(err)
            var file = require('fs').createWriteStream(localFileName);
            s3.getObject(params).createReadStream().pipe(file);
            console.log(file);
        })
    }
});
How would I stream the files rather than downloading them to disk and how would I zip them to return that to the client?
The main problem is zipping multiple files.
More specifically, downloading them from AWS S3 in bulk.
I've searched through the AWS SDK and didn't find bulk S3 operations.
Which brings us to one possible solution:
Load the files one by one and store them in a folder
Zip the folder (with some package like this)
Send the zipped folder
This is a raw and untested example, but it might give you the idea:
// Always import packages at the beginning of the file.
const AWS = require('aws-sdk');
const fs = require('fs');
const zipFolder = require('zip-folder');
const s3 = new AWS.S3();
reports.get('/xxx/:filenames', async (req, res) => {
    const filesArray = req.params.filenames.split(',');
    for (const fileName of filesArray) {
        const localFileName = './' + fileName.trim();
        const params = {
            Bucket: config.reportBucket,
            Key: fileName.trim()
        }
        // Probably you'll need some Promise logic here, to handle the end of the stream operation.
        const fileStream = fs.createWriteStream(localFileName);
        s3.getObject(params).createReadStream().pipe(fileStream);
    }
    // After that all required files would be in some target folder.
    // Now you need to compress the folder and send it back to the user.
    // We wrap the callback in a promise, to make the code read in a "sync" way.
    await new Promise((resolve) => zipFolder('/path/to/the/folder', '/path/to/archive.zip', (err) => resolve()));
    // And now you can send the zipped folder to the user (also using streams).
    fs.createReadStream('/path/to/archive.zip').pipe(res);
});
Info about streams link and link
Attention: you could probably have some problems with async behaviour, given the nature of streams, so please first of all check that all files are stored in the folder before zipping.
Just a mention: I've not tested this code, so if any questions appear, let's debug together.
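To address the async warning above, here is a rough, untested sketch of waiting for every download stream to finish before zipping; it keeps the same download-to-folder approach and the same placeholder paths:
const downloads = filesArray.map(fileName => new Promise((resolve, reject) => {
    const params = { Bucket: config.reportBucket, Key: fileName.trim() };
    const fileStream = fs.createWriteStream('./' + fileName.trim());
    s3.getObject(params).createReadStream()
        .on('error', reject)
        .pipe(fileStream)
        .on('finish', resolve) // the write stream has flushed everything to disk
        .on('error', reject);
}));
// Only zip once every file has been fully written.
await Promise.all(downloads);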
