I'm trying to retrieve all images within a Google Drive folder using their API. For now, I get all the images as binary using this request:
const baseurl = "https://www.googleapis.com/drive/v3/files"
const query = "'XXX'+in+parents" // XXX stands for the folder ID
const token = "YYY"              // YYY stands for the API key
fetch(`${baseurl}?q=${query}&key=${token}&fields=files(id)`).then(...)
And I get something like:
But I can't figure out how to turn this into an <img ...>. I tried btoa, but it throws an exception (saying the string contains invalid characters). I tried to transform the string to base64, but the final image is not valid. Any ideas?
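(A minimal sketch of the Blob-based alternative to btoa, assuming the file is publicly shared; FILE_ID is a placeholder for a real file id:)
// fetch the raw image bytes with alt=media, then wrap them in a Blob,
// so no base64/btoa round-trip is needed
fetch(`https://www.googleapis.com/drive/v3/files/FILE_ID?alt=media&key=YYY`)
  .then(res => res.blob())
  .then(blob => {
    const img = document.createElement('img');
    img.src = URL.createObjectURL(blob);
    document.body.appendChild(img);
  });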
I changed my approach. Instead of downloading the file and trying to decode it, I just extract a downloadable link from the response (webContentLink) and use it to fill the src attribute of my <img ...>:
const baseurl = "https://www.googleapis.com/drive/v3/files"
const query = "'XXX'+in+parents"
const token = "YYY"
fetch(`${baseurl}?q=${query}&key=${token}&fields=files(webContentLink)`)
.then((data: any) => data.json())
.then((data: any) => data.files.map((f: any) => f.webContentLink))
...
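To complete the picture, the links can then be rendered as image tags; a sketch of that step, assuming the files are shared publicly and a placeholder container with id "gallery":
// render each webContentLink as an <img> inside the container
function renderImages(links: string[]) {
  const gallery = document.querySelector('#gallery'); // placeholder container
  for (const link of links) {
    const img = document.createElement('img');
    img.src = link;
    gallery?.appendChild(img);
  }
}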
I am trying to download an Excel file and then upload it to Azure Blob Storage for use in Azure Data Factory. I have a Playwright JavaScript script that worked when the file was a .csv, but when I try the same code with an Excel file, it will not open in Excel. It says:
"We found a problem with some content in 'order_copy.xlsx'. Do you want us to try to recover as much as we can?:
After clicking yes, it says,
"Excel cannot open the file 'order_copy.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file."
Any ideas on how to use the createReadStream more effectively to do this and preserve the .xlsx format?
I don't think the saveAs method will work, since this code is being executed in an Azure Function with no access to a known local path.
My first thought was that the content type was not right, so I set it, but that still did not work. I tried a UTF-8 encoder, but that also did not work.
//const data = await streamToText(download_csv.createReadStream())
const download_reader = await download_csv.createReadStream();
let data = '';
for await (const chunk of download_reader) {
data += chunk; //---I suspect I need to do something different here
}
// const data_utf8 = utf8.encode(data) //unescape( encodeURIComponent(data) );
const AZURE_STORAGE_CONNECTION_STRING = "..." //---Removed string here
// Create the BlobServiceClient object which will be used to create a container client
const blob_service_client = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
// Get a reference to a container
const container_client = blob_service_client.getContainerClient('order');
const blob_name = 'order_copy.xlsx';
// Get a block blob client
const block_blob_client = container_client.getBlockBlobClient(blob_name);
const contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
const blobOptions = { blobHTTPHeaders: { blobContentType: contentType } };
//const uploadBlobResponse = await block_blob_client.upload(data_utf8, data_utf8.length, blobOptions);
const uploadBlobResponse = await block_blob_client.upload(data, data.length, blobOptions);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
Any guidance would be appreciated. Thank you in advance for your help!
-Chad
Thanks @Gaurav for the suggestion not to read the data into a string: appending binary chunks to a JavaScript string mangles the bytes, which is what corrupted the .xlsx. The following code worked after I changed to collecting the chunks in an array and concatenating them with Buffer, similar to your suggested code.
let chunks = []
for await (const chunk of download_reader) {
chunks.push(chunk)
}
const fileBuffer = Buffer.concat(chunks)
...
const uploadBlobResponse = await block_blob_client.upload(fileBuffer, fileBuffer.length, blobOptions);
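As an aside, a sketch of a streaming alternative that avoids buffering the whole file in memory, using the uploadStream helper of @azure/storage-blob (same names as in the snippet above; untested here):
// pipe the Playwright download stream straight into the blob
const uploadStreamResponse = await block_blob_client.uploadStream(
  await download_csv.createReadStream(),
  undefined, // buffer size: library default
  undefined, // max concurrency: library default
  blobOptions
);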
Thanks everyone!
Some general context: This is an app that uses the MERN stack, but the question is more specific to AWS S3 data.
I have an S3 bucket set up and I store images and files from my app there. I usually generate signed URLs with the server and do a direct upload from the browser.
Within my app's DB I store the object URIs as strings, and an image, for example, I can then render with an <img/> tag no problem. So far so good.
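(For context, the presign step on the server looks roughly like this; a sketch assuming AWS SDK v3, with placeholder bucket, key, and region:)
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

async function makeUploadUrl() {
  const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region
  const command = new PutObjectCommand({ Bucket: 'my-bucket', Key: 'uploads/photo.jpg' });
  // URL the browser can PUT the file to directly; expires in an hour
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}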
However, when they are PDFs and I want to let the user download the PDF I stored in S3, doing an <a href={s3Uri} download> just causes the PDF to be opened in another window/tab instead of prompting the user to download it. I believe this is because the download attribute only works for same-origin URLs, so you cannot force a download of a file from an external origin (correct me if I'm wrong, please).
So my next attempt was to do an HTTP fetch of the resource directly using axios; it looks something like this:
axios.create({
baseURL: attachment.fileUrl,
headers: {common: {Authorization: ''}}
})
.get('')
.then(res => {
console.log(res)
console.log(typeof res.data)
console.log(Buffer.from(res.data).toString())
})
By doing this I am successfully reading the response headers (useful because then I can handle images and files differently), BUT when I try to read the binary data returned, I have been unable to parse it or even determine how it is encoded. It looks like this:
%PDF-1.3
3 0 obj
<</Type /Page
/Parent 1 0 R
/Resources 2 0 R
/Contents 4 0 R>>
endobj
4 0 obj
<</Filter /FlateDecode /Length 1811>>
stream
x�X�R�=k=E�������Na˅��/���� �[�]��.�,��^ �wF0�.��Ie�0�o��ݧO_IoG����p��4�BJI���g��d|��H�$�12(R*oB��:%먺�����:�R�Ф6�Xɔ�[:�[��h�(�MQ���>���;l[[��VN�hK/][�!�mJC
.... and so on
I have another function I use to allow users to download PDFs that I store directly in my database as base64 strings. These are PDFs my app generates and are fairly small (just a few KB), so I store them directly in the DB, as opposed to the ones I store in AWS S3, which are user-submitted and can be several MBs in size.
The function I use to process my base64 PDFs and provide a downloadable link to the user looks like this:
export const makePdfUrlFromBase64 = (base64) => {
const binaryImg = atob(base64);
const binaryImgLength = binaryImg.length;
const arrayBuffer = new ArrayBuffer(binaryImgLength);
const uInt8Array = new Uint8Array(arrayBuffer);
for (let i = 0; i < binaryImgLength; i++) {
uInt8Array[i] = binaryImg.charCodeAt(i);
}
const outputBlob = new Blob([uInt8Array], {type: 'application/pdf'});
return URL.createObjectURL(outputBlob)
}
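(Wiring it up looks roughly like this; storedBase64Pdf is a placeholder for the string from the DB:)
const pdfUrl = makePdfUrlFromBase64(storedBase64Pdf);
// object URLs are same-origin, so the download attribute works here:
// <a href={pdfUrl} download="report.pdf">Download PDF</a>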
HOWEVER, when I try to apply this function to the data returned from AWS, I get this error:
DOMException: Failed to execute 'atob' on 'Window': The string to be decoded contains characters outside of the Latin1 range.
So what kind of binary data encoding do i have here from AWS?
Note: I am able to render an image with this binary data by passing the src in the img tag like this:
<img src={`data:${res.headers['content-type']};base64,${res.data}`} />
which is my biggest hint that this is some form of base64?
PLEASE! If anyone has a clue how I can achieve my goal here, I'm all ears! The goal is to be able to prompt the user to download the resource I have at an S3 URI. I can link to it and they can open it in the browser and then download it manually, but I want to force the prompt.
Does anybody know what kind of data is being returned here? Any way to parse it as a stream? A buffer?
I have tried to stringify it with JSON and to log it to the console as a string; I'm open to all suggestions at this point.
You're doing all kinds of unneeded conversions. When you do the GET request, you already have the data in the desired format: the response body is the raw PDF bytes, not base64.
const response = await fetch(attachment.fileUrl, {
  headers: { Authorization: '' }
});
const blob = await response.blob();
return URL.createObjectURL(blob);
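From there, a sketch of forcing the download prompt (objectUrl is the value returned above; the file name is a placeholder):
const link = document.createElement('a');
link.href = objectUrl;
link.download = 'attachment.pdf'; // object URLs are same-origin, so this now works
link.click();
URL.revokeObjectURL(objectUrl);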
Like, is it possible to do something like !image add [image file] and then add the attachment to a folder? I think I can do that with fs, but I'm not sure how.
You can use the fs functions fs.writeFile() or fs.writeFileSync(). These accept the path of the file to write to, and the data to write. In your case, the data should be a buffer or stream.
// const fs = require('fs');
fs.writeFileSync('./some_dir/some_file_name.extension', data);
To get the data in question, you should access Message#attachments, a collection of all attachments on the message. Assuming you only want the first, you can use Collection#first() to narrow down the results.
const attachment = message.attachments.first();
if (!attachment) {
// maybe put some error handling here
}
Unfortunately, the MessageAttachment class doesn't actually hold a buffer/stream representing the attachment, only the URL leading to it. This means you'll need a third-party library such as axios or node-fetch.
// const fetch = require('node-fetch');
fetch(attachment.url)
.then(res => res.buffer())
.then(buffer => {
fs.writeFileSync(`./images/${attachment.name}`, buffer);
});
Make sure to validate that URL to confirm it's an image!
if(!/\.(png|jpe?g|svg)$/.test(attachment.url)) {
// this attachment isn't an image!
// we don't want to be downloading .exe files now, do we?
}
Finally, you should also be wary that if two files have the same name, such as image.png, writing the second one will overwrite the first. One way to overcome that issue is to add numerical suffixes to duplicates, such as image.png, image-1.png, image-2.png, etc. That could work out a little like this:
fetch(attachment.url)
  .then(res => res.buffer())
  .then(buffer => {
    // split the name into base and extension so the suffix lands
    // before the extension (image.png -> image-1.png)
    const dot = attachment.name.lastIndexOf('.');
    const base = dot === -1 ? attachment.name : attachment.name.slice(0, dot);
    const ext = dot === -1 ? '' : attachment.name.slice(dot);
    let path = `./images/${attachment.name}`;
    // increment the suffix every iteration until a file
    // by the same name cannot be found
    for (let count = 1; fs.existsSync(path); count++) {
      path = `./images/${base}-${count}${ext}`;
    }
    fs.writeFileSync(path, buffer);
  });
I have a system where new clients make account requests. They create their profile and upload their profile picture to a temporary path.
When I accept the request manually, I would like to move the file from requests/{requestID}/profileImage.jpg to userFiles/{userID}/profileImage.jpg with a cloud function
I searched, and it seems it's not possible to cut and paste a file. However, I guess it's possible to copy the image and then delete the old one.
I know how to delete a file, but I don't know how to take the temporary file and copy it to a new destination. Do you have any idea?
From the URL?
Do I have to convert the image URL to base64 in order to re-upload it?
My bad, it was possible, and it's pretty easy. This is the doc: https://cloud.google.com/storage/docs/copying-renaming-moving-objects#storage-rename-object-nodejs, and here is the code:
async function moveFile(oldEntirePath, newEntirePath) {
  const bucket = admin.storage().bucket();
  // move() resolves with [destinationFile, apiResponse]
  return bucket.file(oldEntirePath).move(newEntirePath)
    .then(resp => {
      let bucketPart = resp[1].resource.bucket
      let namePart = resp[0].id
      let tokenPart = resp[1].resource.metadata.firebaseStorageDownloadTokens
      const url = "https://firebasestorage.googleapis.com/v0/b/" + bucketPart + "/o/" + namePart + "?alt=media&token=" + tokenPart
      console.log(`Image url = ${url}`)
      return url
    })
    .catch(err => {
      console.log(`Unable to move image ${err}`)
    })
}
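A quick usage sketch with the paths from the question (requestID and userID are placeholders):
const url = await moveFile(
  `requests/${requestID}/profileImage.jpg`,
  `userFiles/${userID}/profileImage.jpg`
);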
I'm building an application that will let me take a picture from my React app, which accesses the webcam; then I need to upload the image to Google Cloud Storage using a Hapi Node.js server. The problem I'm encountering is that the React app snaps a picture and gives me this blob string (I actually don't even know if that's what it's called). The string is very large and looks like this (I've shortened it due to its really large size):
"imageBlob": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/...
I'm finding it hard to find resources that show how to do this exactly. I need to upload that blob file and save it to a Google Cloud Storage bucket.
I have this in my app so far:
Item.postImageToStorage = async (request, h) => {
const image = request.payload.imageBlob;
const projectId = 'my-project-id'
const keyFilename = 'path-to-my-file'
const gc = new Storage({
projectId: projectId,
keyFilename: keyFilename
})
const bucket = gc.bucket('my-bucket.appspot.com/securityCam');
const blob = bucket.file(image);
const blobStream = blob.createWriteStream();
blobStream.on('error', err => {
h.response({
success: false,
error: err.message || '=-->' + err
})
});
console.log('===---> ', 'no errors::::')
blobStream.on('finish', () => {
console.log('done::::::', `https://storage.googleapis.com/${bucket.name}/${blob.name}`)
// The public URL can be used to directly access the file via HTTP.
const publicUrl = format(
`https://storage.googleapis.com/${bucket.name}/${blob.name}`
);
});
console.log('===---> ', 'past finish::::')
blobStream.end(image);
console.log('===---> ', 'at end::::')
return h.response({
success: true,
})
// Utils.postRequestor(path, payload, headers, timeout)
}
I get to the success message/response (h.response), but no console logs appear except the ones outside of the blobStream.on handlers; I see all the ones that start with ===---> but nothing else.
Not sure what I'm doing wrong, thanks in advance!
At the highest level, let us assume you want to write a file my-file.dat under folder my-folder in bucket my-bucket (note that Cloud Storage has no real folders; the folder is just a prefix on the object name). Let us assume that the data you want to write is a binary chunk of data stored in a JavaScript Buffer object referenced by a variable called my_data. We would then want to code something similar to:
const bucket = gc.bucket('my-bucket');
const my_file = bucket.file('my-folder/my-file.dat');
const my_stream = my_file.createWriteStream();
my_stream.write(my_data);
my_stream.end();
In your example, something looks fishy with the value you are passing in as the file name in the line:
const blob = bucket.file(image);
I'm almost imagining you think you are passing in the content of the file rather than the name of the file.
Also realize that your JavaScript object field called "imageBlob" will be a String. It may indeed be what you want to save, but I can also imagine that what you want to save is the binary data corresponding to your webcam image. In that case you will have to decode the string into a binary Buffer: extract the string data after the data:image/jpeg;base64, prefix and create a Buffer from it by treating the remainder as Base64-encoded binary.
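A minimal sketch of that decode step, assuming the data-URL prefix shown in the question (image is the string from request.payload.imageBlob):
// strip the "data:image/jpeg;base64," prefix, then decode the payload
const base64Payload = image.replace(/^data:image\/\w+;base64,/, '');
const my_data = Buffer.from(base64Payload, 'base64');
// my_data can now be written with the createWriteStream flow shown above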