When I upload plain text to firebase storage, the result that ultimately gets uploaded is not the plain text that I submitted, but instead, a string of numbers that correspond to ASCII characters.
I followed the example from the docs almost exactly; here is my code:
const storageRef = firebase.storage().ref("example.txt");
const message = "This is my message.";
storageRef.putString(message);
However, when I check the file on the server, I find that it contains the following:
84,104,105,115,32,105,115,32,109,121,32,109,101,115,115,97,103,101,46
I have also tried this code:
const storageRef = firebase.storage().ref("example.txt");
const message = "This is my message.";
storageRef.putString(message, "raw", { contentType: "text/plain" });
But it doesn't make a difference.
I am running this from a React Native Expo managed workflow, using the Firebase JS SDK (I just ran expo install firebase).
Can anyone tell me why my data is getting converted to ASCII, how I can prevent that from happening, or how I can otherwise resolve this?
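To illustrate, the uploaded content lines up exactly with the character codes of the original string, which suggests the string is being turned into a byte array somewhere and then serialized with Array.prototype.toString():

```javascript
// Reproduce the observed output: the uploaded file is the comma-joined
// character codes of the original message, i.e. a byte array serialized
// as a plain array rather than written out as raw text.
const message = "This is my message.";
const codes = Array.from(message, (ch) => ch.charCodeAt(0)).join(",");
console.log(codes);
// → 84,104,105,115,32,105,115,32,109,121,32,109,101,115,115,97,103,101,46
```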
Pretty sure this is the result of a Firebase bug, but I found a workaround by creating a Blob and using put instead of putString.
E.g.
const obj = {hello: 'world'};
const blob = new Blob([JSON.stringify(obj)], {type : 'application/json'});
const storageRef = firebase.storage().ref("example.json");
storageRef.put(blob);
Still having a little trouble figuring out how to do this for images though...
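For images, one sketch of the same workaround (untested in the Expo setup above; dataUrlToBlob is a hypothetical helper name) is to convert the data URL you get from a camera or canvas into a Blob and pass that to put():

```javascript
// Hypothetical helper: turn a data URL (e.g. from canvas.toDataURL())
// into a Blob that can be handed to storageRef.put().
function dataUrlToBlob(dataUrl) {
  const [header, base64] = dataUrl.split(',');
  const mime = header.match(/data:(.*?);base64/)[1];
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return new Blob([bytes], { type: mime });
}

// Usage sketch:
// const blob = dataUrlToBlob(canvas.toDataURL('image/png'));
// storageRef.put(blob);
```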
Related
I am trying to download an Excel file and then upload it to Azure Blob Storage for use in Azure Data Factory. I have a Playwright JavaScript script that worked when the file was a .csv, but when I try the same code with an Excel file, it will not open in Excel. It says,
"We found a problem with some content in 'order_copy.xlsx'. Do you want us to try to recover as much as we can?"
After clicking Yes, it says,
"Excel cannot open the file 'order_copy.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file."
Any ideas on how to use the createReadStream more effectively to do this and preserve the .xlsx format?
I don't think the saveAs method will work since this code is being executed in an Azure Function with no access to a local known path.
My first thought was that the content type was not right, so I set it, but it still did not work. I tried a UTF-8 encoder, but that also did not work.
//const data = await streamToText(download_csv.createReadStream())
const download_reader = await download_csv.createReadStream();
let data = '';
for await (const chunk of download_reader) {
data += chunk; //---I suspect I need to do something different here
}
// const data_utf8 = utf8.encode(data) //unescape( encodeURIComponent(data) );
const AZURE_STORAGE_CONNECTION_STRING = "..." //---Removed string here
// Create the BlobServiceClient object which will be used to create a container client
const blob_service_client = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
// Get a reference to a container
const container_client = blob_service_client.getContainerClient('order');
const blob_name = 'order_copy.xlsx';
// Get a block blob client
const block_blob_client = container_client.getBlockBlobClient(blob_name);
const contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
const blobOptions = { blobHTTPHeaders: { blobContentType: contentType } };
//const uploadBlobResponse = await block_blob_client.upload(data_utf8, data_utf8.length, blobOptions);
const uploadBlobResponse = await block_blob_client.upload(data, data.length, blobOptions);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
Any guidance would be appreciated. Thank you in advance for your help!
-Chad
Thanks @Gaurav for the suggestion on not converting the data to a string. The following code worked after I switched to collecting the chunks in an array and concatenating them with Buffer.concat, similar to your suggested code.
let chunks = []
for await (const chunk of download_reader) {
chunks.push(chunk)
}
const fileBuffer = Buffer.concat(chunks)
...
const uploadBlobResponse = await block_blob_client.upload(fileBuffer, fileBuffer.length, blobOptions);
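For what it's worth, the corruption can be reproduced without Azure at all: appending Buffer chunks to a string decodes them as UTF-8, and .xlsx files are full of bytes that aren't valid UTF-8, so they get replaced and the file is mangled. Buffer.concat is byte-exact:

```javascript
// .xlsx is a binary ZIP format (it starts with the magic bytes "PK").
// String concatenation decodes each chunk as UTF-8; invalid bytes like
// 0xff/0xfe become U+FFFD replacement characters, corrupting the file.
const chunk = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0xff, 0xfe]);
const asString = '' + chunk;              // lossy UTF-8 decode
const roundTrip = Buffer.from(asString);  // re-encoding doesn't restore the bytes
const preserved = Buffer.concat([chunk]); // byte-for-byte identical
console.log(roundTrip.equals(chunk), preserved.equals(chunk)); // false true
```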
Thanks everyone!
I'm currently creating a real-time chat application. This is a web application that uses node.js for the backend and uses socket.io to connect back and forth.
Currently, I'm working on creating user profiles with profile pictures. These profile pictures will be stored in a folder called images/profiles/. The file will be named by the user's id. For example: user with the id 1 will have their profile pictures stored in images/profiles/1.png. Very self-explanatory.
When the user submits the form to change their profile picture, the browser JavaScript will get the image, and send it to the server:
form.addEventListener('submit', handleForm)
function handleForm(event) {
event.preventDefault(); // stop page from reloading
let profilePicture; // set variable for profile picture
let profilePictureInput = document.getElementById('profilePictureInput'); // get image input
const files = profilePictureInput.files[0]; // get input's files
if (files) {
const fileReader = new FileReader(); // initialize file reader
fileReader.readAsDataURL(files);
fileReader.onload = function () {
profilePicture = this.result; // put result into variable
socket.emit("request-name", {
profilePicture: profilePicture,
id: userID,
}); // send result, along with user id, to server
}
}
}
I've commented most of the code so it's easy to follow. The server then gets this information. With this information, the server is supposed to convert the sent image to a png format (I can do whatever format, but it has to be the same format for all images). I am currently using the jimp library to do this task, but it doesn't seem to work.
const jimp = require('jimp'); // initialize Jimp
socket.on('request-name', (data) => { // when request has been received
// read the buffer from image (I'm not 100% sure what Buffer.from() does, but I saw this online)
jimp.read(Buffer.from(data.profilePicture), function (error, image) {
if (error) throw error; // throw error if there is one
image.write(`images/profiles/${data.id}.png`); // write image to designated place
});
});
The error I get:
Error: Could not find MIME for Buffer <null>
I've scoured the internet for answers but was unable to find any. I am available to use another library if this helps. I can also change the file format (.png to .jpg or .jpeg, if needed; it just needs to be consistent with all files). The only things I cannot change are the use of JavaScript/Node.js and socket.io to send the information to the server.
Thank you in advance. Any and all help is appreciated.
If you're just getting the data URI as a string, then you can construct a buffer from it and use the built-in fs module to write the file. Make sure the relative path is accurate.
const fs = require('fs');
socket.on('request-name', data => {
// the payload is a data URL, so strip the "data:image/...;base64," prefix first
const base64Data = data.profilePicture.split(',')[1];
fs.writeFile(`images/profiles/${data.id}.png`, Buffer.from(base64Data, 'base64'), err => {
if (err) console.error(err);
});
});
Some general context: this is an app that uses the MERN stack, but the question is more specific to AWS S3 data.
I have an S3 bucket set up and I store images and files from my app there. I usually generate signed URLs with the server and do a direct upload from the browser.
Within my app DB I store the object URIs as strings, and an image, for example, I can render with an <img/> tag no problem. So far so good.
However, when they are PDFs and I want to let the user download the PDF I stored in S3, doing an <a href={s3Uri} download> just causes the PDF to be opened in another window/tab instead of prompting the user to download. I believe this is due to the download attribute being dependent on same-origin: you cannot download a file from an external resource (correct me if I'm wrong, please).
So my next attempt is to do an HTTP fetch of the resource directly using axios; it looks something like this:
axios.create({
baseURL: attachment.fileUrl,
headers: {common: {Authorization: ''}}
})
.get('')
.then(res => {
console.log(res)
console.log(typeof res.data)
console.log(Buffer.from(res.data).toString())
})
By doing this I am successfully reading the response headers (useful because then I can handle images/files differently), BUT when I try to read the binary data returned, I have been unsuccessful at parsing it or even determining how it is encoded. It looks like this:
%PDF-1.3
3 0 obj
<</Type /Page
/Parent 1 0 R
/Resources 2 0 R
/Contents 4 0 R>>
endobj
4 0 obj
<</Filter /FlateDecode /Length 1811>>
stream
x�X�R�=k=E�������Na˅��/���� �[�]��.�,��^ �wF0�.��Ie�0�o��ݧO_IoG����p��4�BJI���g��d|��H�$�12(R*oB��:%먺�����:�R�Ф6�Xɔ�[:�[��h�(�MQ���>���;l[[��VN�hK/][�!�mJC
.... and so on
I have another function I use to allow users to download PDFs that I store directly in my database as strings in base64. These are PDFs my app generates and are fairly small, so I store them directly in the DB, as opposed to the ones I store in AWS S3, which are user-submitted and can be several MBs in size (the ones in my DB are just a few KB).
The function I use to process my base64 pdfs and provide a downloadable link to the users looks like this
export const makePdfUrlFromBase64 = (base64) => {
const binaryImg = atob(base64);
const binaryImgLength = binaryImg.length;
const arrayBuffer = new ArrayBuffer(binaryImgLength);
const uInt8Array = new Uint8Array(arrayBuffer);
for (let i = 0; i < binaryImgLength; i++) {
uInt8Array[i] = binaryImg.charCodeAt(i);
}
const outputBlob = new Blob([uInt8Array], {type: 'application/pdf'});
return URL.createObjectURL(outputBlob)
}
HOWEVER, when I try to apply this function to the data returned from AWS I get this error:
DOMException: Failed to execute 'atob' on 'Window': The string to be decoded contains characters outside of the Latin1 range.
So what kind of binary data encoding do I have here from AWS?
Note: I am able to render an image with this binary data by passing the src in the img tag like this:
<img src={`data:${res.headers['content-type']};base64,${res.data}`} />
which is my biggest hint that this is some form of base64?
PLEASE! If anyone has a clue how I can achieve my goal here, I'm all ears! The goal is to be able to prompt the user to download the resource which I have at an S3 URI. I can link to it and they can open it in the browser and then download manually, but I want to force the prompt.
Anybody know what kind of data is being returned here? Any way to parse it as a stream? A buffer?
I have tried to stringify it with JSON and to log it to the console as a string; I'm open to all suggestions at this point.
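One way to answer the "is this base64?" part directly: base64 text only uses a small character set, and a body that starts with %PDF-1.3 fails that check, so what came back is the raw PDF bytes decoded as text, not base64:

```javascript
// Quick sanity check: base64 only contains A-Z, a-z, 0-9, "+", "/" and
// trailing "=" padding, so "%PDF-1.3" cannot be base64 -- it is the raw
// file content. (looksBase64 is a rough illustrative check, not a validator.)
const looksBase64 = (s) => /^[A-Za-z0-9+/\r\n]+={0,2}$/.test(s.trim());
console.log(looksBase64('%PDF-1.3'));     // false: raw bytes decoded as text
console.log(looksBase64('JVBERi0xLjM=')); // true: "%PDF-1.3" base64-encoded
```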
You're doing all kinds of unneeded conversions. When you do the GET request, you already have the data in the desired format:
const response = await fetch(attachment.fileUrl, {
headers: { Authorization: '' }
});
const blob = await response.blob();
return URL.createObjectURL(blob);
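With the object URL in hand, the save prompt can be forced with a temporary anchor element, since an object URL is same-origin and the download attribute is honored. This is a browser-only sketch, and promptDownload is a made-up helper name:

```javascript
// Browser-only sketch: click a temporary <a download> pointing at an
// object URL to trigger the browser's save prompt for a Blob.
function promptDownload(blob, filename) {
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  document.body.appendChild(a);
  a.click();       // triggers the save dialog
  a.remove();
  URL.revokeObjectURL(url); // release the object URL when done
}
```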
I'm building an application that will allow me to take a picture from my React app, which accesses the webcam; then I need to upload the image to Google Cloud Storage using a Hapi Node.js server. The problem I'm encountering is that the React app snaps a picture and gives me this blob string (I actually don't even know if that's what it's called), but the string is very large and looks like this (I've shortened it due to its really large size):
"imageBlob": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/...
I'm finding it hard to find resources that show me how to do this exactly; I need to upload that blob file and save it to a Google Cloud Storage bucket.
I have this in my app so-far:
Item.postImageToStorage = async (request, h) => {
const image = request.payload.imageBlob;
const projectId = 'my-project-id'
const keyFilename = 'path-to-my-file'
const gc = new Storage({
projectId: projectId,
keyFilename: keyFilename
})
const bucket = gc.bucket('my-bucket.appspot.com/securityCam');
const blob = bucket.file(image);
const blobStream = blob.createWriteStream();
blobStream.on('error', err => {
h.response({
success: false,
error: err.message || '=-->' + err
})
});
console.log('===---> ', 'no errors::::')
blobStream.on('finish', () => {
console.log('done::::::', `https://storage.googleapis.com/${bucket.name}/${blob.name}`)
// The public URL can be used to directly access the file via HTTP.
const publicUrl = format(
`https://storage.googleapis.com/${bucket.name}/${blob.name}`
);
});
console.log('===---> ', 'past finish::::')
blobStream.end(image);
console.log('===---> ', 'at end::::')
return h.response({
success: true,
})
// Utils.postRequestor(path, payload, headers, timeout)
}
I get to the success message/response h.response, but no console logs appear except the ones outside of the blobStream.on handlers; I see all the ones that start with ===---> but nothing else.
Not sure what I'm doing wrong, thanks in advance!
At the highest level, let us assume you want to write into a file my-file.dat that is to live under my-folder in bucket my-bucket. (Note that GCS bucket names cannot contain slashes; the folder is part of the object name.) Let us assume that the data you want to write is a binary chunk of data that is stored in a JavaScript Buffer object referenced by a variable called my_data. We would then want to code something similar to:
const bucket = gc.bucket('my-bucket');
const my_file = bucket.file('my-folder/my-file.dat');
const my_stream = my_file.createWriteStream();
my_stream.write(my_data);
my_stream.end();
In your example, something looks fishy with the value you are passing in as the file name in the line:
const blob = bucket.file(image);
I'm almost imagining you are thinking you are passing in the content of the file rather than the name of the file.
Also realize that your JavaScript object field called "imageBlob" will be a String. It may be that that is indeed what you want to save, but I can also imagine that what you want to save is the binary data corresponding to your webcam image. In that case you will have to decode the string into a binary Buffer: extract the string data after the leading data:image/jpeg;base64, part, then create a Buffer from it by treating the remainder as Base64-encoded binary.
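That decoding step might look like this (a sketch; the sample string here is a truncated JPEG payload, not a complete image):

```javascript
// Strip the "data:image/jpeg;base64," prefix and decode the rest into a
// binary Buffer -- this is what would be written to the storage stream.
const imageBlob = 'data:image/jpeg;base64,/9j/4AAQSkZJRg=='; // truncated example
const base64Data = imageBlob.split(',')[1];
const imageBuffer = Buffer.from(base64Data, 'base64');
console.log(imageBuffer[0].toString(16), imageBuffer[1].toString(16)); // ff d8: the JPEG SOI marker
// my_stream.write(imageBuffer);
```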
I'm making this app where users can have a profile picture (but only one picture each). I got everything set up, but when the pictures are 2 MB+, it takes some time to load, and actually I just need the pictures to be 50 KB or so (only a small display of pictures, max 40 pixels). I made some code to put the images directly into the Realtime Database (convert to canvas and make them a 7 KB base64 string). However, this is not really clean, and it's better to use Firebase Storage.
Since version 3.3.0 you can upload Base64-formatted strings to Storage using the putString() method. However, when I upload my canvas image (which starts with "data:image/jpeg;base64,"), I get the error:
v {code: "storage/invalid-format", message: "Firebase Storage: String does not match format 'base64': Invalid character found", serverResponse: null, name: "FirebaseError"}.
Does this error occur because of string of the canvas image in the beginning? I've searched all over Stack, but I can't seem to find the answer.
Jeez, I've been busy for a very long time now, but just after I posted this, I found the answer myself. The solution was to take the base64 variable, remove the first 23 characters (so: "data:image/jpeg;base64,"), and upload the result to Firebase Storage. Now it gets accepted, and you can put the link in your Realtime Database via:
var storageRef = firebase.storage().ref().child("Whatever your path is in Firebase Storage");
var imageRef = firebase.database().ref("Your path in the Realtime Database");
storageRef.putString("Your base64 string substring variable", 'base64').then(function(snapshot) {
console.log('Uploaded a base64 string!');
return storageRef.getDownloadURL();
}).then(function(url) {
imageRef.child("image").set(url);
});
Looks like they have a method in the docs for base64 images that doesn't require you to manually replace the data:image... part:
ref.putString(message, 'data_url').then(function(snapshot) {
console.log('Uploaded a data_url string!');
});
I actually had the same problem and solved it by using putString(message, 'base64') and cutting off data:image/jpeg;base64,.
The method for uploading a Base64url-formatted string, putString(message, 'base64url'), didn't work for me. It returned the same error as you have. Try using putString(message, 'base64'). Hope it helps!
When using the putString method, you just need to pass in the data part only, which is separated from the preceding (media type) part by a comma. You also need to specify the contentType yourself, else by default Firebase Storage sets it to application/octet-stream, and your file will be downloaded instead of rendered if you paste the URL in a browser.
If no contentType metadata is specified and the file doesn't have a file extension, Cloud Storage defaults to the type application/octet-stream
const base64str = "data:image/png;base64,....."
firebase
.storage()
.ref('path/to/image.png') // specify filename with extension
.putString(base64str.split(',')[1], "base64", {contentType: 'image/png'})
// specifying contentType ^^^
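The split step can be checked in isolation; what's left after the comma decodes straight into real bytes (here, a sample data URL whose payload is just the 8-byte PNG signature):

```javascript
// "data:image/png;base64," is stripped by splitting on the first comma;
// base64 never contains commas, so split(',')[1] is always the data part.
const base64str = 'data:image/png;base64,iVBORw0KGgo=';
const dataPart = base64str.split(',')[1];
const bytes = Buffer.from(dataPart, 'base64');
console.log(bytes[0].toString(16)); // 89, the first byte of the PNG signature
```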
References:
Base64 Wikipedia
Upload files with Cloud storage on web
Use uploadString if you're using the modular SDK, otherwise putString, according to the docs:
import { getStorage, ref, uploadString } from "firebase/storage";
const storage = getStorage();
const storageRef = ref(storage, 'some-child');
// Base64 formatted string
const message2 = '5b6p5Y+344GX44G+44GX44Gf77yB44GK44KB44Gn44Go44GG77yB';
uploadString(storageRef, message2, 'base64').then((snapshot) => {
console.log('Uploaded a base64 string!');
});
// Base64url formatted string
const message3 = '5b6p5Y-344GX44G-44GX44Gf77yB44GK44KB44Gn44Go44GG77yB';
uploadString(storageRef, message3, 'base64url').then((snapshot) => {
console.log('Uploaded a base64url string!');
});
// Data URL string
const message4 = 'data:text/plain;base64,5b6p5Y+344GX44G+44GX44Gf77yB44GK44KB44Gn44Go44GG77yB';
uploadString(storageRef, message4, 'data_url').then((snapshot) => {
console.log('Uploaded a data_url string!');
});