Save images to Google Cloud Storage in JavaScript - javascript

My goal is to store an image on S3 or Google Cloud Storage and save the link to it in the database. How can I do that? Is there a free solution?
Can someone link me a code sample for doing that?
I have never used Google Cloud Storage or S3 before.
I pick an image like this:
handleImage = e => {
  e.preventDefault();
  const reader = new FileReader();
  const file = e.target.files[0];
  reader.onloadend = () => {
    this.setState({
      file: file,
      image: reader.result
    });
  };
  reader.readAsDataURL(file);
  this.setState({ imgchange: true });
}
And then send it to server:
this.props.editUser({
  img: this.state.image,
});
My server is written with node.js

I've done this exact thing before, though I'm not claiming it's the most straightforward way.
1) Sent the image from the client side to the server as a base64 string, 2) created a buffer from it, and 3) used ImageMagick to stream it to my Google Cloud bucket. Lastly, 4) stored the link to the bucket file on the object in your database as imgLink or what have you, so it can show in your front-end application.
Some important things to require at the top:
var gm = require('gm').subClass({ imageMagick: true });
var gcs = require('@google-cloud/storage')({
  keyFilename: sails.config.gcloud.keyFileName,
  projectId: sails.config.gcloud.projectId
});
var bucket = gcs.bucket(sails.config.gcloud.bucketname);
Step 1 - Sending base64 image to backend service and decoding it
imageControllerFirstHitAtEndpoint: function (req, res) {
  PictureService.uploadPictureCDNReturnLink(req.body.picture, function (err, imageLink) {
    if (err) {
      // Handle error...
    }
    // Success, save that imageLink to whatever DB object you want
  });
}
Step 2 and 3 - Create buffer with base64 data and Stream it to Google Cloud Bucket
// Step 2: create a buffer from the base64 data
uploadPictureCDNReturnLink: function (picDataInBase64, cb) {
  var imageBuffer = PictureService.bufferFromBase64(picDataInBase64);
  var file = bucket.file('cool-file-name');
  // Step 3: stream it to where it needs to go
  gm(imageBuffer)
    .stream()
    .pipe(file.createWriteStream())
    .on('finish', function () {
      file.getMetadata(function (err, metadata) {
        if (err) {
          console.log("Error getting metadata from Google Cloud", err);
          return cb(err);
        }
        cb(null, metadata.mediaLink);
      });
    })
    .on('error', function (err) {
      console.log("Got an error uploading to Cloud Storage:", err);
      cb(err);
    });
}
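The bufferFromBase64 helper used above isn't shown in the answer; a minimal sketch (my assumption, not the answerer's actual code) could look like this, handling both bare base64 strings and the full data URI that FileReader produces on the client:

```javascript
// Hypothetical helper, not part of the original answer.
// Strips an optional "data:image/...;base64," prefix, then decodes the rest.
function bufferFromBase64(picDataInBase64) {
  var base64Data = picDataInBase64.replace(/^data:image\/\w+;base64,/, "");
  return Buffer.from(base64Data, "base64");
}

var buf = bufferFromBase64("data:image/png;base64,aGVsbG8=");
console.log(buf.toString()); // "hello"
```

Buffer.from(str, 'base64') is the standard Node way to decode base64; the deprecated new Buffer() constructor isn't needed.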
Step 4 - Save that imageLink to wherever you want
I won't spell this out completely for you, it's not that hard. Something like:
Organization.findOne(req.body.orgID).exec(function (err, organization) {
  if (!organization) {
    return res.json(400, { error: "No organization with id: " + req.param('id') });
  }
  if (err) {
    return res.json(400, err);
  }
  organization.pictureLink = imageLink;
  organization.save(function (err) {
    return res.json(organization);
  });
});
Hope that helps! It should give you an idea of one way to do it.
P.S. A lot of this uses Sails-like Node.js conventions; Sails is my backend framework of choice.

Related

Upload file to google drive after http get request

I have two functions in separate files to split up the workflow.
const download = function (url) {
  const file = fs.createWriteStream("./test.png");
  const request = https.get(url, function (response) {
    response.pipe(file);
  });
};
This function in my fileHelper.js is supposed to take a URL pointing to an image and save it locally to test.png.
function uploadFile(filePath) {
  fs.readFile('credentials.json', (err, content) => {
    if (err) return console.log('Error loading client secret file:', err);
    // Authorize a client with credentials, then call the Google Drive API.
    authorize(JSON.parse(content), function (auth) {
      const drive = google.drive({ version: 'v3', auth });
      const fileMetadata = {
        'name': 'testphoto.png'
      };
      const media = {
        mimeType: 'image/png',
        body: fs.createReadStream(filePath)
      };
      drive.files.create({
        resource: fileMetadata,
        media: media,
        fields: 'id'
      }, (err, file) => {
        if (err) {
          // Handle error
          console.error(err);
        } else {
          console.log('File Id: ', file.id);
        }
      });
    });
  });
}
This function in my googleDriveHelper.js is supposed to take the filePath it is called with and upload that stream to my Google Drive. The two functions work on their own, but https.get runs asynchronously: if I call googleDriveHelper.uploadFile(filePath) right after download, the full file hasn't been written yet, so a blank file gets uploaded to my Drive instead.
I want to find a way so that when fileHelper.download(url) is called, the file is automatically uploaded to my Drive.
I also don't know if there is a way to create a readStream directly from the download function to the upload function, so I can avoid having to save the file locally just to upload it.
I believe your goal is as follows.
You want to upload a file retrieved from a URL to Google Drive.
When you download the file from the URL, you want to upload it to Google Drive without creating a local file.
You want to achieve this using googleapis with Node.js.
You have already been able to upload a file using the Drive API.
For this, how about this answer?
Modification points:
In the download function, the retrieved buffer is converted to a stream, and the stream data is returned.
In the uploadFile function, the retrieved stream data is used for uploading.
When the file ID is retrieved from the response value of the Drive API, please use file.data.id instead of file.id.
With the above modifications, the file downloaded from the URL can be uploaded to Google Drive without creating a file.
Modified script:
When your script is modified, please modify it as follows.
download()
const download = function (url) {
  return new Promise(function (resolve, reject) {
    request(
      {
        method: "GET",
        url: url,
        encoding: null,
      },
      (err, res, body) => {
        if (err || res.statusCode != 200) {
          reject(err || new Error("Request failed with status " + res.statusCode));
          return;
        }
        const stream = require("stream");
        const bs = new stream.PassThrough();
        bs.end(body);
        resolve(bs);
      }
    );
  });
};
uploadFile()
function uploadFile(data) { // <--- Modified
  fs.readFile("drive_credentials.json", (err, content) => {
    if (err) return console.log("Error loading client secret file:", err);
    authorize(JSON.parse(content), function (auth) {
      const drive = google.drive({ version: "v3", auth });
      const fileMetadata = {
        name: "testphoto.png",
      };
      const media = {
        mimeType: "image/png",
        body: data, // <--- Modified
      };
      drive.files.create(
        {
          resource: fileMetadata,
          media: media,
          fields: "id",
        },
        (err, file) => {
          if (err) {
            console.error(err);
          } else {
            console.log("File Id: ", file.data.id); // <--- Modified
          }
        }
      );
    });
  });
}
For testing
For example, to test the scripts above, how about the following?
async function run() {
  const url = "###";
  const data = await fileHelper.download(url);
  googleDriveHelper.uploadFile(data);
}
References:
Class: stream.PassThrough
google-api-nodejs-client

Get Azure uploaded blob file url

I'm uploading a data stream to Azure Storage, and I would like to get the link to the blob file.
let insertFile = async function (blobName, stream) {
  const containerName = 'texttospeechudio';
  try {
    await blobService.createContainerIfNotExists(containerName, {
      publicAccessLevel: 'blob'
    }, (err, result, response) => {
      if (!err) {
        console.log(result);
      }
    });
    let resultstream = blobService.createWriteStreamToBlockBlob(containerName, blobName, (err, result, response) => {
      console.log(response);
    });
    stream.pipe(resultstream);
    stream.on('error', function (error) {
      console.log(error);
    });
    stream.once('end', function (end) {
      console.log(end);
      //OK
    });
  }
  catch (err) {
    console.log(err);
  }
};
I added a createWriteStreamToBlockBlob callback, but I'm not getting inside it.
I would like to find a way to get the uploaded file's URL.
There is no file URL returned in the response, according to the Put Blob REST spec.
An Azure Storage resource URL can generally be composed with the following pattern:
https://{myaccount}.blob.core.windows.net/{mycontainer}/{myblob}
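A minimal sketch of composing that URL yourself from the pattern above (the account, container, and blob names here are placeholders):

```javascript
// Composes the public blob URL from its parts, following the pattern above.
// encodeURIComponent handles blob names with spaces or special characters;
// note it would also encode "/" in virtual-directory blob names.
function blobUrl(account, container, blob) {
  return "https://" + account + ".blob.core.windows.net/" +
    container + "/" + encodeURIComponent(blob);
}

console.log(blobUrl("myaccount", "texttospeechudio", "speech 1.mp3"));
// https://myaccount.blob.core.windows.net/texttospeechudio/speech%201.mp3
```

The azure-storage SDK also exposes blobService.getUrl(containerName, blobName), which builds this URL for you (optionally with a SAS token).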

Upload images for an event using Cloudinary API

Below is my API endpoint for creating an event in my application. I'm also using the Cloudinary API for uploading my images and storing the returned URL in my database.
app.post('/event', (req, res) => {
  try {
    if (req.body.images.length > 0) {
      // Creating a new Event instance
      const event = new Event({
        title: req.body.title,
        images: [],
      });
      // Looping over every image coming in the request object from the frontend
      req.body.images.forEach((img) => {
        const base64Data = img.content.split(',')[1];
        // Writing the image to the upload folder for the time being.
        // (writeFileSync takes no callback; it throws on error, which the
        // surrounding try/catch handles.)
        fs.writeFileSync(`./uploads/${img.filename}`, base64Data, 'base64');
        /* Now that the image is saved in the upload folder, Cloudinary picks
           it up from there and stores it in their cloud space. */
        cloudinary.uploader.upload(`./uploads/${img.filename}`, async (result) => {
          // Cloudinary returns the id & URL of the image, which is pushed into the event.images array.
          event.images.push({
            id: result.public_id,
            url: result.secure_url
          });
          // Once the image is pushed into the array, I remove it from my server's upload folder using unlinkSync
          fs.unlinkSync(`./uploads/${img.filename}`);
          // When all the images are uploaded, I send back the response
          if (req.body.images.length === event.images.length) {
            await event.save();
            res.send({
              event,
              msg: 'Event created successfully'
            });
          }
        });
      });
    }
  } catch (e) {
    res.status(400).send(e);
  }
});
Now my question is: how can I make the above code shorter and more efficient?
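No answer is included here, but one common refactor (a sketch, not an accepted solution) is to skip the temporary files and run the uploads in parallel with Promise.all; Cloudinary's uploader accepts base64 data URIs directly, so the fs round trip isn't needed. The collection logic can be isolated and exercised with a stubbed uploader:

```javascript
// Sketch: upload all images in parallel and collect {id, url} pairs.
// `uploadFn` is a hypothetical stand-in for a promisified
// cloudinary.uploader.upload, resolving to Cloudinary's result object.
async function uploadImages(images, uploadFn) {
  return Promise.all(images.map(async (img) => {
    const result = await uploadFn(img.content); // data URI goes straight to the uploader
    return { id: result.public_id, url: result.secure_url };
  }));
}

// Usage with a stubbed uploader, just to show the shape of the result:
const fakeUpload = async (content) => ({ public_id: 'abc', secure_url: 'https://example.com/abc.png' });
uploadImages([{ content: 'data:image/png;base64,AAAA' }], fakeUpload)
  .then((imgs) => console.log(imgs));
```

With real Cloudinary, the route handler would await uploadImages, assign the result to event.images, then save and respond once, which removes the length-comparison check entirely.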

NodeJs - How to convert chunks of data to new Buffer?

In Node.js, I have chunks of data from a file upload that saved the file in parts. I'd like to combine them with new Buffer() and then upload the result to Amazon S3.
This would work if there were only one chunk, but when there are multiple, I cannot figure out how to do new Buffer().
Currently my solution is to write the chunks of data into a real file on my own server, then send the PATH of that file to Amazon S3.
How can I skip the file-creation step and send the buffer to Amazon S3 directly?
I guess you need to use streaming-s3:
var streamingS3 = require('streaming-s3');
var uploadFile = function (fileReadStream, awsHeader, cb) {
  // set options for the streaming module
  var options = {
    concurrentParts: 2,
    waitTime: 20000,
    retries: 2,
    maxPartSize: 10 * 1024 * 1024
  };
  // call the stream function to upload the file to S3
  var uploader = new streamingS3(fileReadStream, aws.accessKey, aws.secretKey, awsHeader, options);
  // start uploading
  uploader.begin(); // important if no callback is provided
  // handle these events
  uploader.on('data', function (bytesRead) {
    console.log(bytesRead, ' bytes read.');
  });
  uploader.on('part', function (number) {
    console.log('Part ', number, ' uploaded.');
  });
  // All parts uploaded, but upload not yet acknowledged.
  uploader.on('uploaded', function (stats) {
    console.log('Upload stats: ', stats);
  });
  uploader.on('finished', function (response, stats) {
    console.log(response);
    cb(null, response);
  });
  uploader.on('error', function (err) {
    console.log('Upload error: ', err);
    cb(err);
  });
};
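On the original question of combining multiple chunks into one buffer: Buffer.concat does exactly that, and the combined buffer can then be handed to an S3 client as the upload body (the chunk contents below are placeholders):

```javascript
// Collect chunks (each already a Buffer) and join them into a single Buffer.
// Buffer.concat is the standard way; the deprecated new Buffer() isn't needed.
const chunks = [Buffer.from('part1-'), Buffer.from('part2')];
const whole = Buffer.concat(chunks);
console.log(whole.toString()); // "part1-part2"

// The combined buffer could then be passed to e.g. the aws-sdk as the Body
// of an s3.upload()/putObject() call instead of a file path.
```

This skips the intermediate file entirely, which is what the question was after.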

Node: Downloading a zip through Request, Zip being corrupted

I'm using the excellent Request library for downloading files in Node for a small command line tool I'm working on. Request works perfectly for pulling in a single file, no problems at all, but it's not working for ZIPs.
For example, I'm trying to download the Twitter Bootstrap archive, which is at the URL:
http://twitter.github.com/bootstrap/assets/bootstrap.zip
The relevant part of the code is:
var fileUrl = "http://twitter.github.com/bootstrap/assets/bootstrap.zip";
var output = "bootstrap.zip";
request(fileUrl, function (err, resp, body) {
  if (err) throw err;
  fs.writeFile(output, body, function (err) {
    console.log("file written!");
  });
});
I've tried setting the encoding to "binary" too, but no luck. The actual zip is ~74KB, but when downloaded through the above code it's ~134KB, and on double-clicking it in Finder to extract it, I get the error:
Unable to extract "bootstrap" into "nodetest" (Error 21 - Is a directory)
I get the feeling this is an encoding issue, but I'm not sure where to go from here.
Yes, the problem is with the encoding. When you wait for the whole transfer to finish, body is coerced to a string by default. You can tell request to give you a Buffer instead by setting the encoding option to null:
var fileUrl = "http://twitter.github.com/bootstrap/assets/bootstrap.zip";
var output = "bootstrap.zip";
request({ url: fileUrl, encoding: null }, function (err, resp, body) {
  if (err) throw err;
  fs.writeFile(output, body, function (err) {
    console.log("file written!");
  });
});
Another more elegant solution is to use pipe() to point the response to a file writable stream:
request('http://twitter.github.com/bootstrap/assets/bootstrap.zip')
  .pipe(fs.createWriteStream('bootstrap.zip'))
  .on('close', function () {
    console.log('File written!');
  });
A one liner always wins :)
pipe() returns the destination stream (the WriteStream in this case), so you can listen to its close event to get notified when the file was written.
I was searching for a function that requests a zip and extracts it without creating any file on my server. Here is my TypeScript function; it uses the JSZip module and Request:
let bufs: any = [];
let buf: Uint8Array;
request
  .get(url)
  .on('end', () => {
    buf = Buffer.concat(bufs);
    JSZip.loadAsync(buf).then((zip) => {
      // zip.files contains a list of files;
      // check the JSZip documentation.
      // Example of getting a text file: zip.file("bla.txt").async("text").then...
    }).catch((error) => {
      console.log(error);
    });
  })
  .on('error', (error) => {
    console.log(error);
  })
  .on('data', (d) => {
    bufs.push(d);
  });
