My requirement is to copy a list of files from the client (browser) to an Azure blob using a SAS URL. I am using azure-sdk-for-node, but it doesn't seem to work for me. Can someone recommend a solution or another library?
var azure = require('azure-storage');

var blobSvc = azure.createBlobServiceWithSas('https://XXXXXXXXXX', 'XXXXXXXXX');

blobSvc.createBlockBlobFromBrowserFile('testmigration', 'taskblob', 'abc.txt', function (error, result, response) {
  if (error) {
    console.log('file upload failed', error);
  } else {
    console.log('file upload succeeded');
  }
});
ERROR: blobSvc.createBlockBlobFromBrowserFile is not a function
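For context: createBlockBlobFromBrowserFile only exists in the browser bundle of azure-storage, so calling it from Node.js fails with exactly this error. In Node, the closest equivalent is createBlockBlobFromLocalFile; a minimal sketch under that assumption, reusing the placeholder SAS values above:

var azure = require('azure-storage');

// Same placeholder host and SAS token as in the question.
var blobSvc = azure.createBlobServiceWithSas('https://XXXXXXXXXX', 'XXXXXXXXX');

// In Node.js, upload from a local file path rather than a browser File object.
blobSvc.createBlockBlobFromLocalFile('testmigration', 'taskblob', 'abc.txt', function (error, result) {
  if (error) {
    console.log('file upload failed', error);
  } else {
    console.log('file upload succeeded');
  }
});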
I'm trying to make an endpoint in NodeJS/Express for downloading content from my AWS S3 bucket.
It works well: I can download the file on the client side, but I can also see the stream preview in the Network tab, which is annoying...
QUESTION
I'm wondering if what I'm doing is correct and good practice.
I would also like to know if it's normal to see the output stream in the Network tab.
How should I properly send a file from S3 to my client application using NodeJS/Express?
I'm pretty sure other websites' requests don't let you preview the content, showing "Fail to load response data" instead.
This is what I do in my NodeJS application to get the file stream from AWS S3:
download(fileId) {
  // Stream the object from S3 instead of buffering it in memory.
  const fileObjectStream = app.s3
    .getObject({
      Key: fileId
    })
    .createReadStream();

  // Mark the response as a binary attachment so the browser downloads it.
  this.res.set("Content-Type", "application/octet-stream");
  this.res.set(
    "Content-Disposition",
    'attachment; filename="' + fileId + '"'
  );

  fileObjectStream.pipe(this.res);
}
And on the client side I can see the preview in the Network tab (screenshot omitted).
I think the issue is with the headers:

// This line sets the proper headers for the file and makes it downloadable in the client's browser.
res.attachment(key);

// This executes the download.
s3.getObject(bucketParams)
  .createReadStream()
  .pipe(res);
So the code should look like this (this is what I do in my project: the file is handled with res.attachment, or res.json in case of error, so the client can display the error to the end user):
router.route("/downloadFile").get((req, res) => {
  const query = req.query; // params from client
  const key = query.key; // object key from client
  const bucketName = query.bucket; // bucket name from client

  const bucketParams = {
    Bucket: bucketName,
    Key: key
  };

  // I assume you are using the AWS SDK.
  const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

  s3.getObject(bucketParams, function(err, data) {
    if (err) {
      // Cannot get the file; err is the AWS error response.
      // Return JSON to the client.
      return res.json({
        success: false,
        error: err
      });
    } else {
      res.attachment(key); // sets the correct headers (fixes your issue)

      // If all is fine (bucket and file exist), stream the file to the client.
      s3.getObject(bucketParams)
        .createReadStream()
        .pipe(res);
    }
  });
});
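As a design note, this fetches the object twice (once to check it exists, once to stream it), which costs an extra round trip. A minimal alternative sketch, not from the original answer, that streams once and handles errors on the stream itself:

router.route("/downloadFile").get((req, res) => {
  const bucketParams = { Bucket: req.query.bucket, Key: req.query.key };
  const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

  const stream = s3.getObject(bucketParams).createReadStream();

  stream.on("error", function(err) {
    // If the headers have not been sent yet, we can still answer with JSON.
    if (!res.headersSent) {
      res.json({ success: false, error: err });
    }
  });

  res.attachment(req.query.key);
  stream.pipe(res);
});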
I need to figure out where my files are downloaded when I use filesDownload(). I don't see an argument for the file destination. Here's my code:
require('isomorphic-fetch');
var Dropbox = require('dropbox').Dropbox;

var dbx = new Dropbox({ accessToken: 'accessToken', fetch });

dbx.filesDownload({ path: 'filepath' })
  .then(function(response) {
    console.log(response);
  })
  .catch(function(error) {
    console.log(error);
  });
I'm getting a successful callback when I run the code, but I don't see the file anywhere.
I need to know where my files are downloading to and how to specify the file destination in my function.
Thanks,
Gerald
I've used the function as described in the SDK's documentation (http://dropbox.github.io/dropbox-sdk-js/Dropbox.html#filesDownload__anchor) but I have no idea where my file goes.
Expected result: Files are downloaded from Dropbox to the path that I have designated.
Actual result: I get a successful callback from Dropbox, but I cannot find the downloaded files.
In Node.js, the Dropbox API v2 JavaScript SDK download-style methods return the file data in the fileBinary property of the object they pass to the callback (which is response in your code).
You can find an example of that here:
https://github.com/dropbox/dropbox-sdk-js/blob/master/examples/javascript/node/download.js#L20
So, you should be able to access the data as response.fileBinary. It doesn't automatically save it to the local filesystem for you, but you can then do so if you want.
You need to use the fs module to save the binary data to a file.
var fs = require('fs'); // required to write the binary data to disk

dbx.filesDownload({ path: YourfilePath })
  .then(function(response) {
    console.log(response.media_info);
    fs.writeFile(response.name, response.fileBinary, 'binary', function (err) {
      if (err) { throw err; }
      console.log('File: ' + response.name + ' saved.');
    });
  })
  .catch(function(error) {
    console.error(error);
  });
I want to upload an audio file to Azure Blob Storage.
First I make an HTTP request to a URL to get the file.
Then I want to save it "directly" to Azure Blob Storage, without having to store it on the server and then upload it.
Here's my code:
request
  .get(url, {
    auth: {
      bearer: token
    }
  })
  .on("response", function(response) {
    blobService.createAppendBlobFromStream(
      "recordings", // container name
      "record.wav", // blob name
      response, // should be a stream
      177777, // stream length (a random placeholder; see below)
      function(err, result) {
        if (err) {
          console.log(err);
        } else {
          console.log(result);
        }
      }
    );
  });
Actually, when I upload a file to blob storage and check it, I get an empty file with no data inside. I think I'm not sending the data stream correctly.
What I expect is to get the audio file in blob storage with the data from the GET request inside.
I should also specify the stream length, but I don't know how to get it, so I put in a random number; it should be the actual stream length. I have checked the response object but haven't found that information.
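For the stream length specifically, one way to obtain it, assuming the source server sets a Content-Length header on the response, is to read it off the response before handing the stream to the SDK. A sketch based on the question's code:

request
  .get(url, { auth: { bearer: token } })
  .on("response", function(response) {
    // Content-Length is the body size in bytes, which is the stream length
    // createAppendBlobFromStream expects (assumes the server sends the header).
    var length = parseInt(response.headers["content-length"], 10);

    blobService.createAppendBlobFromStream(
      "recordings",
      "record.wav",
      response,
      length,
      function(err, result) {
        if (err) console.log(err);
        else console.log(result);
      }
    );
  });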
I think you can't upload a file directly, so first create a folder in your blob storage, then create a file.
Read the selected file's data and write it into the created file using a file stream or similar.
Here is the file upload code:
// Assumes the express-fileupload middleware, which populates req.files.
app.post('/upload', function(req, res) {
  if (req.files.fileInput) {
    var file = req.files.fileInput,
        name = file.name,
        type = file.mimetype;
    var uploadpath = __dirname + '/uploads/' + name;

    // Move the uploaded file into the uploads directory.
    file.mv(uploadpath, function(err) {
      if (err) { res.send("Error occurred!"); }
      else { res.send('Done! Uploaded the file.'); }
    });
  } else {
    res.send("No file selected!");
    res.end();
  }
});
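To then push the saved file on to Azure Blob Storage, which is what the question asks about, a minimal sketch using azure-storage's createBlockBlobFromLocalFile; the container name is an assumption, and the connection string is taken from the environment:

var azure = require('azure-storage');

// With no arguments, createBlobService reads AZURE_STORAGE_CONNECTION_STRING
// from the environment.
var blobService = azure.createBlobService();

// 'recordings' is a placeholder container; name and uploadpath come from the
// upload handler above.
blobService.createBlockBlobFromLocalFile('recordings', name, uploadpath, function(err, result) {
  if (err) console.log(err);
  else console.log('Uploaded to blob storage: ' + result.name);
});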
I have successfully uploaded files to Firebase's storage via Google Cloud Storage through JS! What I noticed is that, unlike files uploaded directly, files uploaded through Google Cloud only have a storage location URL, which isn't a full URL and therefore cannot be read. I'm wondering if there is a way to generate a full URL on upload for the "Download URL" part of Firebase's actual storage.
Code being used:
var gcloud = require('gcloud'); // assumed; provides the storage client used below
var request = require('request'); // assumed; streams the source image

var filename = image.substring(image.lastIndexOf("/") + 1).split("?")[0];
var gcs = gcloud.storage();
var bucket = gcs.bucket('bucket-name-here.appspot.com');

request(image).pipe(bucket.file('photos/' + filename).createWriteStream(
    { metadata: { contentType: 'image/jpeg' } }))
  .on('error', function(err) {})
  .on('finish', function() {
    console.log(imagealt);
  });
When using the GCloud client, you want to use getSignedUrl() to download the file, like so:
bucket.file('photos/' + filename).getSignedUrl({
action: 'read',
expires: '03-17-2025'
}, function(err, url) {
if (err) {
console.error(err);
return;
}
// The file is now available to read from this URL.
request(url, function(err, resp) {
// resp.statusCode = 200
});
});
You can either:
a) Create a download URL through the Firebase console, or
b) Attempt to get the download URL programmatically from a Firebase client, in which case one will be created on the fly for you.
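For option b), a sketch with the Firebase client SDK; it assumes a Firebase app has already been initialized, and the path mirrors the upload code above:

// getDownloadURL() mints a download URL on demand for an existing object.
firebase.storage().ref('photos/' + filename).getDownloadURL()
  .then(function(url) {
    console.log('Download URL:', url);
  })
  .catch(function(err) {
    console.error(err);
  });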
I created a Google project and set up everything I need to use the Google Drive API with JWT credentials.
It doesn't need any OAuth2 authentication since it's server-to-server communication; users are not involved in the process.
This is working fine, but why is it not using my account's Drive?
If I create a folder or a file, I can't see it in Google Drive. So is this using different storage, and if so, do I have a way to see all my files and folders like in a normal Google Drive account?
I'm using Node.js, and so far this has worked:
var google = require('googleapis');
var drive = google.drive('v3');
var config = require('../config/config');

var jwtClient = new google.auth.JWT(
  config.google.drive.client_email,
  null,
  config.google.drive.private_key,
  ['https://www.googleapis.com/auth/drive'],
  null
);

jwtClient.authorize(function(err, tokens) {
  if (err) {
    console.log(err);
    return;
  }

  // Make an authorized request to list Drive files.
  //drive.files.create({
  //  auth: jwtClient,
  //  resource: {
  //    mimeType: 'application/vnd.google-apps.folder',
  //    name: 'my new folder' // Drive v3 uses 'name'; 'title' is the v2 field
  //  }
  //}, function(err, response) {
  //  if (err) {
  //    console.log('error at gdrive create folder: ' + err);
  //  } else {
  //    console.log('create response: ');
  //  }
  //});

  drive.files.list({ auth: jwtClient }, function(err, resp) {
    // Handle err and the response.
    console.log('err', err);
    console.log('resp', resp);
  });
});
I'm assuming you're referring to a Service Account for the server-to-server interaction.
It's not going into your account since it's going to the Service Account's configured user. You can delegate domain-wide authority to the Service Account, but only if you have Google Apps for Work.
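If you do have domain-wide delegation set up, the fifth argument to google.auth.JWT is the user to impersonate, and files you create will then land in that user's Drive. A sketch, with the email as a placeholder:

// Same setup as the question's code, but with a 'subject' user to impersonate.
// Requires domain-wide delegation granted to the Service Account.
var jwtClient = new google.auth.JWT(
  config.google.drive.client_email,
  null,
  config.google.drive.private_key,
  ['https://www.googleapis.com/auth/drive'],
  'user@yourdomain.com' // placeholder; the Drive this client will act as
);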