I want to get the thumbnail that Google Drive creates for stored PDF files. With this function I am listing all the files:
listFiles: function (folderId, onData, onError) {
    drive.files.list({
        auth: jwtClient,
        q: "'" + folderId + "' in parents"
    }, function (err, res) {
        console.log(res.files)
    });
}
The output of a console log for each file looks like this:
{
    kind: 'drive#file',
    id: '0BapkdhpPsqtgf01YbEJRRlhuaVUf',
    name: 'file-name.pdf',
    mimeType: 'application/pdf'
}
When I check Google's documentation, it says that the metadata of a file should contain all of these properties: https://developers.google.com/drive/v3/reference/files
and there I found contentHints.thumbnail.image. How do I access it?
OK, the thing is: to get the metadata of a single file I needed to use the files.get function, not files.list. Another thing is that the fields parameter needs to be set in the call. For example:
drive.files.get({
    auth: jwtClient,
    fileId: fileId,
    fields: "thumbnailLink"
}, function (err, res) {
    if (err) return console.error(err);
    console.log(res.thumbnailLink);
});
You can use the fields parameter to change which metadata fields are returned:
drive.files.list({
    auth: jwtClient,
    q: "'" + folderId + "' in parents",
    fields: "files(id, name, mimeType, thumbnailLink)"
}, function (err, res) {
    console.log(res.files)
});
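As far as I can tell, contentHints.thumbnail.image is only used for custom thumbnails that you upload yourself; for Drive-generated thumbnails you read thumbnailLink instead. If you also need the image bytes, here is a minimal sketch using Node's https module (the helper name is hypothetical, and the link is short-lived, so fetch it soon after the metadata call):
var https = require('https');
var fs = require('fs');

// Hypothetical helper: fetch the image that thumbnailLink points to.
function downloadThumbnail(thumbnailLink, destPath, done) {
    https.get(thumbnailLink, function (imgRes) {
        if (imgRes.statusCode !== 200) {
            return done(new Error('Unexpected status ' + imgRes.statusCode));
        }
        // Stream the image bytes straight to disk
        imgRes.pipe(fs.createWriteStream(destPath))
            .on('finish', function () { done(null, destPath); });
    }).on('error', done);
}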
You can get the essential metadata with the following API call.
A workaround for Python:
self.__results = drive_service.files().list(
    pageSize=10,
    supportsAllDrives=True,
    fields="files(kind, id, name, mimeType, webViewLink, hasThumbnail, thumbnailLink, createdTime, modifiedTime, owners, permissions, iconLink, imageMediaMetadata)"
).execute()
or
self.__results = drive_service.files().list(
    pageSize=10,
    supportsAllDrives=True,
    fields="*"
).execute()
to get complete metadata.
Reference: https://developers.google.com/drive/api/v3/reference/files#resource
I am currently on a project for work in which I worked with a team from our logistics company to integrate our freight services. The way the workflow currently goes (this is a user event script, by the way) is that the information needed for the shipment is gathered in a custom tab during item fulfillment. We have a Suitelet that acts as a quote picker, so when the 'Get Quote' button we created is clicked, we get a price sheet with rates for various carriers.
The data is not sent to their system until the submit takes place, and when this happens we transform the relevant data into a bill of lading, which is the returned object.
Currently, we have the returned pdf set to save to a specific folder in the file cabinet, but the file is not directly associated with the transaction record.
I am trying to use the 'mediaitem' field to directly attach the pdf to the record via the Files subtab inside the Communication tab. I have tried setting the Attachments Received folder as the destination instead of the custom 'Freight BOL' folder we created in the file cabinet, but this does not attach it to the actual record.
Below is a snippet of our import code (I have altered it to avoid sharing the exact code) that currently saves the 'BOL' pdf file to our file cabinet:
// above this is the POST containing the API key, etc.
    if (response.code != 200) {
        var responseBody = JSON.parse(response.body);
        log.error({
            title: 'order #' + sonum + ' shipment import: ' + response.code,
            details: responseBody.Message
        });
        log.error({
            title: 'order #' + sonum + ' shipment import messageBody',
            details: JSON.stringify(messageBody)
        });
        return;
    }
    // save BOL to Freight BOL folder in File Cabinet
    var bolFile = saveBOL(response);
    var fileId = bolFile.save();
} catch (e) {
    log.error({
        title: 'order #' + sonum + ' error: ' + e.name,
        details: e.message
    });
    log.error({
        title: 'order #' + sonum + ' DLS Import messageBody',
        details: messageBody
    });
}
}
function saveBOL(response) {
    var responseBody = JSON.parse(response.body);
    var bolFile = file.create({
        name: responseBody.FileName,
        fileType: file.Type.PDF,
        contents: responseBody.FileBytes,
        folder: //folderidishere,
        isOnline: false
    });
    var fileId = bolFile.save();
    return bolFile;
}
I am struggling to find anything via documentation or SuiteAnswers regarding saving a file as an attachment to an order via SuiteScript 2.0. Any suggestions/help would be greatly appreciated!
Use the attach method of the N/record module.
var id = record.attach({
    record: {
        type: 'file',
        id: fileId // the internal id returned by file.save()
    },
    to: {
        type: 'itemfulfillment',
        id: <internalid of item fulfillment>
    }
});
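Tying it to the question's snippet, a sketch of how the pieces could fit together in an afterSubmit user event (the script wrapper and context names here are assumptions, not part of the original code):
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
define(['N/record', 'N/file'], function (record, file) {
    function afterSubmit(context) {
        // ... carrier API call that produces `response`, as in the question ...
        var fileId = saveBOL(response).save(); // internal id of the saved BOL pdf
        // Attach the file to this transaction so it shows up under
        // Communication > Files on the item fulfillment
        record.attach({
            record: { type: 'file', id: fileId },
            to: { type: 'itemfulfillment', id: context.newRecord.id }
        });
    }
    return { afterSubmit: afterSubmit };
});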
I'm trying to send an audio blob to a Google Drive folder. To do that, I convert the blob into a file before sending it.
From the start I have been receiving this error:
Error: File not found.
code: 404, errors: [ { domain: 'global',
reason: 'notFound',
message: 'File not found: 1aazd544z3FOCAsdOA5E7XcOaS3297sU.',
locationType: 'parameter',
location: 'fileId' } ] }
Progressive edit: so far I have converted my audio blob to a base64 string in order to ease processing.
But I still fail to write a file from my base64 audio blob.
Here is my driveApi.js:
// request data from req.body
var data = req.body.data; // data is presented as a base64 string
var name = req.body.word;
(...)
// WRITE FILE AND STORE IT IN BODY HEADER PROPERTY
body: fs.writeFile((name + ".mp3"), data.substr(data.indexOf(',') + 1), {encoding: 'base64'}, function (err) {
    console.log('File created')
})
Three steps: create a temporary file from your base64 data outside of the drive.files.create function, give this file a specific name, e.g. tempFile (you can also make the name unique with a time value), and then pass this file to fs.createReadStream to upload it to Google Drive.
Some hints:
Firstly, use path.join(__dirname, name + "-" + Date.now() + ".ext") to build the file name.
Secondly, make the process asynchronous to avoid a data-flow conflict (trying to read the file before it has been created), so call drive.files.create from the fs.writeFile callback.
Thirdly, destroy the tempFile once the operation is done; that lets you automate the whole process.
I'll let you dive into the methods you need, but basically fs should do the job.
Again, be careful with the data flow and use callbacks to control it. Your code can fail simply because a function ran before its input existed.
Some links :
https://nodejs.org/api/path.html
https://nodejs.org/api/fs.html#fs_fs_writefile_file_data_options_callback
Here is an example:
// datevalues = some time value
fs.writeFile(
    path.join(__dirname, name + "-" + datevalues + ".mp3"),
    data.substr(data.indexOf(',') + 1),
    {encoding: 'base64'},
    // callback
    function (err) {
        if (err) { console.log("error writing file : " + err) }
        console.log('File created')
        console.log("WRITING") // control data flow
        fileCreate(name)
    })
function fileCreate(name) {
    // upload file in specific folder
    var folderId = "someID";
    var fileMetadata = {
        'name': name + ".mp3",
        parents: [folderId]
    };
    console.log("MEDIA") // control data flow
    var media = {
        mimeType: 'audio/mp3',
        body: fs.createReadStream(path.join(__dirname, name + "-" + datevalues + ".mp3"))
    };
    drive.files.create({
        auth: jwToken,
        resource: fileMetadata,
        media: media,
        fields: 'id'
    }, function (err, file) {
        if (err) {
            // Handle error
            console.error(err);
        } else {
            console.log('File Id: ', file.data.id);
        }
        // make a callback to a deleteFile() function // I let you search for it
    });
}
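For the cleanup step left to the reader above, a minimal deleteFile sketch (assuming the same naming scheme and that datevalues is still in scope):
function deleteFile(name) {
    // remove the temporary mp3 once the upload has finished
    fs.unlink(path.join(__dirname, name + "-" + datevalues + ".mp3"), function (err) {
        if (err) { return console.log("error deleting temp file : " + err) }
        console.log("Temp file deleted")
    })
}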
How about this modification? I'm not sure about the condition of the blob from reactApp.js, so could you please try this? In this modification, the file or blob from reactApp.js is used.
Modified script:
var stream = require('stream'); // Added

module.exports.uploadFile = function (req) {
    var file;
    console.log("driveApi upload reached")
    function blobToFile(req) {
        file = req.body.blob
        // A Blob() is almost a File() - it's just missing the two properties below, which we add
        file.lastModifiedDate = new Date();
        file.name = req.body.word;
        return file;
    }
    var bufStream = new stream.PassThrough(); // Added
    bufStream.end(file); // Or bufStream.end(### blob from reactApp.js ###) Added
    // upload file in specific folder
    var folderId = "1aa1DD993FOCADXUDNJKLfzfXcOaS3297sU";
    var fileMetadata = {
        "name": req.body.word,
        parents: [folderId]
    }
    var media = {
        mimeType: "audio/mp3",
        body: bufStream // Modified
    }
    drive.files.create({
        auth: jwToken,
        resource: fileMetadata,
        media: media,
        fields: "id"
    }, function (err, file) {
        if (err) {
            // Handle error
            console.error(err);
        } else {
            console.log("File Id: ", file.id);
        }
        console.log("driveApi upload accomplished")
    });
}
If this didn't work, I'm sorry.
I am using Google's API client for Node.js:
https://www.npmjs.com/package/googleapis
I am trying to get an array of all the channels that belong to the person who logged into my website with their Google account.
I am using this scope for this purpose:
https://www.googleapis.com/auth/youtube.readonly
Now here is part of my code:
app.get("/oauthcallback", function(req, res) {
//google redirected us back in here with random token
var code = req.query.code;
oauth2Client.getToken(code, function(err, tokens) { //let's check if the query code is valid.
if (err) { //invalid query code.
console.log(err);
res.send(err);
return;
}
//google now verified that the login was correct.
googleAccountVerified(tokens, res); //now decide what to do with it
});
});
function googleAccountVerified(tokens, res) { // successfully verified.
    // user was verified by google, continue.
    oauth2Client.setCredentials(tokens); // save tokens to an object
    // now ask google for the user's details
    // with the verified tokens you got.
    youtube.channels.list({
        forUsername: true,
        part: "snippet",
        auth: oauth2Client
    }, function (err, response) {
        if (err) {
            res.send("Something went wrong, can't get your google info");
            return;
        }
        console.log(response.items[0].snippet);
        res.send("test");
    });
}
Now, in this console.log:
console.log(response.items[0].snippet);
I am getting the same info, no matter what account I am using to log into my website:
{ title: 'True',
  description: '',
  publishedAt: '2005-10-14T10:09:11.000Z',
  thumbnails:
   { default: { url: 'https://i.ytimg.com/i/G9p-zLTq1mO1KAwzN2h0YQ/1.jpg?v=51448e08' },
     medium: { url: 'https://i.ytimg.com/i/G9p-zLTq1mO1KAwzN2h0YQ/mq1.jpg?v=51448e08' },
     high: { url: 'https://i.ytimg.com/i/G9p-zLTq1mO1KAwzN2h0YQ/hq1.jpg?v=51448e08' } },
  localized: { title: 'True', description: '' } }
If I do console.log(response), which logs the entire response, I get:
{ kind: 'youtube#channelListResponse',
  etag: '"m2yskBQFythfE4irbTIeOgYYfBU/ch97FwhvtkdYcbQGBeya1XtFqyQ"',
  pageInfo: { totalResults: 1, resultsPerPage: 5 },
  items:
   [ { kind: 'youtube#channel',
       etag: '"m2yskBQFythfE4irbTIeOgYYfBU/bBTQeJyetWCB7vBdSCu-7VLgZug"',
       id: 'UCG9p-zLTq1mO1KAwzN2h0YQ',
       snippet: [Object] } ] }
So, two problems here:
1) How do I get an array of the channels owned by the logged-in user? Inside the array I need an object for each channel with basic info like the channel name and profile picture.
2) Why am I getting the info of some random YouTube channel called "True"?
Not sure about question one, but for question two: you get the information for the channel called "True" because you are asking for it. forUsername: true looks up the channel whose username is "true".
I would hope that once you correct this, the response may contain more than one channel if the user owns more than one.
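For question one, a sketch of what I would try, assuming the goal is the authorized user's own channels: channels.list accepts a mine parameter for exactly that.
youtube.channels.list({
    mine: true, // channels owned by the authorized user
    part: "snippet",
    auth: oauth2Client
}, function (err, response) {
    if (err) { return console.log(err); }
    // one object per channel: name and profile picture
    var channels = response.items.map(function (channel) {
        return {
            name: channel.snippet.title,
            pic: channel.snippet.thumbnails.default.url
        };
    });
    console.log(channels);
});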
Just a follow-up to the question about basic info.
You don't use the YouTube API to get an account's profile information. Instead, try Retrieve Profile Information with G+:
To retrieve profile information for a user, use the people.get API method. To get profile information for the currently authorized user, use the userId value of me.
JavaScript example:
// This sample assumes a client object has been created.
// To learn more about creating a client, check out the starter:
// https://developers.google.com/+/quickstart/javascript
gapi.client.load('plus', 'v1', function () {
    var request = gapi.client.plus.people.get({
        'userId': 'me'
    });
    request.execute(function (resp) {
        console.log('Retrieved profile for:' + resp.displayName);
    });
});
Google Sign-in for Websites also enables Getting profile information:
After you have signed in a user with Google using the default scopes, you can access the user's Google ID, name, profile URL, and email address.
To retrieve profile information for a user, use the getBasicProfile() method. For example:
if (auth2.isSignedIn.get()) {
    var profile = auth2.currentUser.get().getBasicProfile();
    console.log('ID: ' + profile.getId());
    console.log('Full Name: ' + profile.getName());
    console.log('Given Name: ' + profile.getGivenName());
    console.log('Family Name: ' + profile.getFamilyName());
    console.log('Image URL: ' + profile.getImageUrl());
    console.log('Email: ' + profile.getEmail());
}
Below is my file upload code:
/** Setting up storage using multer-gridfs-storage */
var storage = GridFsStorage({
    gfs: gfs,
    filename: function (req, file, cb) {
        var datetimestamp = Date.now();
        cb(null, file.fieldname + '-' + datetimestamp + '.' + file.originalname.split('.')[file.originalname.split('.').length - 1]);
    },
    /** With gridfs we can store additional meta-data along with the file */
    metadata: function (req, file, cb) {
        cb(null, { originalname: file.originalname });
    },
    root: 'ctFiles' // root name for the collection to store files into
});
var upload = multer({ // multer settings for single upload
    storage: storage
}).single('file');

/** API path that will upload the files */
app.post('/upload', function (req, res) {
    upload(req, res, function (err) {
        if (err) {
            res.json({error_code: 1, err_desc: err});
            return;
        }
        console.log(req.file); // multer puts the stored file's info on req.file
        res.json({error_code: 0, err_desc: null});
    });
});
I want to store the user's name, email, and file path in a user collection.
This is my UserSchema:
var UserSchema = new Schema({
    name: String,
    email: {
        type: String,
        lowercase: true
    },
    filepath: String
});
This is how the image has been stored in the collection:
{
    "_id" : ObjectId("58fb894111387b23a0bf2ccc"),
    "filename" : "file-1492879681306.PNG",
    "contentType" : "image/png",
    "length" : 67794,
    "chunkSize" : 261120,
    "uploadDate" : ISODate("2017-04-22T16:48:01.350Z"),
    "aliases" : null,
    "metadata" : {
        "originalname" : "Front.PNG"
    },
    "md5" : "404787a5534d0479bd55b2793f2a74b5"
}
This is my expected result: in the user collection I should get data like this:
{
    "name" : "asdf",
    "email" : "asdf@gmail.com",
    "filepath" : "file/file-1492879681306.PNG"
}
There is a difference between storing in GridFS and storing in an ordinary MongoDB collection. The first is designed to efficiently store files, optionally with additional information, while the latter allows you to set any schema and store any information. You cannot deal with both the same way.
If what you want is to establish a relationship between a file and a schema in your application, you can do it in one of two ways.
Store the desired schema information in the metadata of the file.
Pro: All additional data is stored within the file, and deleting the file automatically cleans the additional information.
Con: Queries can become complex because all of them must be prefixed by a metadata field, and all information is mixed together.
This could be the output of one of your files
{
    "_id" : ObjectId("58fb894111387b23a0bf2ccc"),
    "filename" : "file-1492879681306.PNG",
    "contentType" : "image/png",
    "length" : 67794,
    "chunkSize" : 261120,
    "uploadDate" : ISODate("2017-04-22T16:48:01.350Z"),
    "aliases" : null,
    "metadata" : {
        "originalname" : "Front.PNG",
        "name" : "asdf",
        "email" : "asdf@gmail.com",
        "filepath" : "file/file-1492879681306.PNG"
    },
    "md5" : "404787a5534d0479bd55b2793f2a74b5"
}
Setting information like this is easy; just change the metadata function a little bit.
....
/** With gridfs we can store additional meta-data along with the file */
metadata: function (req, file, cb) {
    var metadata = {
        originalname: file.originalname,
        // get this information somehow
        name: "asdf",
        email: "asdf@gmail.com",
        filepath: "file/file-1492879681306.PNG"
    };
    cb(null, metadata);
},
....
and this is how you should access them, although you could use db.collection or Mongoose for the same purpose:
const mongodb = require('mongodb');
const GridFSBucket = mongodb.GridFSBucket;
const MongoClient = mongodb.MongoClient;

MongoClient.connect('mongodb://yourhost:27017/database').then((db) => {
    const bucket = new GridFSBucket(db, {bucketName: 'ctFiles'});
    bucket
        // use dot notation; an exact match on the whole metadata
        // subdocument would fail because it also contains originalname
        .find({'metadata.email': 'asdf@gmail.com'})
        .toArray()
        .then((fileInfoArr) => {
            console.log(fileInfoArr);
        });
});
Then you can use the fileInfo array to create streams and read the file.
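For instance, continuing inside the .then callback above, a minimal sketch that streams the first match to disk (the destination path is just an example):
const fs = require('fs');
const fileInfo = fileInfoArr[0]; // each entry carries the stored file's _id
const readStream = bucket.openDownloadStream(fileInfo._id);
// stream the GridFS contents to disk
readStream.pipe(fs.createWriteStream('/tmp/' + fileInfo.filename));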
Store the schema independently and add an ObjectId field that points to the id of the stored file.
Pro: Queries and updates over your UserSchema are easier to compose and understand because schema definitions are stored in different collections.
Con: Now you have two collections to worry about and have to manually keep both in sync; when a file is deleted you should delete the user data, and vice versa.
This is how your UserSchema could look:
var UserSchema = new Schema({
    name: String,
    email: {
        type: String,
        lowercase: true
    },
    filepath: String,
    fileId: Schema.Types.ObjectId
});
and this is how you access the files
UserModel.findOne({'email': 'asdf@gmail.com'}, function (err, user) {
    // deal with error
    // store and get the db object somehow
    const bucket = new GridFSBucket(db, {bucketName: 'ctFiles'});
    // A download stream reads the file from GridFS
    const readStream = bucket.openDownloadStream(user.fileId);
    readStream.pipe(/* some writable stream */);
});
Remember that GridFS stores files; therefore you should use streams whenever possible to read and write data, handling backpressure correctly.
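The write side works the same way; a minimal upload sketch with openUploadStream (the file name and metadata values are placeholders):
const fs = require('fs');
// pipe a local file into GridFS; pipe() handles backpressure for you
fs.createReadStream('./Front.PNG')
    .pipe(bucket.openUploadStream('Front.PNG', {
        metadata: { originalname: 'Front.PNG', email: 'asdf@gmail.com' }
    }))
    .on('error', (err) => console.error(err))
    .on('finish', () => console.log('upload finished'));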
PS:
Stuff like root belongs to GridStore, which is obsolete. You should use GridFSBucket whenever possible. The new version of multer-gridfs-storage also deals with bucketName instead of root.
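For reference, a sketch of the equivalent setup with a recent multer-gridfs-storage version, where url and a file function replace gfs and root (check the package README for the exact API of your version):
const multer = require('multer');
const GridFsStorage = require('multer-gridfs-storage');

const storage = new GridFsStorage({
    url: 'mongodb://yourhost:27017/database',
    file: (req, file) => ({
        filename: file.fieldname + '-' + Date.now() + '.' + file.originalname.split('.').pop(),
        bucketName: 'ctFiles', // replaces the old `root` option
        metadata: { originalname: file.originalname }
    })
});

const upload = multer({ storage: storage }).single('file');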
I have JavaScript code that gets the XML list of an S3 bucket (http://BUCKETNAME.s3.REGION.amazonaws.com/) and uses it as a playlist:
AWS.config = {
    "accessKeyId": "ACCESS KEY",
    "secretAccessKey": "SECRET KEY",
    "region": "REGION"
};

// Create S3 service object
s3 = new AWS.S3();

var params = {
    Bucket: 'BUCKET NAME', /* required */
    Delimiter: '',
    EncodingType: 'url',
    Marker: '',
    MaxKeys: 0,
    Prefix: '',
    RequestPayer: 'requester'
};
s3.listObjects(params, function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
        console.log('the list is approved'); // successful response
        // Here is the function that converts the file list in the xml to an array
        var b = document.documentElement;
        b.setAttribute('data-useragent', navigator.userAgent);
        b.setAttribute('data-platform', navigator.platform);
        var radioName;
        var radioTitle;
        var tracklength = 0;
        // setupPlayer function
        function setupPlayer(href, name) {
            radioName = href;
            radioTitle = name;
            $.ajax({
                type: "GET",
                url: "http://BUCKETNAME.s3.REGION.amazonaws.com/?prefix=radio/" + radioName + "/",
                dataType: "xml",
                success: function (xml) {
                    //tracklength=0;
                    tracks = [];
                    $(xml).find('Contents').each(function () {
                        tracklength = tracklength + 1;
                        tracks.push({
                            "track": tracklength,
                            "file": $(this).find('Key').text()
                        });
                    });
                    radio(tracks);
                },
                error: function () {
                    alert("An error occurred while processing the XML file.");
                }
            });
        }
    }
});
As you can see, in this code I take the XML list and add a radio name (which is the folder name); after that, the AJAX call saves all the file names in that folder to a tracks array.
This code works perfectly if List permission is granted to Everyone, in which case no AWS config is needed here: I can run the code inside the else statement of the listObjects callback on its own and it will give me the same response.
What I want is to grant access to this content only through the access key and secret key, so that this function does not work without them.
So no one can access the XML list except those who have the access and secret keys.
Is that possible?
(This is not the full code, but you get the idea: accessing the XML file of the bucket, getting the keys, and saving them to an array.)
You should use s3.getObject (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property) to get your XML files instead of the $.ajax call. SDK requests are signed with the credentials in AWS.config, so the bucket no longer needs to grant List or Read to Everyone.
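A minimal sketch of what that could look like (the key here is hypothetical; point it at whatever the $.ajax call fetched before):
s3.getObject({
    Bucket: 'BUCKET NAME',
    Key: 'radio/' + radioName + '/playlist.xml'
}, function (err, data) {
    if (err) { return console.log(err, err.stack); }
    var xmlText = data.Body.toString('utf-8'); // data.Body is a Buffer
    // parse with $.parseXML(xmlText) and build the tracks array as before
});
For the per-folder listing that setupPlayer does, the s3.listObjects call you already have does the same job as the public ?prefix= URL if you pass Prefix: 'radio/' + radioName + '/' in params, and it is likewise signed.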