Copy a storage file within a Firebase Cloud Function - javascript

I am calling a Cloud Function in order to duplicate a record I have in Firestore. One of the fields is an image, so the function first tries to copy the image and then duplicates the record.
This is the code:
export async function copyContentFunction(data: any, context: any): Promise<String> {
  if (!context.auth || !context.auth.token.isAdmin) {
    throw new functions.https.HttpsError('unauthenticated', 'Auth error.');
  }

  const id = data.id;
  const originalImage = data.originalImage;
  const copy = data.copy;
  if (id === null || originalImage === null || copy === null) {
    throw new functions.https.HttpsError('invalid-argument', 'Missing mandatory parameters.');
  }

  console.log(`id: ${id}, original image: ${originalImage}`);
  try {
    // Copy the image
    await admin.storage().bucket('content').file(originalImage).copy(
      admin.storage().bucket('content').file(id)
    );

    // Create new content
    const ref = admin.firestore().collection('content').doc(id);
    await ref.set(copy);
    return 'ok';
  } catch {
    throw new functions.https.HttpsError('internal', 'Internal error.');
  }
}
I have tried multiple combinations but this code always fails. For some reason the process of copying the image is failing. Am I doing anything wrong?
Thanks.

Using the copy() method in a Cloud Function should work without problems. You don't share any details about the error you get (I recommend using catch (error) instead of a bare catch), but I can see two potential problems with your code:
The file corresponding to originalImage does not exist;
The content bucket does not exist in your Cloud Storage instance.
The second problem usually comes from the common mistake of mixing up the concepts of buckets and folders (or directories) in Cloud Storage.
Google Cloud Storage does not actually have genuine "folders". In the Cloud Storage console, the files in your bucket are presented in a hierarchical tree of folders (just like the file system on your local hard disk), but this is just a way of presenting the files: there are no genuine folders/directories in a bucket. The Cloud Storage console simply uses the different parts of the file paths to "simulate" a folder structure, using the "/" delimiter character.
This doc on Cloud Storage and gsutil explains and illustrates this "illusion of a hierarchical file tree" very well.
So, if you want to copy a file from your default bucket to a content "folder", do as follows:
await admin.storage().bucket().file(`content/${originalImage}`).copy(
  admin.storage().bucket().file(`content/${id}`)
);
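Separately from the bucket/folder fix, catching the error object instead of using a bare catch will surface the underlying Cloud Storage error in the function logs, which makes this kind of failure much easier to diagnose. A minimal sketch of that catch block, reusing the question's functions import:
} catch (error) {
  // log the real error before mapping it to an HttpsError
  console.error('Copy failed:', error);
  throw new functions.https.HttpsError('internal', 'Internal error.');
}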

Related

Node-FTP duplicating operations upon uploading a file

There is such a thing as 'callback hell', but it was the only way I could get a file from a server to my VPS, edit it, and upload it back. The process was simple:
Download a .json file from the ftp server
Edit the .json file on the pc
Upload the .json file and delete the pc's copy.
However, my problem is this: although it downloads once, it repeats the upload based on how many times I issue the command during one session (command #1 does it once, command #2 does it twice, etc.).
I tried to write it imperatively, but that got nullified, so I had to resort to callback hell to run the code almost properly. The trigger works to initialize the command, but the command and session get goofed up.
((  // declaring my variables as parameters
  ftp = new (require('ftp'))(),
  fs = require('fs'),
  serverFolder = './Path/Of/Server/',
  localFolder = './Path/Of/Local/',
  file = 'some.json',
  { log } = console
) => {
  // run when the server connection is ready
  ftp.on('ready', () => {
    // collect a list of files from the server folder
    ftp.list(serverFolder + file, (errList, list) =>
      errList || typeof list === 'object' &&
      list.forEach($file =>
        // if the individual file matches, resume to download the file
        $file.name === file && (
          ftp.get(serverFolder + file, (errGet, stream) =>
            errGet || (
              log('files matched! carry on with the operation...'),
              // pipe the download into a local write stream
              stream.pipe(fs.createWriteStream(localFolder + file)),
              stream.once('close', () => {
                // check if the file has a proper size
                fs.stat(localFolder + file, (errStat, stat) =>
                  errStat || stat.size === 0
                    // will destroy server connection if bytes = 0
                    ? (ftp.destroy(), log('the file has no value'))
                    // uploads if the file has a size, edits, and ships
                    : (editThisFile(),
                      ftp.put(
                        fs.createReadStream(localFolder + file),
                        serverFolder + file, err => err || (
                          ftp.end(), log('process is complete!')
                        )
                      )
                      // editThisFile() is a place-holder editor
                      // edits by path, and object
                    )
                )
              })
            )
          )
        )
      )
    );
  });

  ftp.connect({
    host: 'localHost',
    password: '1Forrest1!',
    port: '21',
    keepalive: 0,
    debug: console.log.bind(console)
  });
})()
The main problem is: it repeats the command over and over as 'carry over' for some reason.
Edit: although the merits of 'programming style' differ from the common meta, it all leads to the same issue of callback hell. Any recommendations are welcome.
For readability, I had help editing my code: Better Readability version
The ftp module's API leads to callback hell. It also hasn't been maintained for a while and is buggy. Try a promise-based module like basic-ftp.
With promises the code flow becomes much easier to reason about, and errors don't require specific handling unless you want it.
const ftp = require('basic-ftp')
const fsp = require('fs').promises

async function updateFile(localFile, serverFile) {
  const client = new ftp.Client()
  try {
    await client.access({
      host: 'localHost',
      password: '1Forrest1!',
    })
    await client.downloadTo(localFile, serverFile)
    const stat = await fsp.stat(localFile)
    if (stat.size === 0) throw new Error('File has no size')
    await editThisFile(localFile)
    await client.uploadFrom(localFile, serverFile)
  } finally {
    // always close the connection, even if a step above throws
    client.close()
  }
}

const serverFolder = './Path/Of/Server/'
const localFolder = './Path/Of/Local/'
const file = 'some.json'

updateFile(localFolder + file, serverFolder + file).catch(console.error)

How to read a file being saved to Parse server with Cloud Code, before actually saving it?

I'm trying to use Cloud Code to check whether a user-submitted image is in a supported file type and not too big.
I know I need to do this verification server-side and I think I should do it with Cloud Code using beforeSave – the doc even has a specific example about data validation, but it doesn't explain how to handle files and I couldn't figure it out.
I've tried the documented method for saving files, i.e.
file = fileUploadControl.files[0];
var parseFile = new Parse.File(name, file);
currentUser.set("picture", parseFile);
currentUser.save();
and in the Cloud Code,
Parse.Cloud.beforeSave(Parse.User, (request, response) => { // code here });
But 1. this still actually saves the file on my server, right? I want to check the file size first to avoid saving too many big files...
And 2. Even then, I don't know what to do in the beforeSave callback. It seems I can only access the URL of the saved image (proof that it has been uploaded), and it seems very counter-intuitive to have to do another https request to check the file size and type before deciding whether to proceed with attaching the file to the User object.
(I'm currently using remote-file-size and file-type to check the size and type of the uploaded file, but no success here either).
I also tried calling a Cloud function, but it feels like I'm not doing the right thing, and besides I'm running into the same issues.
I can call a Cloud function and pass a saved ParseFile as a parameter, and then I know how to save it to the User object from the Cloud Code using the masterkey, but as above it still involves uploading the file to the server and then re-fetching it using its URL.
Am I missing anything here?
Is there no way to do something like a beforeSave on Parse.File, and then stop the file from being saved if it doesn't meet certain criteria?
Cheers.
If you have to do something with files, Parse lets you override the files adapter to handle file operations.
You can indicate the files adapter to use in your ParseServer instantiation:
var FSStoreAdapter = require('./file_adapter');

var api = new ParseServer({
  databaseURI: databaseUri,
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  appId: process.env.APP_ID,
  filesAdapter: FSStoreAdapter, // YOUR FILE ADAPTER
  masterKey: process.env.MASTER_KEY, // Add your master key here. Keep it secret!
  serverURL: "https://yourUrl", // Don't forget to change to https if needed
  publicServerURL: "https://yourUrl",
  liveQuery: {
    classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions
  },
  maxUploadSize: "500mb" // you will now have a 500mb limit :)
});
That said, you can also specify a maxUploadSize in your instantiation, as you can see in the last line.
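If you need more than a raw size cap (for example rejecting unsupported content types before anything is written), the same files-adapter hook can be used for validation. Below is a minimal sketch, not a drop-in implementation: it assumes the standard FilesAdapter interface (createFile/deleteFile/getFileData/getFileLocation) and simply wraps whatever adapter you already use; ValidatingFilesAdapter, MAX_BYTES and ALLOWED_TYPES are made-up names for illustration.
// file_adapter.js - hypothetical validating wrapper around an existing files adapter
const MAX_BYTES = 5 * 1024 * 1024;                  // assumed size limit
const ALLOWED_TYPES = ['image/png', 'image/jpeg'];  // assumed allowed content types

class ValidatingFilesAdapter {
  constructor(wrappedAdapter) {
    this.adapter = wrappedAdapter;
  }
  createFile(filename, data, contentType) {
    // data arrives as a Buffer, so size and type can be checked before anything is stored
    if (data.length > MAX_BYTES) {
      return Promise.reject(new Error('File too large'));
    }
    if (!ALLOWED_TYPES.includes(contentType)) {
      return Promise.reject(new Error('Unsupported file type'));
    }
    return this.adapter.createFile(filename, data, contentType);
  }
  deleteFile(filename) { return this.adapter.deleteFile(filename); }
  getFileData(filename) { return this.adapter.getFileData(filename); }
  getFileLocation(config, filename) { return this.adapter.getFileLocation(config, filename); }
}

module.exports = ValidatingFilesAdapter;
An instance of it would then be passed as the filesAdapter option shown above.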
You have to use saveInBackground:
file = ParseFile("filename", file)
file?.saveInBackground({ e ->
  if (e == null) {
    // saved successfully
  } else {
    Toast.makeText(applicationContext, "Error: $e", Toast.LENGTH_SHORT).show()
    e.printStackTrace()
    Log.d("DEBUG", "file " + e.code)
  }
}, { percentDone ->
  Log.d("DEBUG", "file:" + percentDone!!)
})

Preventing runaway AWS Lambda function triggers

I have a Lambda function that is triggered when a folder object -- for example,
67459e53-20cb-4e7d-8b7a-10e4cd165a44
is created in the root bucket.
Also in the root is index.json, the content index -- a simple array of these folders. For example, [ folder1, folder2, ..., folderN ].
Every time a folder object (like above) is added, the Lambda function triggers, gets index.json, adds the new folder object to the JSON array, and then puts index.json back.
Obviously, this createObject event is going to trigger the same Lambda function.
My code, below, should only process the event object if it's a folder; i.e., a key object with a / at the end. (A stackoverflow user was kind enough to help me with this solution.)
I have tested this code locally with lambda-local and everything looks good. My concern is (fear of God) that I could have RUNAWAY EXECUTION.
I have scoured the Lambda best practices and googled for "infinite loops" and the like, but cannot find a way to ENSURE that my Lambda won't execute more than, say, 50 times per day.
Yes, I could have the Lambda that actually creates the folder also write to index.json but that Lambda is part of the AWS Video-on-Demand reference example, and I don't really understand it yet.
Two questions: Can I configure notifications in S3 such that it filters on a (random folder key name with a) suffix of / as described
here? And/Or how can I configure this Lambda in the console to absolutely prevent runaway execution?
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var util = require('util');

// constants
const VOD_DEST_FOLDER = 'my-triggering-bucket'; // not used bc part of event object
const CONTENT_INDEX_FILENAME = 'index.json';

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = async (event) => {
  try {
    console.log('Event', JSON.stringify(event));

    // Bucket name.
    const triggerBucket = event.Records[0].s3.bucket.name;
    // New folder key added.
    const newKey = event.Records[0].s3.object.key;

    // Add newKey to content index ONLY if it is a folder object. If any other object
    // is added in the bucket root then it won't result in a new write.
    if (newKey.indexOf('/') > -1) {
      // Get existing data.
      let existing = await s3.getObject({
        Bucket: triggerBucket,
        Key: CONTENT_INDEX_FILENAME
      }).promise();

      // Parse JSON object.
      let existingData = JSON.parse(existing.Body);

      // Get the folder name.
      const folderName = newKey.substring(0, newKey.indexOf("/"));

      // Check if we have an array.
      if (!Array.isArray(existingData)) {
        // Create array.
        existingData = [];
      }

      existingData.push(folderName);

      await s3.putObject({
        Bucket: triggerBucket,
        Key: CONTENT_INDEX_FILENAME,
        Body: JSON.stringify(existingData),
        ContentType: 'application/json'
      }).promise();

      console.log('Added new folder name ' + folderName);
      return folderName;
    } else {
      console.log('Not a folder.');
      return 'Ignored';
    }
  } catch (err) {
    return err;
  }
};
You can configure S3 notifications with key name filtering. Here's a step-by-step guide on how to do it in the web console. I think if you add a / suffix filter to the notification that triggers your Lambda, you will achieve your goal.
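For reference, this is roughly what that filter looks like when set programmatically with the aws-sdk instead of the console. This is only a sketch: the bucket name and Lambda ARN are placeholders, and putBucketNotificationConfiguration replaces the bucket's whole notification configuration, so include any other rules you already have.
// Sketch: only trigger the Lambda for keys ending in "/" (i.e. folder objects).
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putBucketNotificationConfiguration({
  Bucket: 'my-triggering-bucket', // placeholder bucket name
  NotificationConfiguration: {
    LambdaFunctionConfigurations: [{
      LambdaFunctionArn: 'arn:aws:lambda:us-east-1:123456789012:function:indexFolders', // placeholder ARN
      Events: ['s3:ObjectCreated:*'],
      Filter: {
        Key: {
          FilterRules: [{ Name: 'suffix', Value: '/' }]
        }
      }
    }]
  }
}).promise()
  .then(() => console.log('Notification filter set'))
  .catch(console.error);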

Electron will-download keeps getting interrupted

I am trying to download a file, but it keeps getting interrupted, and I have no idea why. I cannot find any information on how to debug the reason it got interrupted either.
Here is where I am saving the file:
C:\Users\rnaddy\AppData\Roaming\Tachyon\games\murware\super-chain-reaction\web.zip
window.webContents.session.on('will-download', (event, item, webContents) => {
  let path = url.parse(item.getURL()).pathname;
  let dev = path.split('/')[3] || null;
  let game = path.split('/')[4] || null;

  if (!dev && !game) {
    item.cancel();
  } else {
    item.setSavePath(Settings.fileDownloadLocation(dev, game, 'web'));

    item.on('updated', (event, state) => {
      let progress = 0;
      if (state == 'interrupted') {
        console.log('Download is interrupted but can be resumed');
      } else if (state == 'progressing') {
        progress = item.getReceivedBytes() / item.getTotalBytes();
        if (item.isPaused()) {
          console.log('Download is paused');
        } else {
          console.log(`Received bytes: ${item.getReceivedBytes()}; Progress: ${progress.toFixed(2)}%`);
        }
      }
    });
  }
});
Here is my listener that will trigger the above:
ipcMain.on(name, (evt) => {
  window.webContents.downloadURL('http://api.gamesmart.com/v2/download/murware/super-chain-reaction');
});
Here is the output that I am getting in my console:
Received bytes: 0; Progress: 0.00%
Received bytes: 233183; Progress: 0.02%
Download is interrupted but can be resumed
I have a hosts file entry set up:
127.0.0.1 api.gamesmart.com
When I try to access the path http://api.gamesmart.com/v2/download/murware/super-chain-reaction in chrome, the file downloads just fine into my Downloads folder. So, what is causing this?
If you set a specific directory for the download, you should pass the full file path, including the file name, to item.setSavePath(). The best way to do that is to fetch the file name from the DownloadItem object (item in your case) itself: you can use item.getFilename() to get the name of the current download item easily. here is the doc
There is also a convenient way to get frequently used public system directory paths in Electron: the app.getPath(name) method, where name is one of the strings pre-defined by Electron for several directories. here is the doc
So, your complete save path would be: app.getPath("downloads") + "/" + item.getFilename()
In your case, if you are OK with your file path extraction method, the only thing you are missing is the file name at the end of the download path.
Of course you can use any other string as the file name if you wish. But remember to use the correct extension. :)
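Putting those two suggestions together, the save-path line could look something like this. This is only a sketch: item is the DownloadItem from the will-download handler above, and path.join is used instead of manual string concatenation so the separators are correct on every platform.
const { app } = require('electron');
const path = require('path');

// inside the 'will-download' handler:
item.setSavePath(path.join(app.getPath('downloads'), item.getFilename()));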
My solution was to use the correct Windows path separator (\), e.g. 'directory\\file.zip'. Generally, Node.js uses / for any platform, but this seems to be sensitive about the path separator.

How to create a folder in Firebase Storage?

So the new Firebase has support for storage using Google Cloud Platform.
You can upload a file to the images folder using:
var uploadTask = storageRef.child('images').put(file, metadata);
What if you want to create a subfolder images/user1234 dynamically using code?
The official sample does not show how to do that, nor do the official guide or reference docs.
Is the Firebase Console the only place where folders can be created manually?
The Firebase Storage API dynamically creates "folders" as intermediate products: if you create a file at images/user1234/file.txt, all intermediate "folders" like "images" and "user1234" will be created along the way. So your code becomes:
var uploadTask = storageRef.child('images/user1234/file.txt').put(file, metadata);
Note that you need to include the file name (file.txt in this example) in the child() call, since the reference should include the full path as well as the file name; otherwise your file will just be called images.
The Firebase Console does allow you to create a folder, since it's the easiest way to add files to a specific folder there.
But there is no public API to create a folder. Instead folders are auto-created as you add files to them.
You most certainly can create directories... with a little bit of playing with the references I did the following.
test = (e, v) => {
  let fileName = "filename"
  let newDirectory = "someDir"
  let storage = firebase.storage().ref(`images/${newDirectory}/${fileName}`)
  let file = e.target.files[0]

  if (file !== undefined && file.type === "image/png") {
    storage.put(file)
      .then(d => console.log('you did it'))
      .catch(d => console.log("do something"))
  }
}
String myFolder = "MyImages";
StorageReference riversRef = storageReference.child(myFolder).child("images/pic.jpg");
Firebase is lacking very important functionality; there is always the need to do some tricks to emulate behaviour that should be standard.
If you create a folder manually from the Firebase console, it will persist even when there are no more files in it.
If you create a folder dynamically and all its files get deleted at some point, the folder will disappear and be deleted as well.
I implemented a file manager using Firebase Storage, so when a user wants to upload a file he can do it through this interface and not through something external to the app such as the Firebase Console. You want to give the user the option to reorganize the files as he wants, but something as common as creating a new folder cannot be done without tricks. Why? Just because this is Firebase.
So, in order to emulate this behaviour, what I came up with was the following:
Create a reference with the new folder name.
Create a reference for a "ghost" file as a child of the folder's reference and always give it the same fixed name, e.g. '.ghostfile'
Upload the file to this newly created folder. Any method is valid, I just use uploadString.
Every time I list the files of a reference, I exclude any file with that name, so this "ghost" file is not shown in the file manager.
So, an example to create a folder:
async function createFolder (currentRef: StorageReference, folderName: string) {
  const newDir = ref(currentRef, folderName)
  const ghostFile = ref(newDir, '.ghostfile')
  await uploadString(ghostFile, '')
}
And an example to list the files:
async function loadLists (ref: StorageReference) {
  const { prefixes, items } = await listAll(ref)
  return {
    directories: prefixes,
    files: items.filter(file => file.name !== '.ghostfile')
  }
}
The Firebase console allows you to create a folder. I don't think there is another way to create one.
