Getting Google Cloud Platform custom metadata "firebaseStorageDownloadTokens" programmatically - JavaScript

I am using Firebase Cloud Storage.
I know how to get the URL of a file (tags.txt) using the Firebase SDK in client-side JavaScript:
storage.ref('tags.txt').getDownloadURL().then((url) => {
  console.log(url);
});
I want to get the download URL from Node.js. I know this function does not exist in the Node.js Admin SDK.
However, in Google Cloud Platform you can see the following key/value pair:
firebaseStorageDownloadTokens : 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633, the value being the token you want. From this token I can easily build the download URL I need.
The path to get there is:
Cloud Storage / "Bucket name" / Bucket details / tags.txt / EDIT METADATA / Custom metadata.
I tried this code to access that metadata:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get Bucket Metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata);
  }
  catch (err) {
    console.log(err.message);
  }
}
I get keys/values (not from a real project, though) such as:
bucket:'tags.appspot.com'
contentType:'text/plain'
crc32c:'Y1Sdxw=='
etag:'CI1EETD18Co9vECEAE='
generation:'162694124484794756'
id:'tags-admin.appspot.com/tags.txt/162694431484794756'
kind:'storage#object'
md5Hash:'P1YSFER4xSf5p0/KWrdQWx1z1Lyg=='
mediaLink:'https://storage.googleapis.com/download/storage/v1/b/tags-admin.appspot.com/o/tags.txt?generation=162694443184794756&alt=media'
metageneration:'1'
name:'tags.txt'
selfLink:'https://www.googleapis.com/storage/v1/b/tags-admin.appspot.com/o/tags.txt'
size:'5211247'
storageClass:'STANDARD'
timeCreated:'2021-07-22T09:01:24.862Z'
timeStorageClassUpdated:'2021-07-22T09:01:24.862Z'
updated:'2021-07-22T09:01:24.862Z'
But nothing regarding the key/value pair I want: firebaseStorageDownloadTokens : 0ea9a60a-7719-4c6b-9cb5-7fcf69d7c633.
If the key/value pair can be seen in the Google Cloud Platform console, I believe it must also be accessible through code.
Your help is appreciated.

I mixed up two projects. I re-tried it, and it works nicely. Using this method I can retrieve the file token and build the file URL around it on the back-end (a sketch of building that URL follows the code below), so the front-end function getDownloadURL() is no longer required.
Thanks for your help anyway.
The code becomes:
async function getBucketMetadata() {
  const bucketName = "gs://tags.appspot.com";
  try {
    // Get Bucket Metadata
    let metadata = await admin.storage().bucket(bucketName).file('tags.txt').getMetadata();
    console.log(metadata[0].metadata.firebaseStorageDownloadTokens);
  }
  catch (err) {
    console.log(err.message);
  }
}
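For reference, here is a minimal sketch of assembling the download URL from that token. It assumes the usual Firebase download URL format (https://firebasestorage.googleapis.com/v0/b/<bucket>/o/<encoded path>?alt=media&token=<token>); the helper name and arguments are mine, not part of the original code.

// Sketch only: build a Firebase-style download URL from the token read above.
function buildDownloadUrl(bucket, filePath, token) {
  // The object path must be URL-encoded (slashes become %2F).
  const encodedPath = encodeURIComponent(filePath);
  return `https://firebasestorage.googleapis.com/v0/b/${bucket}/o/${encodedPath}?alt=media&token=${token}`;
}

// e.g. buildDownloadUrl('tags.appspot.com', 'tags.txt', token)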

Related

'You Look Like a Robot' error while using Puppeteer in Firebase Cloud Functions

I'm using the puppeteer package to scrape data from a web page; the data is fetched by clicking a button on that page.
These are the presets I'm using:
const puppeteer = require('puppeteer-extra')
const StealthPlugin = require('puppeteer-extra-plugin-stealth')
puppeteer.use(StealthPlugin())
// Add adblocker plugin to block all ads and trackers (saves bandwidth)
const AdblockerPlugin = require('puppeteer-extra-plugin-adblocker')
puppeteer.use(AdblockerPlugin({ blockTrackers: true }))
Those settings are there so that I won't be detected as a robot.
Here's what I'm doing:
(Basically, clicking a button creates a request, the request returns JSON with data that fills in some text in a label, and then I read the data from that label.)
Here's how I'm clicking the button:
const box = await btn.boundingBox();
const x = box.x + (box.width / 2);
const y = box.y + (box.height / 2);
console.log(x, y);
await page.mouse.move(x, y, { step: 1 });
await page.mouse.click(x, y);
await page.waitForTimeout(4000);
Then afterwards I read the data from the page:
const [result] = await page.$x('//*[@id="content"]/div[1]/div[1]/div/div[2]/div');
// const txt = await result.evaluate.toString
let value = await page.evaluate(el => el.textContent, result);
console.log(value);
console.log('done?');
await browser.close();
const dic = {};
dic['status'] = 200;
dic['data'] = {"message": value};
response.send(dic);
I'm also using the 'on' method to check whether I'm getting a response from clicking the button, like so:
page.on('response', async response => {
  try {
    console.log(await response.json());
  } catch (error) {
    // console.error(error);
  }
});
And it does indeed get one.
The problem is that when I deploy it to the Firebase Cloud Functions server,
firebase deploy --only functions
and then trigger the function, I get a JSON that looks like this:
{ success: false, message: 'You look like a robot.' }
But when I serve the same code locally, like so:
firebase serve --only functions
and then trigger the function, I'm not detected as a robot and I get the JSON with a successful result, with the data that clicking the button is supposed to fetch.
This is so weird. I'm tempted to think there's a connection between Firebase Cloud Functions and reCAPTCHA, because both are Google services, but that doesn't seem reasonable.
That being said, what could be the reason for this? All that changes is the environment the code runs in.
Do you have any idea why this is happening, and how to solve it?
Since your function runs properly locally, it's almost certainly not the function itself.
Sites take a variety of different approaches to detecting bots, one of which is blocking traffic from known data centers like Google Cloud's. Using a residential IP proxy, like those provided by BrightData, will probably circumvent this; a minimal Puppeteer proxy sketch follows.
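For illustration, here is a hedged sketch of routing Puppeteer through an authenticated proxy. The PROXY_HOST, PROXY_USER, and PROXY_PASS environment variables are placeholders for whatever your proxy provider supplies; they are not from the original question.

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');
puppeteer.use(StealthPlugin());

async function launchWithProxy() {
  // Route all browser traffic through the proxy (placeholder host:port).
  const browser = await puppeteer.launch({
    args: [`--proxy-server=http://${process.env.PROXY_HOST}`],
  });
  const page = await browser.newPage();
  // Most residential proxy providers require per-page authentication.
  await page.authenticate({
    username: process.env.PROXY_USER,
    password: process.env.PROXY_PASS,
  });
  return { browser, page };
}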
I'm facing the same issue while using Puppeteer in Firebase Cloud Functions.
I'm using a residential IP proxy with the following set of packages: puppeteer-extra, puppeteer-extra-plugin-stealth, puppeteer-extra-plugin-anonymize-ua, and user-agents.
On localhost everything works as expected, while running Puppeteer in Firebase Cloud Functions I get a 404 response from the requested URL. So there must be some difference.

Create multiple Firebase Instances for the same project in Node.js

I have a Node.js server, inside which I want to have two firebase instances.
One instance should use the JavaScript SDK and will be used to provide authentication - login/register. The other instance should use the Admin SDK and will be used to read/write from the Realtime Database. I want to use this approach, so that I don't have to authenticate the user before each request to the Realtime DB.
I've read how we're supposed to initialize Firebase apps for multiple projects, but I'm wondering whether my issue comes from the fact that both instances are for the same project.
My issue is that I can use the JS SDK without any issue and I can login/register the user, but for some reason I can't get the Admin SDK to work.
Here's how I'm instantiating the apps:
const admin = require("firebase-admin");
const { applicationDefault } = require('firebase-admin/app');
admin.initializeApp({
  credential: applicationDefault(),
  databaseURL: 'my-database-url'
}, 'adminApp');
const firebase = require("firebase/app");
firebase.initializeApp(myConfig);
Now I can use the JS SDK without an issue, but not the Admin SDK. I've created a test endpoint to just get data from my Realtime DB:
app.get("/api/test", (req, res) => {
const uid = 'my-user-UID';
admin.database().ref(`users/${uid}`)
.once('value', (snapshot) => {
if(snapshot) {
console.log('data');
} else {
console.log('no data');
}
});
});
As for getting the data from the Realtime DB, I tried every described approach I could find: get with child and all sorts of combinations. Here's an example of another approach I used:
get(child(ref(admin.database()), `users/${uid}`)).then((snapshot) => {
  if (snapshot.exists()) {
    // retrieved data
  } else {
    // No data
  }
}).catch((error) => {
  console.error(error);
});
For the first approach I wasn't getting any response at all, as if the once callback never executed. For the second one I was getting TypeError: pathString.replace is not a function. At some point I was also getting No Firebase App '[DEFAULT]' has been created. These errors don't worry me as much, but since I saw the last one I moved my focus to the initialization of the apps, still to no avail.
I just need a direction of where my issue might be coming from.
Update:
The solution is to not pass a second argument (the app name) to either of the Firebase initializations. It looks like it's not needed when both instances reference the same project. A sketch of the corrected setup is below.
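A minimal sketch of that corrected setup, assuming the same placeholders as the question (my-database-url and myConfig); the readUser helper is only an illustration of the Admin SDK's namespaced read API:

const admin = require("firebase-admin");
const { applicationDefault } = require("firebase-admin/app");

// No second (app name) argument: each SDK gets its own default app.
admin.initializeApp({
  credential: applicationDefault(),
  databaseURL: "my-database-url"
});

const firebase = require("firebase/app");
firebase.initializeApp(myConfig);

// Admin SDK read from the Realtime Database:
async function readUser(uid) {
  const snapshot = await admin.database().ref(`users/${uid}`).once("value");
  return snapshot.exists() ? snapshot.val() : null;
}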

Listing all files in Firebase Storage gives me a 404 when trying to use getDownloadURL

I'm trying to list files under a folder in a web app like the following:
listRef.listAll().then((res) => {
  // I get the list of items here
  res.items.forEach((imgRef) => {
    // 404 error
    imgRef.getDownloadURL().then((url) => {
      console.log(url);
      mapFile(url, imgRef.metadata);
    });
  });
});
I successfully list res.items, but when I try to use getDownloadURL() I get a 404. I did notice that the item's reference contains folder/image, and when I browse the file in the Firebase Storage console the slash "/" is encoded as %2F.
My references are the following:
const storageRef = firebase.app().storage('gs://some-name').ref();
const listRef = storageRef.child(`${Id1}`);
When I save the image I use the following reference:
const imageRef = storageRef.child(`${Id1}/${this.file.name}`);
Edit:
I'm getting the default bucket back in the response, even though I'm setting the correct (non-default) storage bucket endpoint in the storage reference.
I solved it in a somewhat hacky way:
since listAll() was returning the default bucket name, I replaced the bucket name (without the gs:// prefix) with the one I needed.
imgRef.location.bucket = 'non-default-bucket-name';
imgRef.getDownloadURL().then(url => url); //<- worked
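Putting the workaround back into the original listing loop, it looks roughly like this ('non-default-bucket-name' stands in for the actual bucket name; mapFile is the same helper as above):

listRef.listAll().then((res) => {
  res.items.forEach((imgRef) => {
    // Point the reference at the intended (non-default) bucket before requesting the URL.
    imgRef.location.bucket = 'non-default-bucket-name';
    imgRef.getDownloadURL().then((url) => {
      console.log(url);
      mapFile(url, imgRef.metadata);
    });
  });
});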

I keep getting a 403 from Firebase Storage when trying to read image files

I'm having a hard time understanding the whole token part of Firebase uploads.
I simply want to upload user avatars, save their URLs to the database, and then read them on the client side.
const storageRef = firebase.storage().ref();
storageRef.child(`images/user-avatars/${user.uid}`).put(imageObj);
Then, in my cloud function, I grab the new URL like this:
exports.writeFileToDatabase = functions.storage.object().onFinalize(object => {
  const bucket = defaultStorage.bucket();
  const path = object.name as string;
  const file = bucket.file(path);
  return file
    .getSignedUrl({
      action: "read",
      expires: "03-17-2100"
    })
    .then(results => {
      const url = results[0];
      const slicedPath = path.split("/");
      return db
        .collection("venues")
        .doc(slicedPath[1])
        .set({ images: FieldValue.arrayUnion(url) }, { merge: true });
    });
});
I've enabled IAM in the Google APIs platform and have added the Cloud Functions Service Agent role to the App Engine default service account.
I feel like the exact same configuration has worked before, but now it sometimes doesn't even write the new URL, or I get a 403 when trying to read it. I can't find any explanation of, or error pointing to, what I'm doing wrong.
EDIT:
Forgot to add this piece of code: FieldValue is set at the top of the file as
const FieldValue = admin.firestore.FieldValue;
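The rest of the top-of-file setup isn't shown in the question; a plausible, hedged reconstruction (defaultStorage and db are my assumptions about how they are likely defined) would be:

const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp();

// Assumed definitions for the identifiers used in the function above.
const defaultStorage = admin.storage();
const db = admin.firestore();
const FieldValue = admin.firestore.FieldValue;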
EDIT:
This is the exact error I get: Failed to load resource: the server responded with a status of 403 ()
I got it when I tried to use the following link, which was generated automatically by the function above, as the source for an image component:
https://storage.googleapis.com/frothin-weirdos.appspot.com/images/user_avatars/yElCIVY4bAY5g5LnoOBhqN6mDNv2?GoogleAccessId=frothin-weirdos%40appspot.gserviceaccount.com&Expires=1742169600&Signature=qSqPuuY4c5xmdnpvfZh39Pw3Vyu2B%2FbGMD1rQwHDBUZTAnKwP11MaOFQt%2BTV53krkIgvJgQT0Xl3UUxkngmW9785fUri75SSPoBk0z4DKyZnEBLxgTGRE8MzmXadQ%2BHDJ3rSI8IkkoomdnANpLsPN9oySshZ1h4BfOBvAmK0hQ4Gge1glH7qhxFjVWfX3tovZoL8e2smhuCRXxDsZtJh0ihbIeZUEnX8lGic%2B9IT6y4OskS2ZlrZNjvM10hcEesoPdHsT4oCvfhCNbUcJcueRKfsWlDCd9m6qmf42WVOc7UI0nE0oEvysMutWY971GVRKTLwIXRnTLSNOr6fSvJE3Q%3D%3D

Allowing users to upload content to S3

I have an S3 bucket named BUCKET in region BUCKET_REGION. I'm trying to allow users of my web and mobile apps to upload image files to this bucket, provided that they meet certain restrictions based on Content-Type and Content-Length (namely, I only want to allow JPEGs under 3 MB). Once uploaded, the files should be publicly accessible.
Based on fairly extensive digging through AWS docs, I assume that the process should look something like this on my frontend apps:
const a = await axios.post('my-api.com/get_s3_id');
const b = await axios.put(`https://${BUCKET}.amazonaws.com/${a.id}`, {
  // ??
  headersForAuth: a.headersFromAuth,
  file: myFileFromSomewhere // i.e. HTML5 File() object
});
// now I can do things like <img src={`https://${BUCKET}.amazonaws.com/${a.id}`} />
// UNLESS the file is over 3mb or not an image/jpeg, in which case I want it to throw errors
where on my backend API I'd be doing something like
import aws from 'aws-sdk';
import uuid from 'uuid';

app.post('/get_s3_id', (req, res, next) => {
  // do some validation of request (i.e. checking user Ids)
  const s3 = new aws.S3({ region: BUCKET_REGION });
  const id = uuid.v4();
  // TODO do something with s3 to make it possible for anyone to upload pictures under 3mbs that have the s3 key === id
  res.json({ id, additionalAWSHeaders });
});
What I'm not sure about is what exact S3 methods I should be looking at.
Here are some things that don't work:
I've seen a lot of mentions of a (very old) API accessible with s3.getSignedUrl('putObject', ...). However, this doesn't seem to support reliably setting a ContentLength, at least anymore. (See https://stackoverflow.com/a/28699269/251162.)
I've also seen a closer-to-working example using an HTTP POST with form-data API that is also very old. I guess this might get it done if there are no alternatives, but I am concerned that it is no longer the "right" way to do things; additionally, it seems to do a lot of manual encrypting and not use the official Node SDK. (See https://stackoverflow.com/a/28638155/251162.)
I think what might be better for this case is POSTing directly to S3, skipping your backend server.
What you can do is define a policy that explicitly specifies what can be uploaded and where to; this policy is then signed using an AWS secret access key (with AWS Signature Version 4).
An example usage of the policy and signature is viewable in the AWS docs.
For your use case you can specify conditions like:
conditions: [
  ['content-length-range', 0, 3000000],
  ['starts-with', '$Content-Type', 'image/']
]
This will limit uploads to 3 MB and restrict Content-Type to values that begin with image/.
Additionally, you only have to generate your signature for the policy once (or whenever it changes), which means you don't need a request to your server to get a valid policy; you can just hardcode it in your JS. When/if you need to update it, just regenerate the policy and signature and then update the JS file.
edit: There isn't a method in the SDK to do this, as it's meant as a way of POSTing directly from a form on a webpage, i.e. it can work with no JavaScript.
edit 2: Full example of how to sign a policy using standard NodeJS packages:
import crypto from 'crypto';

const AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID;
const AWS_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY;
const ISO_DATE = '20190728T000000Z';
const DATE = '20161201';
const REGION = process.env.AWS_DEFAULT_REGION || 'eu-west-1';
const SERVICE = 's3';
const BUCKET = 'your_bucket';

if (!AWS_ACCESS_KEY_ID || !AWS_SECRET_ACCESS_KEY) {
  throw new Error('AWS credentials are incorrect');
}

const hmac = (key, string, encoding) => {
  return crypto.createHmac("sha256", key).update(string, "utf8").digest(encoding);
};

const policy = {
  expiration: '2022-01-01T00:00:00Z',
  conditions: [
    { bucket: BUCKET },
    ['starts-with', '$key', 'logs'],
    ['content-length-range', '0', '10485760'],
    { 'x-amz-date': ISO_DATE },
    { 'x-amz-algorithm': 'AWS4-HMAC-SHA256' },
    { 'x-amz-credential': `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request` },
    { 'acl': 'private' }
  ]
};

function aws4_sign(secret, date, region, service, string_to_sign) {
  const date_key = hmac("AWS4" + secret, date);
  const region_key = hmac(date_key, region);
  const service_key = hmac(region_key, service);
  const signing_key = hmac(service_key, "aws4_request");
  const signature = hmac(signing_key, string_to_sign, "hex");
  return signature;
}

const b64 = Buffer.from(JSON.stringify(policy)).toString('base64');
console.log(`b64 policy: \n${b64}`);

const signature = aws4_sign(AWS_SECRET_ACCESS_KEY, DATE, REGION, SERVICE, b64);
console.log(`signature: \n${signature}\n`);
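For completeness, a hedged sketch of how the signed policy would then be used in a browser-side POST to the bucket (field names follow the S3 POST policy documentation; fileInput and the endpoint style are placeholders, not part of the answer above):

const form = new FormData();
form.append('key', 'logs/example.txt'); // must satisfy the starts-with 'logs' condition
form.append('acl', 'private');
form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
form.append('x-amz-credential', `${AWS_ACCESS_KEY_ID}/${DATE}/${REGION}/${SERVICE}/aws4_request`);
form.append('x-amz-date', ISO_DATE);
form.append('policy', b64);
form.append('x-amz-signature', signature);
form.append('file', fileInput.files[0]); // the file field must come last
fetch(`https://${BUCKET}.s3.${REGION}.amazonaws.com/`, { method: 'POST', body: form });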
You need to get familiar with Amazon Cognito, especially with identity pools.
Using Amazon Cognito Sync, you can retrieve the data across client platforms, devices, and operating systems, so that if a user starts using your app on a phone and later switches to a tablet, the persisted app information is still available for that user.
Read more here: Cognito identity pools
Once you create a new identity pool, you can reference it while using the S3 JavaScript SDK, which will allow you to upload content without exposing any credentials to the client.
Example here: Uploading to S3
Please read through all of it, especially the section "Configuring the SDK".
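As a hedged sketch of that approach with the AWS SDK for JavaScript v2 (the region, identity pool ID, bucket, key, and file variable are all placeholders):

AWS.config.update({
  region: 'us-east-1', // placeholder region
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' // placeholder pool ID
  })
});

const s3 = new AWS.S3({ params: { Bucket: 'my-bucket' } }); // placeholder bucket

s3.upload(
  { Key: 'avatars/example.jpg', Body: file, ContentType: 'image/jpeg' },
  (err, data) => {
    if (err) return console.error('Upload failed:', err);
    console.log('Uploaded to', data.Location);
  }
);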
The second part of your puzzle: validations.
I would implement client-side validation (if possible) to avoid network latency before giving an error. If you choose to implement validation on S3 or in AWS Lambda, you are looking at a wait until the file reaches AWS, i.e. network latency.
This is something we have in our project, so I'll show you part of the code.
You first POST to your own server to get the credentials for the upload, and from that you return the params for the client upload to S3.
These are the params you send to the S3 service; you will need the bucket, the upload path, and the file:
let params = {
  Bucket: s3_bucket,
  Key: upload_path,
  Body: file_itself
};
This is the code I have for the actual upload to S3:
config.credentials = new AWS.Credentials(credentials.accessKeyId,
credentials.secretAccessKey, credentials.sessionToken);
let s3 = new S3(config);
return s3.upload(params, options).on("httpUploadProgress", handleProgress);
All of those credential values come from your backend, of course.
On the backend you need to generate a timed, presigned URL and send that URL to the client for accessing the S3 object. Depending on your backend technology, you can use the AWS CLI or SDKs (e.g. for Java, .NET, Ruby, or Go); please refer to the CLI docs and SDK docs.
Content size restriction is not supported in link generation directly; the link only relays the access rights that the AWS user has.
To use a policy that restricts file size on upload, you have to create a CORS policy on the bucket and use HTTP POST for the upload. Please see this link.
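A hedged sketch of generating such a timed, presigned URL on a Node backend with the AWS SDK for JavaScript v2 (bucket, key, and region are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

// Generate a presigned GET URL that expires after five minutes.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',
  Key: 'avatars/example.jpg',
  Expires: 300
});
console.log(url);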
Your server acts as a proxy, and is also responsible for authorization, validation, etc.
Some code snippet:
upload(config, file, cb) {
  const fileType = ''; // pass with the request, derive it, or leave empty
  const key = `${uuid.v4()}${fileType}`; // generate the file name
  const s3 = new AWS.S3();
  const s3Params = {
    Bucket: config.s3_bucket,
    Key: key,
    Body: file.buffer
  };
  s3.putObject(s3Params, cb);
}
and then you can send the key to the client and provide further access.
