Is Firebase storage URL static? - javascript

I have a list of contacts and each of those has a profile photo which is stored in Firebase storage. An official way of getting the images would be to fetch the URL using the Firebase storage SDK and set it as src in img element.
firebaseApp.storage().ref("profilePhotos/" + officeId + ".jpg")
  .getDownloadURL()
  .then(function (url) {
    this.photoUrl = url;
  }.bind(this))
  .catch(function (error) {
    console.log("Photo error"); // TODO: handle the error
  });
This is quite cumbersome when I have to load multiple files (as in contact list). Is the file URL received above static? Can I store it in the database in the profile information and use it directly?
Thanks

A very common pattern is to store the download URL of a file in Realtime Database for easy use later on. Download URLs should work until you choose to revoke them.
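If you do fetch the URLs at request time, the per-file calls from the question can at least be batched. A minimal sketch, where the injected getUrl parameter stands in for firebaseApp.storage().ref(path).getDownloadURL() so nothing here depends on the SDK:

```javascript
// Resolve download URLs for many contacts in parallel.
// `getUrl` is a stand-in (assumption) for the Firebase SDK call
// path => firebaseApp.storage().ref(path).getDownloadURL().
function loadPhotoUrls(officeIds, getUrl) {
  return Promise.all(officeIds.map(function (id) {
    return getUrl("profilePhotos/" + id + ".jpg");
  }));
}
```

With the real SDK you would pass path => firebaseApp.storage().ref(path).getDownloadURL() as getUrl; Promise.all keeps the lookups concurrent instead of sequential.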

In my experience, the download URLs are static. Also, if you look at the entry in the database, below the download URL you can see an option to recreate it.
Storing the download URL in the Realtime Database is a great way to keep track of those URLs. I would use the push method to store each one under its own key in the database.
The way .push is used in the Realtime Database docs example creates a storage-and-retrieval pattern that should solve your problem.
.push for making an entry, chained with .key for retrieval later:
var newPostKey = firebase.database().ref().child('posts').push().key;
.once for reading the data at the reference you want with a .then
firebase.database().ref('/users/' + userId).once('value').then(...)
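Putting the two snippets above together, the write can be sketched as a plain update payload. The photos node name and the field names are illustrative assumptions, and pushKey stands in for firebase.database().ref().child('photos').push().key:

```javascript
// Build the multi-path update object that stores a download URL
// under a fresh push key (node and field names are assumptions).
function photoEntry(pushKey, officeId, url) {
  var update = {};
  update["photos/" + pushKey] = { officeId: officeId, photoUrl: url };
  return update; // pass to firebase.database().ref().update(update)
}
```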

Related

AWS fetching multiple images

I have the following question:
In my first scenario I have an S3 bucket. I can use the Storage API from the Amplify SDK
const result = await Storage.put("profile.png", profileImage, { level: "protected" });
and later I can get it back with
const result = await Storage.get("profile.png", { level: "protected" });
Everything works fine. Everybody can read it, but only I can update/delete it.
But now here is my question...
In my application a user can be the admin of a group. He can see all group members.
What if he fetches the list of all users? Let's say he fetches 10 users. Does this mean I need to make one request per image, 10 in total? It also means I need to store the other users' IDs somewhere.
Is this the correct way?
According to the docs Storage.get returns a pre-signed URL. A pre-signed URL is for a single object so yes, for 10 users you'd need to make 10 get requests.
However, there may be alternatives depending on your app requirements. For example, a common approach for S3 is to use a key prefix to group items. You could prepend the group-name/group-id for all profile pictures of people within that group.
S3 allows listing keys by a prefix, which is what the Storage.list call does. For example, you could use a format like s3://my-bucket/{group-id}/{user-id}/profile.png; then, for the admin user, you can get all items in the group with a call like const result = await Storage.list('{group-id}/');
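The key layout above can be sketched without touching S3 at all. The key format follows the suggested {group-id}/{user-id}/profile.png, and the filter mimics what Storage.list / ListObjectsV2 would return for a prefix:

```javascript
// Build a group-scoped key for a profile picture (the format is an
// assumption following the suggestion above).
function profileKey(groupId, userId) {
  return groupId + "/" + userId + "/profile.png";
}

// Client-side stand-in for a ListObjectsV2 / Storage.list prefix query.
function listByPrefix(keys, prefix) {
  return keys.filter(function (k) { return k.indexOf(prefix) === 0; });
}
```

With this layout the admin needs a single list call per group instead of one get call per member, at the cost of encoding group membership into the object key.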

Firebase storage downloadURL containing "v0" in the url string

After I'm done uploading image files to my Firebase Storage, I'm getting the downloadURL by using the method:
uploadTask.snapshot.ref.getDownloadURL();
And the URL's are coming back with the following format:
https://firebasestorage.googleapis.com/v0/b/MY_FIREBASE_PROJECT.appspot.com/o/blog-post-images%2FFILE_NAME.jpg?alt=media&token=TOKEN_VALUE
QUESTION
What is the v0 inside of my URL? Is this some kind of versioning? Can I be sure that those URLs are stable and won't change ever?
NOTE:
I'm saving those URLs to my Firestore to later access them and display the images through my App.
The v0 is likely just an API versioning scheme, but it is not documented. You should treat the download URL as an opaque value that simply provides read-only, public access to the file. It will continue to work for that purpose unless the token is revoked.
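Purely for illustration, here is how the pieces of such a URL break down. The layout is an observation about the current format, not a documented contract, so don't rely on it in production code:

```javascript
// Split a Firebase Storage download URL into its visible parts.
// The layout (/v0/b/<bucket>/o/<url-encoded path>?token=...) is
// undocumented; treat real URLs as opaque.
function parseDownloadUrl(downloadUrl) {
  var u = new URL(downloadUrl);
  var parts = u.pathname.split("/"); // ["", "v0", "b", bucket, "o", encodedPath]
  return {
    apiVersion: parts[1],
    bucket: parts[3],
    objectPath: decodeURIComponent(parts[5]),
    token: u.searchParams.get("token")
  };
}
```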

HTTP Request - Delete JSON file key in Firebase with Axios

I have a Firebase database set up with an orders.json endpoint. I can already post an order to it from a JavaScript project, and Firebase assigns each one a unique key that looks like "-LKx1RmbvyruM8-5S2mo". I would like to delete a single order based on that unique key via an HTTP request, without deleting the entire orders node. I'm also using authentication, so my request looks like this:
axios.delete('https://myproject-37b7d.firebaseio.com/orders.json?auth=mytoken')
This of course deletes the entire orders node, which I do not want. Would I put the unique key in the URL query params somehow? Or in the axios config? I figured out how to console.log a single order in a GET request via .get().then(res => console.log(res.data["-LKx1RmbvyruM8-5S2mo"]))
I tried reading through Firebase's API docs and couldn't find many good examples. Any help is much appreciated.
Because the Firebase database is a simple JSON-like object, and a single order is an object with its own path, I simply make a delete request against that path:
removeOrderHandler = (orderId) => {
  axios.delete("/orders/" + orderId + ".json?auth=" + this.props.token);
}
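The point is that the push key belongs in the path, not the query string. A tiny helper makes that explicit (the base URL and token handling mirror the question; the helper name is illustrative):

```javascript
// Build the REST endpoint for one order: the push key goes into the path,
// and only the auth token stays in the query string.
function orderUrl(baseUrl, orderId, token) {
  return baseUrl + "/orders/" + orderId + ".json?auth=" + token;
}
```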

AWS S3: how to list object by tags [duplicate]

We have a bucket on AWS S3 that we use with the new AWS SDK API. We uploaded lots of files and folders and tagged them.
How can we filter on a key/value tag pair, or on the key alone? I'd like to find all the objects with key = "temp", or with key = "temp" and value = "lol".
Thanks!
I also hoped that AWS will eventually support "search files by tags" because that would open up possibilities like e.g. having a photo storage with the names, descriptions, location stored in tags so I wouldn't need a separate database.
But apparently AWS explicitly does not support this, and probably never will. Quoting from their storage service white paper:
Amazon S3 doesn’t suit all storage situations. [...] some storage needs for which you should consider other AWS storage options [...]
Amazon S3 doesn’t offer query capabilities to retrieve specific objects. When you use Amazon S3 you need to know the exact bucket name and key for the files you want to retrieve from the service. Amazon S3 can’t be used as a database or search engine by itself.
Instead, you can pair Amazon S3 with Amazon DynamoDB, Amazon CloudSearch, or Amazon Relational Database Service (Amazon RDS) to index and query metadata about Amazon S3 buckets and objects.
AWS suggests using DynamoDB, RDS or CloudSearch instead.
There seems to be one way to achieve what you're looking for, although it's not ideal, or particularly user-friendly.
The AWS S3 tagging documentation says that you can grant accounts permissions for objects with a given tag. If you created a new account with the right permissions then you could probably get the filtered list.
Not particularly useful on an ongoing basis, though.
AFAIK, Resource Groups don't support tags on an S3 object level, only on a bucket level.
Source: https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/ (scroll down the page to the table).
This is now possible using AWS Resource Tagging API and S3 Select (SQL). See this post: https://aws.amazon.com/blogs/architecture/how-to-efficiently-extract-and-query-tagged-resources-using-the-aws-resource-tagging-api-and-s3-select-sql/.
However, the Resource Tagging API supports only tags on buckets for the S3 service, not on objects: New – AWS Resource Tagging API
There's no way to filter/search by tags. But you can implement this yourself using S3.
You can create a special prefix in a bucket, e.g. /tags/. Then for each actual object you add and want to assign a tag (e.g. Department=67), you add a new object in /tags/, e.g: /tags/XXXXXXXXX_YYYYYYYYY_ZZZZZZZZZ, where
XXXXXXXXX = hash('Department')
YYYYYYYYY = hash('67')
ZZZZZZZZZ = actualObjectKey
Then when you want to get all objects that have a particular tag assigned (e.g. Department), you have to execute the ListObjectsV2 S3 API for prefix /tags/XXXXXXXXX_. If you want objects that have particular tag value (e.g. Department=67), you have to execute the ListObjectsV2 S3 API for prefix /tags/XXXXXXXXX_YYYYYYYYY_
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
It's not that fast but still does the job.
The obvious downside is that you have to maintain the tag entries yourself. For example, you can automate all of the above with S3 triggers and a Lambda function.
You should be able to query tags and values that you added
using resource-groups/query resource:
https://${region}.console.aws.amazon.com/resource-groups/resources
There are many ways to get a filtered list of S3 buckets by tag. I used this in my code:
import boto3
from botocore.exceptions import ClientError

def get_tag_value(tags, key):
    for tag in tags:
        if tag["Key"] == key:
            return tag["Value"]
    return ""

def filter_s3_by_tag_value(tag_key, tag_value):
    s3 = boto3.client("s3")
    response = s3.list_buckets()
    s3_list = []
    for bucket in response["Buckets"]:
        try:
            response_tags = s3.get_bucket_tagging(Bucket=bucket["Name"])
            if get_tag_value(response_tags["TagSet"], tag_key) == tag_value:
                s3_list.append(bucket["Name"])
        except ClientError as e:
            print(e.response["Error"]["Code"])
    return s3_list

def filter_s3_by_tag_key(tag_key):
    s3 = boto3.client("s3")
    response = s3.list_buckets()
    s3_list = []
    for bucket in response["Buckets"]:
        try:
            response_tags = s3.get_bucket_tagging(Bucket=bucket["Name"])
            if get_tag_value(response_tags["TagSet"], tag_key) != "":
                s3_list.append(bucket["Name"])
        except ClientError as e:
            print(e.response["Error"]["Code"])
    return s3_list

print(filter_s3_by_tag_value(tag_key, tag_value))
print(filter_s3_by_tag_key(tag_key))
AWS now supports tagging of S3 objects.
There are APIs to add and remove tags.
Amazon S3 Select and Amazon Athena can be used to search for S3 resources with tags.
Currently the max number of tags per resource is 10 (thanks Kyle Bridenstine for pointing out the correct number).

How to upload using ng-file-upload with AWS Cognito?

I have AWS Cognito set up for my Angular application, and I am trying to hook s3.putObject somehow into the ng-file-upload Upload service.
I want to use ng-file-upload's Upload.upload() method, but I don't know how to use the existing authenticated connection with Cognito. The idea is to use ng-file-upload's drag and drop and then the upload will use the existing Cognito connection to upload to s3. How can I do this? The reason why I want to use ng-file-upload's Upload.upload() method is to help retain some of the functionality like progress bars (please correct me if this is incorrect).
I assume you are trying to upload to an S3 bucket directly from your client in a single-page app. If so, you can start by pre-signing the S3 URL and then use it in ng-file-upload. A pre-signed URL lets you give one-off access to users who may not have permission to execute the operation themselves. Pre-signing generates a valid URL signed with your credentials that any user can use. By default, URLs generated by the SDK expire after 15 minutes.
To generate a simple pre-signed URL that allows any user to put object in a bucket you own, you can use the following call to getSignedUrl():
//Initialize the SDK with Cognito Credentials
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey'};
var url = s3.getSignedUrl('putObject', params);
console.log("Use this url in ng-file-upload", url);
For more information on getting a pre-signed URL, refer to http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html#Amazon_S3__Getting_a_pre-signed_URL_for_a_PUT_operation_with_a_specific_payload and consult the ng-file-upload documentation.
