Firebase storage downloadURL containing "v0" in the url string - javascript

After I'm done uploading image files to my Firebase Storage, I'm getting the downloadURL by using the method:
uploadTask.snapshot.ref.getDownloadURL();
And the URLs come back in the following format:
https://firebasestorage.googleapis.com/v0/b/MY_FIREBASE_PROJECT.appspot.com/o/blog-post-images%2FFILE_NAME.jpg?alt=media&token=TOKEN_VALUE
QUESTION
What is the v0 inside my URL? Is this some kind of versioning? Can I be sure that those URLs are stable and won't ever change?
NOTE:
I'm saving those URLs to Firestore so I can later access them and display the images in my app.

The v0 is likely just an API versioning scheme, but it is not documented. You should treat the download URL as an opaque value that simply provides read-only, public access to the file. It will continue to work for that purpose unless the token is revoked.
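For the Firestore use case mentioned in the note, a minimal sketch of the upload-then-persist flow could look like this; the collection name 'blogPosts', the field 'imageUrl' and the postId variable are illustrative placeholders, not part of the question:
// Wait for the upload, resolve the opaque download URL, and store it as-is.
uploadTask.then(function (snapshot) {
  return snapshot.ref.getDownloadURL();
}).then(function (url) {
  // 'blogPosts' and postId are placeholders for your own collection/doc
  return firebase.firestore().collection('blogPosts').doc(postId).update({
    imageUrl: url
  });
});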

Related

Managing two different Azure resources under the same domain name with domain routing

I am trying to deploy a React app with a simple content management function in Microsoft Azure.
When users access static content on the website, the app simply reads HTML code from a database and displays it.
Given that the HTML code itself is static, I need to host files like images as static resources in Azure Blob Storage. But since I have already assigned my custom domain name to the app, I cannot use the same domain name with Blob Storage.
How do I integrate Blob Storage in my app so that when the browser tries to access files hosted under a route such as "/assets", it looks up the path and file name in the corresponding folder in Azure Blob Storage?
For example, if the HTML code wants to access "/assets/img/1.jpg", it will get "/img/1.jpg" from my Azure Blob Storage folder?
• You can integrate Azure Blob Storage in your React app by setting the resource name in 'src/azure-storage-blob.ts' as follows:
const storageAccountName = process.env.storageresourcename || "fileuploaddemo";
• Generate a SAS token with the following settings:
Allowed services: Blob
Allowed resource types: Service, Container, Object
Allowed permissions: Read, write, delete, list, add, create
Enable deletions of version: Checked
Start and expiry date/time: Accept the start date/time and set the expiry date/time 24 hours in the future. Your SAS token is only good for 24 hours.
HTTPS only: Selected
Preferred routing tier: Basic
Signing key: key1 selected
• Set the SAS token in 'src/azure-storage-blob.ts' as follows. Don't include the leading '?' of the SAS token value when adding it to the code:
// remove ? if it is the first character of the token
const sasToken = process.env.storagesastoken || "SAS token";
• Now configure CORS on the storage account so that the app on your custom domain name can connect to the resource, and save it. You will then be able to upload the images to Azure Blob Storage by opening the app URL in the browser locally.
• Once the images are uploaded, they can be accessed and arranged in a hierarchy through Storage Explorer, and the paths of those images can likewise be mapped to Azure Blob Storage and accessed through a browser, as sketched below.
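To illustrate that mapping, a small helper could rewrite app paths like "/assets/img/1.jpg" into blob URLs. This is only a sketch: the container name "assets" is an assumption, and the account and token variables reuse the names from the snippets above.
// Map an app route such as "/assets/img/1.jpg" to its blob URL.
// Assumes a container named "assets" and a SAS token without a leading "?".
const storageAccountName = process.env.storageresourcename || "fileuploaddemo";
const sasToken = process.env.storagesastoken || "SAS token";

function assetUrl(appPath) {
  const blobPath = appPath.replace(/^\/assets\//, ""); // "/assets/img/1.jpg" -> "img/1.jpg"
  return `https://${storageAccountName}.blob.core.windows.net/assets/${blobPath}?${sasToken}`;
}

// Usage in React: <img src={assetUrl("/assets/img/1.jpg")} />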
Please refer to the links below for more information:
https://learn.microsoft.com/en-us/azure/developer/javascript/tutorial/browser-file-upload-azure-storage-blob
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-custom-domain-name?tabs=azure-portal
Thank you.

Retrieving a timestamp for an S3 upload

I want to synchronize between a browser cache (IndexedDB) and S3, and I use timestamps for that.
The tricky part is that my browser application needs to know the exact "last update" timestamp of the file in S3 so it can store it alongside the locally cached file (allowing me to detect differences on either side when the timestamps are not equal).
Currently, my best solution is:
// Upload of file
var upload = new AWS.S3.ManagedUpload({
  params: {
    // some params
  }
});
await upload.promise();

// Call of listObjectsV2
var s3Objects = await s3.listObjectsV2(params).promise();
// get "LastModified" value from listObjectsV2
I really dislike this solution, as it makes an extra call to listObjectsV2 that takes time and is billed by AWS.
Off the top of my head, I expected there would be something in the return value of the upload that I could use. But I can't find anything. What am I missing?
Looking at the documentation for the AWS SDK for JavaScript, I don't think you're missing anything at all: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3/ManagedUpload.html#promise-property
It simply does not return any date/time field after a successful upload.
(I've been searching for something like this myself, only for .NET. In the end I had to start sending metadata requests after uploading.)
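In the JavaScript SDK the equivalent metadata request would be a single headObject call right after the upload. It's still one extra request, but it targets exactly one object instead of listing a prefix. A minimal sketch, with bucket, key and body as placeholders:
var AWS = require("aws-sdk");
var s3 = new AWS.S3();

async function uploadAndGetTimestamp(bucket, key, body) {
  await new AWS.S3.ManagedUpload({
    params: { Bucket: bucket, Key: key, Body: body }
  }).promise();
  // HEAD request: returns the object's LastModified without a listing
  var head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
  return head.LastModified; // a Date object
}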
Perhaps listening to S3 events could be an alternative: https://aws.amazon.com/blogs/aws/s3-event-notification/

AWS S3: how to list object by tags [duplicate]

We have a bucket on AWS S3 that we use through the new AWS SDK API. We have uploaded lots of files and folders and applied tags to them.
How can we filter on a tag's key and value, or on only one of them? I'd like to find all the objects with key = "temp", or with key = "temp" and value = "lol".
Thanks!
I also hoped that AWS would eventually support "search files by tags", because that would open up possibilities like a photo storage with the names, descriptions, and locations stored in tags, so I wouldn't need a separate database.
But apparently AWS explicitly does not support this, and probably never will. Quoting from their storage service white paper:
Amazon S3 doesn’t suit all storage situations. [...] some storage needs for which you should consider other AWS storage options [...]
Amazon S3 doesn’t offer query capabilities to retrieve specific objects. When you use Amazon S3 you need to know the exact bucket name and key for the files you want to retrieve from the service. Amazon S3 can’t be used as a database or search engine by itself.
Instead, you can pair Amazon S3 with Amazon DynamoDB, Amazon CloudSearch, or Amazon Relational Database Service (Amazon RDS) to index and query metadata about Amazon S3 buckets and objects.
AWS suggests using DynamoDB, RDS or CloudSearch instead.
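As a rough illustration of that pairing, one could maintain a small DynamoDB table of tag-to-key mappings written at upload time. This is only a sketch; the table name "s3-object-tags" and its key schema (partition key "tag", sort key "objectKey") are assumptions:
var AWS = require("aws-sdk");
var ddb = new AWS.DynamoDB.DocumentClient();

// On upload: record the object key under its tag
function recordTag(tagKey, tagValue, objectKey) {
  return ddb.put({
    TableName: "s3-object-tags",
    Item: { tag: tagKey + "=" + tagValue, objectKey: objectKey }
  }).promise();
}

// Query: all object keys with a given tag, e.g. temp=lol
function objectsWithTag(tagKey, tagValue) {
  return ddb.query({
    TableName: "s3-object-tags",
    KeyConditionExpression: "#t = :t",
    ExpressionAttributeNames: { "#t": "tag" },
    ExpressionAttributeValues: { ":t": tagKey + "=" + tagValue }
  }).promise().then(function (res) {
    return res.Items.map(function (i) { return i.objectKey; });
  });
}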
There seems to be one way to achieve what you're looking for, although it's not ideal or particularly user-friendly.
The AWS S3 tagging documentation says that you can grant accounts permissions for objects with a given tag. If you created a new account with the right permissions then you could probably get the filtered list.
Not particularly useful on an ongoing basis, though.
AFAIK, Resource Groups don't support tags at the S3 object level, only at the bucket level.
Source: https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/ (scroll down the page to the table).
This is now possible using the AWS Resource Tagging API and S3 Select (SQL). See this post: https://aws.amazon.com/blogs/architecture/how-to-efficiently-extract-and-query-tagged-resources-using-the-aws-resource-tagging-api-and-s3-select-sql/.
However, the Resource Tagging API supports only tags on buckets for the S3 service, not on objects: New – AWS Resource Tagging API
There's no way to filter/search by tags. But you can implement this yourself using S3.
You can create a special prefix in a bucket, e.g. /tags/. Then for each actual object you add and want to assign a tag (e.g. Department=67), you add a new object in /tags/, e.g.: /tags/XXXXXXXXX_YYYYYYYYY_ZZZZZZZZZ, where
XXXXXXXXX = hash('Department')
YYYYYYYYY = hash('67')
ZZZZZZZZZ = actualObjectKey
Then when you want to get all objects that have a particular tag assigned (e.g. Department), you execute the ListObjectsV2 S3 API with prefix /tags/XXXXXXXXX_. If you want objects that have a particular tag value (e.g. Department=67), you execute the ListObjectsV2 S3 API with prefix /tags/XXXXXXXXX_YYYYYYYYY_
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
It's not that fast, but it still does the job.
The obvious downside is that you have to maintain and remove the index objects yourself. You can automate all of the above with S3 event triggers and a Lambda function, for example. A rough sketch of the idea follows.
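A minimal sketch of the index scheme, using the AWS SDK for JavaScript; the bucket name, the SHA-256 hash, and the 9-character truncation are illustrative assumptions:
var crypto = require("crypto");
var AWS = require("aws-sdk");
var s3 = new AWS.S3();
var BUCKET = "my-bucket"; // placeholder

// Stand-in for hash(): first 9 hex characters of a SHA-256 digest
function hashPart(value) {
  return crypto.createHash("sha256").update(value).digest("hex").slice(0, 9);
}

// Write an index marker object for actualObjectKey with tag key=value
async function indexTag(tagKey, tagValue, actualObjectKey) {
  var indexKey = "tags/" + hashPart(tagKey) + "_" + hashPart(tagValue) + "_" + actualObjectKey;
  await s3.putObject({ Bucket: BUCKET, Key: indexKey, Body: "" }).promise();
}

// List all object keys carrying tagKey=tagValue (e.g. Department=67);
// pagination via ContinuationToken is omitted for brevity
async function findByTag(tagKey, tagValue) {
  var prefix = "tags/" + hashPart(tagKey) + "_" + hashPart(tagValue) + "_";
  var res = await s3.listObjectsV2({ Bucket: BUCKET, Prefix: prefix }).promise();
  return (res.Contents || []).map(function (o) { return o.Key.slice(prefix.length); });
}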
You should be able to query the tags and values that you added using the Resource Groups query resource:
https://${region}.console.aws.amazon.com/resource-groups/resources
There are several ways to get a filtered list of S3 buckets by tag. This is what I used in my code (note that it filters buckets, not individual objects):
import boto3
from botocore.exceptions import ClientError

def get_tag_value(tags, key):
    for tag in tags:
        if tag["Key"] == key:
            return tag["Value"]
    return ""

def filter_s3_by_tag_value(tag_key, tag_value):
    s3 = boto3.client('s3')
    response = s3.list_buckets()
    s3_list = []
    for bucket in response["Buckets"]:
        try:
            response_tags = s3.get_bucket_tagging(Bucket=bucket["Name"])
            if get_tag_value(response_tags["TagSet"], tag_key) == tag_value:
                s3_list.append(bucket["Name"])
        except ClientError as e:
            print(e.response["Error"]["Code"])
    return s3_list

def filter_s3_by_tag_key(tag_key):
    s3 = boto3.client('s3')
    response = s3.list_buckets()
    s3_list = []
    for bucket in response["Buckets"]:
        try:
            response_tags = s3.get_bucket_tagging(Bucket=bucket["Name"])
            if get_tag_value(response_tags["TagSet"], tag_key) != "":
                s3_list.append(bucket["Name"])
        except ClientError as e:
            print(e.response["Error"]["Code"])
    return s3_list

# Example usage with the tag from the question
tag_key = "temp"
tag_value = "lol"
print(filter_s3_by_tag_value(tag_key, tag_value))
print(filter_s3_by_tag_key(tag_key))
AWS now supports tagging of S3 objects.
There are APIs to add and remove tags.
Amazon S3 Select and Amazon Athena can be used to search for S3 resources with tags.
Currently the maximum number of tags per resource is 10 (thanks to Kyle Bridenstine for pointing out the correct number).

Is Firebase storage URL static?

I have a list of contacts and each of those has a profile photo which is stored in Firebase Storage. An official way of getting the images would be to fetch the URL using the Firebase Storage SDK and set it as the src of an img element.
firebaseApp.storage().ref("profilePhotos/" + officeId + ".jpg").getDownloadURL().then(function (url) {
  this.photoUrl = url;
}.bind(this)).catch(function (error) {
  console.log("Photo error"); // TODO: handler
});
This is quite cumbersome when I have to load multiple files (as in a contact list). Is the file URL received above static? Can I store it in the database with the profile information and use it directly?
Thanks
A very common pattern is to store the download URL of a file in Realtime Database for easy use later on. Download URLs should work until you choose to revoke them.
In my experience, the download URLs are static. Also, if you look at the entry in the database, below the download URL you can see an option to recreate the download URL.
Storing the download URL in the Realtime Database is a great way to keep track of those download URLs. I would use the push method to hold them in a folder in the database.
The way .push is used in the Realtime Database docs example creates a pattern of storage and retrieval that should solve your problem:
.push for making an entry, chained with .key for retrieval later:
var newPostKey = firebase.database().ref().child('posts').push().key;
.once for reading the data at the reference you want, chained with a .then:
firebase.database().ref('/users/' + userId).once('value').then(...)
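Putting those together, a minimal sketch of the store-and-read pattern; the paths and the 'photos'/'url' names are illustrative placeholders:
// Store a photo's download URL under a pushed key
function savePhotoUrl(userId, url) {
  var ref = firebase.database().ref('users/' + userId + '/photos').push();
  return ref.set({ url: url }).then(function () {
    return ref.key; // keep the key if you need to update or delete later
  });
}

// Read all stored photo URLs back for display
function loadPhotoUrls(userId) {
  return firebase.database().ref('users/' + userId + '/photos').once('value')
    .then(function (snapshot) {
      var urls = [];
      snapshot.forEach(function (child) {
        urls.push(child.val().url);
      });
      return urls;
    });
}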

Google OAuth WildCard Domains

I am using Google auth but keep getting an origin mismatch. The project I am working on has subdomains that are generated by the user. So, for example, there can be:
john.example.com
henry.example.com
larry.example.com
In my app settings I have one of my origins set to http://*.example.com, but I get an origin mismatch. Is there a way to solve this? BTW, my code looks like this:
gapi.auth.authorize({
  client_id: 'xxxxx.apps.googleusercontent.com',
  scope: ['https://www.googleapis.com/auth/plus.me',
    'https://www.googleapis.com/auth/userinfo.email',
    'https://www.googleapis.com/auth/userinfo.profile'],
  state: 'http://henry.example.com',
  immediate: false
}, function(result) {
  if (result != null) {
    gapi.client.load('oauth2', 'v2', function() {
      console.log(gapi.client);
      gapi.client.oauth2.userinfo.get().execute(function(resp) {
        console.log(resp);
      });
    });
  }
});
Hooray for useful yet unnecessary workarounds (thanks for complicating yourself into a corner, Google)...
I was using Google Drive with the JavaScript API to open the file picker, retrieve the file info/URL, and then download it to my server using curl. Once I finally realized that all my wildcard domains would have to be registered, I about had a stroke.
What I do now is the following (this is my use case; adapt it to yours as needed):
1. On the page you are on, create an onclick event that opens a new window on a specific registered domain (https://googledrive.example.com/oauth/index.php?unique_token={some unique token}).
2. In the new popup I did all my Google Drive authentication, had a button to click which opened the file picker, and then retrieved at least the metadata I needed from the file. Then I stored the token (primary key), access_token, downloadurl and filename in my database (MySQL).
3. Back on step one's page, I created a setTimeout() loop that ran an AJAX call every second with that same unique_token to check whether it had been entered in the database. Once it finds it, I kill the loop and retrieve the contents to do with as I will (in this case I upload them through a separate upload script that fetches the file with curl).
This is obviously not the best method for handling this, but it's better than entering each and every subdomain into Google's Cloud Console. I bet you could probably do this with Google's server-side OAuth libraries, but my use case was a little complicated and I was cranky because I'd spent the past 4 days frustrated by a silly little integration with Google.
Wildcard origins are not supported, and the same goes for redirect URIs.
The fact that you can register a wildcard origin is a bug.
You can use the state parameter, but be very careful with it: make sure you don't create an open redirector (an endpoint that can redirect to any arbitrary URL). A sketch of that approach follows.
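To make that concrete, here is a hedged sketch of the state-parameter approach: authenticate on one fixed, registered origin, carry the user's subdomain in state, and validate it against a strict pattern before redirecting. Express and all route and host names here are assumptions for illustration:
var express = require("express");
var app = express();

// Accept only bare subdomain labels, never full URLs, so the endpoint
// cannot become an open redirector.
var SUBDOMAIN_PATTERN = /^[a-z0-9-]+$/;

app.get("/oauth/callback", function (req, res) {
  // ... exchange req.query.code for tokens here ...
  var sub = String(req.query.state || "");
  if (!SUBDOMAIN_PATTERN.test(sub)) {
    return res.status(400).send("Invalid state");
  }
  // Redirect back to the user's subdomain only
  res.redirect("https://" + sub + ".example.com/");
});

app.listen(3000);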
