Count of JS S3 Object Access - javascript

In my app, my customers can create small widgets with text fields and text. They can customize the look and feel through JS and CSS. I upload the JS and CSS to my S3 bucket and use CloudFront as the CDN.
Once the widget is created, they can embed the widget on their website using embed code.
In the embed code, I use a 1x1 pixel image that sends a request to a PHP endpoint so I can increment the visit counter:
public function track(Request $request) {
    // increment the stored visit counter here
    $pixel = base64_decode('R0lGODlhAQABAJAAAP8AAAAAACH5BAUQAAAALAAAAAABAAEAAAICBAEAOw==');
    return response($pixel, 200)->header('Content-Type', 'image/gif');
}
My server is getting overloaded because of the visit counter. I now want to track the number of visits to each customer's embedded widget using S3 and CloudFront access counts instead.
I searched and found "Getting the download count of a specific S3 object", but that is about downloads.
How can I get a count of how many times an S3 object is accessed?

Use CloudTrail and parse the logs. See the list of S3 actions trackable by CloudTrail.
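If you go the CloudTrail route, here is a minimal sketch of counting GetObject calls from the delivered log files. It assumes S3 data events are enabled for the asset bucket and that CloudTrail delivers its gzipped JSON logs to a logging bucket; the bucket names, prefix and key below are placeholders, and pagination is omitted:

const zlib = require('zlib');
const { S3Client, ListObjectsV2Command, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function countGetObject(logBucket, prefix, targetKey) {
  let count = 0;
  // each CloudTrail log file is a gzipped JSON document with a Records array
  const list = await s3.send(new ListObjectsV2Command({ Bucket: logBucket, Prefix: prefix }));
  for (const obj of list.Contents || []) {
    const res = await s3.send(new GetObjectCommand({ Bucket: logBucket, Key: obj.Key }));
    const json = zlib.gunzipSync(Buffer.from(await res.Body.transformToByteArray())).toString();
    // count only reads of the widget asset we care about
    count += JSON.parse(json).Records.filter(r =>
      r.eventName === 'GetObject' &&
      r.requestParameters && r.requestParameters.key === targetKey
    ).length;
  }
  return count;
}

countGetObject('my-cloudtrail-logs', 'AWSLogs/', 'widgets/widget.js').then(console.log);

One caveat: a CloudFront distribution in front of the bucket will serve repeat requests from its cache, so those hits never reach S3 as GetObject events; CloudFront's own access logs would be needed for the full picture.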

You can use a URL shortener service (like goo.gl or bit.ly) to redirect to your file; these track the number of clicks, views, etc., and it won't overload your server.
Hope it helps.

Related

How to generate an AWS S3 presigned URL from PHP and then upload multiple files to that URL from JavaScript?

I want to upload multiple files from the browser with axios into an S3 bucket. I am using the aws-sdk-php library in Laravel. My intention is that when I want to upload one or more files, I will send a GET request to the backend (Laravel) for a presigned URL. After receiving the URL, I will make a PUT/POST request to that URL with all the files.
I have read the aws-sdk-php docs and found some problems with my intention.
I have seen that I need to give an Object Key when I want to generate a presigned URL. In the docs and other articles they use file_name as the Key. But for my purpose I cannot send multiple file_names, since that doesn't make sense.
Then I thought of generating a UUID and using it as the Key. But then how would I access my files individually later? I didn't find any reference to that part.
Can anyone help me with this problem?
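A minimal sketch of the flow described above, from the browser side (assuming axios is available in the page). The /presign and /files endpoints and the response shape are assumptions; the idea is that the backend generates the UUID key, and you save that key next to your own file record so you can locate the object later:

async function uploadFiles(files) {
  for (const file of files) {
    // one presigned URL (and one UUID key) per file
    const { data } = await axios.get('/presign', { params: { contentType: file.type } });
    // assumed response shape: { url: '<presigned PUT url>', key: '<uuid>' }
    await axios.put(data.url, file, { headers: { 'Content-Type': file.type } });
    // record which key belongs to which logical file name, for later lookup
    await axios.post('/files', { key: data.key, name: file.name });
  }
}

A presigned URL is tied to a single object key, so uploading several files through one URL isn't possible; requesting one URL per file keeps the keys unique, while the original file names live only in your database.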

How can I check whether an S3 file was uploaded or not using JavaScript

I'm implementing a direct PDF file upload from the client machine to Amazon S3 via REST API using only the Go language. All works fine, but one thing is worrying me...
Here are the steps:
1) User clicks the PDF button
2) A new browser tab opens with an HTML page (which says "generating your report")
3) In the background, the PDF file is uploaded to S3, and the API returns the S3 URL to the client
Problem
How can I check whether the URL is live yet? If it returns a 404, don't redirect; wait another N seconds and check again. Once it returns a 200, redirect to the S3 URL.
How can I achieve this in JavaScript?
AWS S3 ensures GET-after-PUT consistency for new objects. From https://aws.amazon.com/s3/faqs/:
"Q: What data consistency model does Amazon S3 employ?
Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES."
This ensures that once the upload is done, your object will be reachable. Now, with JS you can issue an Ajax request only if you're on the same domain or you enable CORS on your S3 bucket. This is explained here: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html, and it will allow you to check that your object is there on S3.
Otherwise, you would need a server-side component to check whether the object is uploaded, and call that resource from JS on the same domain.
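A minimal sketch of that check from the browser, assuming CORS is enabled on the bucket; the URL and retry interval are placeholders:

function waitForObject(url, intervalMs) {
  fetch(url, { method: 'HEAD' })
    .then(function (res) {
      if (res.ok) {
        // 200: the object is reachable, safe to redirect
        window.location.href = url;
      } else {
        // 404 (or similar): not there yet, try again shortly
        setTimeout(function () { waitForObject(url, intervalMs); }, intervalMs);
      }
    })
    .catch(function () {
      setTimeout(function () { waitForObject(url, intervalMs); }, intervalMs);
    });
}

waitForObject('https://s3.amazonaws.com/my-bucket/report.pdf', 3000);

HEAD avoids downloading the PDF itself; the bucket's CORS rules must allow the HEAD method for this to work cross-origin.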

Understanding Firebase Storage tokens

I'm trying to understand how tokens work in Firebase Storage.
Whenever my web app uploads an image to Firebase Storage, it adds a token to its public URL. The problem is that whenever you upload that same image file from another part of the web app, you don't seem to get another file; instead you get a different token for the URL of the file that was already uploaded, which causes a 403 error on the previously registered image display.
Is there a way to solve this?
Example:
var uploadTask = storageRef.put(file); // file is the picture.jpg File object
uploadTask.snapshot.downloadURL
// returns something like https://firebasestorage.googleapis.com/v0/b/<your-app>/o/picture.jpg?alt=media&token=09cb2927-4706-4e36-95ae-2515c68b0d6e
That url is then displayed somewhere inside an img src.
<img src="https://firebasestorage.googleapis.com/v0/b/<your-app>/o/picture.jpg?alt=media&token=09cb2927-4706-4e36-95ae-2515c68b0d6e">
If the user repeats the process and uploads the same picture.jpg in another section of the app, instead of getting a brand-new copy in Firebase Storage, the file is overwritten with a URL ending in a new token; say 12345.
So:
<img src="https://...picture.jpg?alt=media&token=12345"> // New upload renders fine
<img src="https://...picture.jpg?alt=media&token=09cb2927-4706..."> // But old upload breaks because of wrong url
Tokens are unique for a particular version of an upload. If you overwrite the file with new content, then a new token will be generated, with a new unguessable URL.
So in other words, tokens are unique for a particular blob -- they are not unique per storage location. We did this as an added security measure to ensure that developers and end users did not accidentally expose data they did not intend to.
You can, however, translate the storage location ("gs://mybucket/myfile.png") into a download URL using our JS SDK. That way, you can pass around the gs URI if you wish and translate it to a full URL once you want to place it into an image.
See: https://firebase.google.com/docs/reference/js/firebase.storage.Reference.html#getDownloadURL
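For example, with the namespaced Firebase JS SDK, the gs:// location from above translates roughly like so:

firebase.storage().refFromURL('gs://mybucket/myfile.png').getDownloadURL()
  .then(function (url) {
    // url includes a valid token for the current version of the blob
    document.querySelector('img').src = url;
  });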
For public file upload: if you upload files in Firebase Functions, you'll need to call makePublic() on the file reference in order to make it accessible without a valid token.
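A minimal sketch of that, using the Admin SDK inside a Cloud Function; the file path is a placeholder:

const admin = require('firebase-admin');
admin.initializeApp();

// make the uploaded file publicly readable, no token required
admin.storage().bucket().file('uploads/picture.jpg').makePublic()
  .then(() => console.log('file is now public'));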

Upload files asynchronously then save data about it

I am building a way for users to upload tracks with information about that track but I would like to do this asynchronously much like YouTube does.
At the moment there is an API endpoint of tracks that accepts a POST request with the uploaded file and all the meta data. It processes the track, validates everything and will then save the path to the track and all of its meta data in the database. This works perfectly but I am having trouble thinking of ways to do this asynchronously.
The user flow will be:
1) User selects a track and it starts uploading
2) A form to fill in meta data shows and user fills it in
3) Track is uploaded with its metadata to the endpoint
The problem is that the metadata form and the file upload are now two separate entities, and the file can finish uploading before the metadata is saved and vice-versa. Ideally, to overcome this, both the track and the metadata would be saved in the browser, as a cookie or something, until both are completed. At that point both would be sent to the endpoint and no changes would be required at the back end. As far as I am aware there is no way of saving files client-side like this, apart from the Filesystem API, which is pretty much deprecated.
If anyone has any good suggestions about how to do this it would be much appreciated. In a perfect world I would like there to be no changes to the back end at all but little changes are probably going to be required. Preferably no database alterations though.
Oh, by the way, I'm using Laravel and Ember.js, just in case anyone knows of any packages already doing this.
I thought about this a lot a few months ago.
The closest solution I managed to put together is to upload the file and store its filename, size, upload time (this is crucial) and other attributes in the DB (as usual). Additionally, I added a temporary column (really a flag) which is initially set to TRUE and is only negated after you send the metadata.
Separately, I set up a cron job (I used Symfony2, but it's all the same in Laravel) that runs every 15-30 minutes and deletes the files (and corresponding database records) that have temporary = TRUE and have exceeded the time window. In my case the window was 15 minutes, but you could make it coarser (every hour or so).
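A minimal sketch of that cleanup job, here as a Node script with node-cron rather than the Symfony/Laravel scheduler described above; the tracks table, its columns, and the 15-minute window are all assumptions:

const cron = require('node-cron');
const fs = require('fs');
const { Pool } = require('pg');

const db = new Pool(); // connection settings come from the PG* environment variables

// every 15 minutes, remove uploads that never received their metadata
cron.schedule('*/15 * * * *', async () => {
  const stale = await db.query(
    "SELECT id, path FROM tracks WHERE temporary = TRUE AND uploaded_at < NOW() - INTERVAL '15 minutes'"
  );
  for (const row of stale.rows) {
    fs.unlinkSync(row.path); // delete the orphaned file from disk
    await db.query('DELETE FROM tracks WHERE id = $1', [row.id]);
  }
});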
Hope this helps a bit :)

How to get Amazon S3 Response Headers using Dropzone JS?

I am developing a new website where users can upload files to an Amazon S3 bucket. After evaluating different upload libraries for jQuery I finally chose Dropzone JS.
I was able to integrate Dropzone into my application in order to upload files directly to an Amazon S3 bucket. Everything is working fine with the upload.
However, I am having trouble reading the response from Amazon using jQuery. In particular, I'd like to get the Location header that comes in the response from Amazon. This Location header has the information I need to process the uploaded file, but I am unable to get it using Dropzone. Can anyone advise how to get the XHR response headers? Checking the code, I don't think this is possible; it seems we can only get the response text but not the headers.
See "How to read data from a response header in jquery/javascript" for obtaining the response info.
Assuming you are using the AWS POST operation http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
I suspect that the URI it returns is the one with Amazon's domain: https://s3.amazonaws.com/Bucket/Object
If you are using a "web" bucket and want to use your custom domain, you will have to figure that out yourself. You already have the bucket name, since you provided it in the call.
Another wrinkle could be the permissions of the file after upload. Be sure to set a policy on the paths for the uploads appropriately.
According to the creator of Dropzone, the XHR object is stored in the file itself as file.xhr. So if you want to access its parameters, you would do console.log(file.xhr.<property you want to access>).
I suggest you console.log(file.xhr) to see its contents first. That will give you an idea of the values that are available.
However, the response headers are "unsafe" and cannot be viewed unless you add a CORS policy to your bucket that marks them as safe.
So if you want to access the Location header for example, you would need to add
<ExposeHeader>location</ExposeHeader>
to your CORS policy.
Then you can access it like so:
console.log(file.xhr.getResponseHeader("Location"));
Sorry to resurrect an old thread
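Putting the pieces together, a minimal sketch of a Dropzone success handler reading that header once the ExposeHeader rule is in place; the element id and upload URL are placeholders, and the usual S3 POST form fields (key, policy, signature) are omitted:

var dz = new Dropzone('#uploader', { url: 'https://my-bucket.s3.amazonaws.com/' });

dz.on('success', function (file) {
  // readable only because the bucket's CORS policy exposes Location
  var location = file.xhr.getResponseHeader('Location');
  console.log('Uploaded to:', location);
});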
