Download files from authenticated API with JavaScript

I have found several threads where the same question has been asked, but I suspect that the top answers in most of them are outdated.
My problem
I have a frontend JavaScript app communicating with an OAuth-authenticated API. This API contains files I want my users to be able to download. Because the API requires authentication, I cannot show the user a regular link to initiate the download.
Instead, I have to send an XHR request to initiate the download (so I can add the necessary authentication header).
In my case, the files will usually be pretty large (>1GB), so keeping them in memory is not a solution.
Reading this article, I'm wondering if it might be possible to stream the file from the API to the filesystem through the JavaScript File API. Does anyone have a suggestion on how I might make this work?
Isn't this a pretty common problem in 2016?

It is somewhat hack-ish, but I've used it before and it works wonders.
From Downloading file from ajax result using blob
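A minimal sketch of that blob approach, with a hypothetical endpoint and token; note that the whole response is buffered in memory, which is exactly why it's a poor fit for the >1GB files in the question:

```javascript
// Fetch the file via XHR with the auth header, then hand the resulting
// blob to the browser through a temporary object URL. URL and token are
// placeholders; the whole file sits in memory until the download starts.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/files/123', true);
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.responseType = 'blob';
xhr.onload = function () {
  if (xhr.status !== 200) return;
  var url = URL.createObjectURL(xhr.response);
  var a = document.createElement('a');
  a.href = url;
  a.download = 'file.bin'; // hypothetical file name
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(url);
};
xhr.send();
```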

Related

How to download Image from browser & upload to Amazon S3

How can I let the client upload an image in the browser, and then upload it from my server to Amazon S3? I have been looking around a lot and have found no resources explaining how to do this.
Are there any tutorials that I could follow?
Are there any libraries that I should use for this?
I am using AngularJS on the frontend and Node.js on the backend.
In short, look for two different tutorials. One for uploading from a client to a server, one for uploading from a server to S3.
StackOverflow discourages linking to specific tutorials, but there are lots of them out there, so it shouldn't be too tricky to track down.
For the client-to-server, you'll want to do a basic HTML form upload up to the server, then snag the data. You can temporarily write it to your file system (if you're on Linux, the /tmp directory is a good place to stash it).
After that, just upload from your server to S3. Amazon itself has some good documentation on that. The s3 package for Node also has good examples: https://www.npmjs.com/package/s3
It's also possible to go straight from the browser to S3, which may be better depending on your use case. Check this out: http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album.html
You're going to need the AWS SDK for Node. They also have a pretty comprehensive developer guide. You may have to read up on credential management, too.
The procedure would be as follows (see the sketch after this list):
The user uploads the image from the browser to your server (I'd recommend a plain form upload, unless you feel OK with uploading via Ajax).
Then your server uses the SDK to save to S3.
You display info back to the user (a link to the image, upload status?).
You could also use pre-signed POSTs, but that seems more advanced and I haven't seen info on it for Node.
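A rough, untested sketch of both hops (all names hypothetical), using Express with the multer middleware for the form upload and the official AWS SDK for the server-to-S3 step:

```javascript
// Browser -> server -> S3, in one small Express app.
var express = require('express');
var multer = require('multer');
var fs = require('fs');
var AWS = require('aws-sdk');

var app = express();
var upload = multer({ dest: '/tmp' }); // temporarily stash uploads on disk
var s3 = new AWS.S3();                 // credentials from env or ~/.aws

// The HTML form posts a file input named "image" to this route.
app.post('/upload', upload.single('image'), function (req, res) {
  s3.upload({
    Bucket: 'my-bucket',                      // hypothetical bucket name
    Key: 'images/' + req.file.originalname,
    Body: fs.createReadStream(req.file.path)
  }, function (err, data) {
    if (err) return res.status(500).send(err.message);
    res.send('Uploaded: ' + data.Location);   // link back to the image
  });
});

app.listen(3000);
```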

How to make sure a request is sent from original software?

I'm currently making an open source browser extension that will send requests to my site. This can easily be done with Ajax: a request is sent to the page action.php.
My site will use PHP. Now the question is: how can I make sure action.php receives the request from the original extension? Griefers could easily send false information to the server, or a fork could be used to send incorrect data. I thought of generating a token of some sort, but anyone could recreate it, I guess.
How can I prevent this situation?
I have some experience with this myself. I've been building an extension with a login and eventually came to the inevitability that security in an extension is inherently difficult.
The issue is that an extension is just a bundle of JS and HTML that anyone can inspect the values of. This means that anyone determined enough to dig through your code can potentially find out how to bypass anything you have built in.
The solution I eventually came to is that the extension itself cannot hold any long-lasting secrets. A session with a timeout is the only safe thing to store. The actual login for my extension is done via a website over HTTPS.
If you are trying to do this without any such login, your only recourse is to make it as difficult as possible to determine what needs to be sent, by using an algorithm that can generate server-verifiable tokens and then publishing only minified code to the web store.
EDIT: Reread the question and noticed that you said you are doing this open source. Without some sort of authentication on the webserver via HTTPS, there is little you can do to stop those determined to bypass your protections because they will be on display in your public repository.
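To make the "server-verifiable tokens" idea concrete, here is a minimal, hypothetical Node sketch of a signed session token with a timeout (the site in the question would do the equivalent in PHP). The secret lives only on the server, never in the extension:

```javascript
var crypto = require('crypto');

var SECRET = process.env.TOKEN_SECRET; // server-side only, never shipped
var TTL_MS = 30 * 60 * 1000;           // assumed 30-minute session timeout

function sign(payload) {
  return crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
}

// Issued once after a successful HTTPS login.
function issueToken(userId) {
  var payload = userId + '.' + Date.now();
  return payload + '.' + sign(payload);
}

// Verified by the server on every request to the sensitive endpoint.
function verifyToken(token) {
  var parts = token.split('.');
  if (parts.length !== 3) return false;
  var expected = sign(parts[0] + '.' + parts[1]);
  var fresh = Date.now() - Number(parts[1]) < TTL_MS;
  return fresh &&
         parts[2].length === expected.length &&
         crypto.timingSafeEqual(Buffer.from(parts[2]), Buffer.from(expected));
}
```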
For sensitive endpoints like this, it would make sense to do the data processing server-side; the client would only have to query the server to process the data.

Upload from Client Browser to Google Cloud Storage Using JavaScript

I am using Google Cloud Storage. I have looked at different methods of uploading to Cloud Storage. The method I find most common is that the file is sent to the server, and from there it is sent to Google Cloud Storage.
I want to move the file directly from the user's web browser to Google Cloud Storage. I can't find any tutorials related to this. I have read through the Google API Client SDK for JavaScript.
Going through the Google API reference, it states that files can be transferred using an HTTP request, but I am confused about how to do it using the API client library for JavaScript.
People here will want me to share some code, but I haven't written any; I have failed to find a method that does the job.
EDIT 1: Untested Sample Code
So I got really interested in this, and had a few minutes to throw some code together. I decided to build a tiny Express server to get the access token, but still do the upload from the client. I used fetch to do the upload instead of the client library.
I don't have a Google cloud account, and thus have not tested this, so I can't confirm that it works, but I can't see why it shouldn't. Code is on my GitHub here.
Please read through it and make the necessary changes before attempting to run it. Most notably, you need to specify the location of the private key file, as well as ensure that it's there, and you need to set the bucket name in index.html.
End of edit 1
Disclaimer: I've only ever used the Node.js Google client library for sending emails, but I think I have a basic grasp of Google's APIs.
In order to use any Google service, we need access tokens to verify our identity; however, since we are looking to allow any user to upload to our own Cloud Storage bucket, we do not need to go through the standard OAuth process.
Google provides what they call a service account: an account we use to identify instances of our own apps accessing our own resources. In a standard OAuth process, we'd need to identify our app to the service, have the user consent to using our app (and thus grant us permission), get an access token for that specific user, and then make requests to the service. With a service account, we can skip the user consent step, since we are, in a sense, our own user. A service account lets us simply use the credentials generated from the Google API console to create a JWT (JSON Web Token), exchange it for an access token, and use that token to make requests to the Cloud Storage service. See here for Google's guide on this process.
In the past, I've used packages like this one to generate JWTs, but I couldn't find any client libraries for encoding them, mostly because JWTs are generated almost exclusively on servers. However, I found this tutorial, which, at a cursory glance, seems sufficient for writing our own encoding algorithm.
I'd like to point out here that opening an app to allow the public free access to your Google resources may prove detrimental to you or your organization in the future, as I'm sure you've considered. This is a major security risk, which is why all the tutorials you've seen so far have implemented two consecutive uploads.
If it were me, I would at least do the first part of the authentication process on my server: when the user is ready to upload, I would send a request to my server to generate the access token for Google services using my service account's credentials, and then I would send each user a new access token that my server generated. This way, I have an added layer of security between the outside world and my Google account, as the burden of the authentication lies with my server, and only the uploading gets done by the client.
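A rough sketch of that server-side token step, following Google's documented service-account JWT flow rather than any client library; the package choices and file names here are assumptions:

```javascript
// Exchange service-account credentials for a Cloud Storage access token.
var jwt = require('jsonwebtoken');            // any RS256 JWT library works
var fetch = require('node-fetch');            // assuming no global fetch
var key = require('./service-account.json');  // key file from the API console

function getAccessToken() {
  var now = Math.floor(Date.now() / 1000);
  // Claims per Google's service-account OAuth documentation.
  var assertion = jwt.sign({
    iss: key.client_email,
    scope: 'https://www.googleapis.com/auth/devstorage.read_write',
    aud: 'https://oauth2.googleapis.com/token',
    iat: now,
    exp: now + 3600
  }, key.private_key, { algorithm: 'RS256' });

  return fetch('https://oauth2.googleapis.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer' +
          '&assertion=' + assertion
  }).then(function (res) { return res.json(); })
    .then(function (body) { return body.access_token; });
}
```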
Anyways, once we have the access token, we can utilize the CORS feature that Google provides to upload files to our bucket. This feature allows us to use standard XHR 2 requests against Google's services, and is essentially designed to be used in place of the JavaScript client library. I would prefer the CORS feature over the client library only because I think it's a little more straightforward and slightly more flexible in its implementation. (I haven't tested this, but I think fetch would work just as well as XHR 2 here.)
From here, we'd need to get the file from the user, along with any information we want about it (read: the file name). Then we make a POST request to https://www.googleapis.com/upload/storage/v1/b/<BUCKET_NAME_HERE>/o (replacing <BUCKET_NAME_HERE> with the name of your bucket, of course), with the access token added to the URL as per the Making authenticated requests section of the CORS feature page, and whatever other parameters you wish to include in the body/query string, as per the Cloud Storage API documentation on inserting an object. An API listing for the Cloud Storage service can be found here for reference.
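To make the shape of that request concrete, an untested sketch using fetch, with the token in the query string as just described (BUCKET_NAME is a placeholder):

```javascript
// Direct-to-bucket upload via the JSON API's media endpoint.
function uploadToBucket(file, accessToken) {
  var url = 'https://www.googleapis.com/upload/storage/v1/b/BUCKET_NAME/o' +
            '?uploadType=media' +
            '&name=' + encodeURIComponent(file.name) +
            '&access_token=' + encodeURIComponent(accessToken);
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': file.type || 'application/octet-stream' },
    body: file // a File object from an <input type="file">
  }).then(function (res) { return res.json(); });
}
```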
As I've never done this before and don't have the ability to test it, the sketch above comes with no guarantees, but I hope my post is clear enough that putting the rest of the code together should be relatively straightforward from here.
Just to set the record straight, I've always found OAuth to be pretty confusing, and have generally shied away from playing with it due to my fear of its unknowns. However, I think I've finally mastered it, especially after this post, so I can't wait to get a free hour to play around with it.
Please let me know if anything I said is not clear or coherent.

Can I call a routine or a function automatically when a File is accessed?

Pardon me if I am asking something really stupid, but this is what I want to implement in my new role as an analytics implementer. Some of our files (mostly PDFs) are stored on a web server (a CDN server) to reduce some load on the application server.
We provide links to these files to all our users across the world. What I want is to track these file downloads whenever they occur. So I just wanted to know: is there any way I can call a function or a routine from which I can make those tracking calls?
Not really.
If you are using a 3rd party web hosting as CDN, then you could simply get the Analytics reports using whatever tool your host offers.
If you are running your own hosting box, you could install almost any analytics software on it to monitor access. Just one example is provided here: http://ruslany.net/2011/05/using-piwik-real-time-web-analytics-on-iis/
The clean, simple way, however, would be to have a simple web application running on that CDN server that accepts file requests and then returns the file (see the sketch after this list). The advantages are that you could:
record whatever statistics you wish off it
use widely available tools like Google Analytics
make dynamic decisions, for example deciding which version of a file to send based on factors like user bandwidth
transparently handle missing files and path changes, so links will be valid forever
send different caching headers for different files
implement very simple access control and policy-based restrictions
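As a minimal sketch of that approach (hypothetical paths, Express for brevity):

```javascript
var express = require('express');
var path = require('path');

var app = express();
var FILE_ROOT = '/var/www/cdn'; // hypothetical location of the PDFs

app.get('/files/:name', function (req, res) {
  // The tracking call: log it, or forward to your analytics tool of choice.
  console.log('download', req.params.name, new Date().toISOString());

  // basename() guards against path traversal via the requested name.
  var file = path.join(FILE_ROOT, path.basename(req.params.name));
  res.download(file, function (err) {
    if (err && !res.headersSent) res.status(404).send('Not found');
  });
});

app.listen(8080);
```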

Pure Javascript app + Amazon S3?

I'm looking to confirm or refute the following:
From what I have read so far, it is not possible to write a web application with only JavaScript (no server-side logic), served from Amazon S3, that also stores data only to S3, if you need to have multiple clients with private data per client.
The issue I see is the Authorization header required for every Ajax call, which would force me to put the signature (and my AWS ID) right there in the page source for everybody to see.
Is that correct, or have I misunderstood the docs?
Are there workarounds?
In short, you are correct.
If your AWS key ends up in any way on the client side, you are in trouble.
A possible solution is, of course, to have the user specify their AWS key for storing their data.
I'm working on a project that will do something similar to this; mine will have the users use their own S3 credentials, which I will store in HTML5 localStorage. It's a bit tricky, but I've got the basics working.
It involves making a JavaScript program that replicates itself into S3, fetches itself from S3, and then transfers credentials and control into the S3-loaded version.
I'm using the excellent SJCL to do signature generation and jQuery's ajax functionality for the parts I can.
My work so far simply initializes the S3-side application and does a test PUT/GET sequence against S3. I also rewrote a jQuery postMessage plugin (which StackOverflow won't let me post for lack of rep) for communicating between my frames.
In my case, I'm trying to fit the entire application into a single HTML file so that I don't have to do as much initial transfer into S3, but perhaps there are other ways to work this out.
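For reference, SJCL signature generation for the S3 REST API of that era (legacy signature version 2, query-string auth) looks roughly like this; it assumes an SJCL build that includes the optional sha1 module, and every name is a placeholder:

```javascript
// Presign an S3 GET URL with legacy signature v2 query-string auth.
function presignGet(accessKeyId, secretKey, bucket, key, expiresInSeconds) {
  var expires = Math.floor(Date.now() / 1000) + expiresInSeconds;
  var stringToSign = 'GET\n\n\n' + expires + '\n/' + bucket + '/' + key;
  var hmac = new sjcl.misc.hmac(
    sjcl.codec.utf8String.toBits(secretKey),
    sjcl.hash.sha1 // needs the sha1 module, not in the default core build
  );
  var signature = sjcl.codec.base64.fromBits(hmac.encrypt(stringToSign));
  return 'https://' + bucket + '.s3.amazonaws.com/' + key +
         '?AWSAccessKeyId=' + accessKeyId +
         '&Expires=' + expires +
         '&Signature=' + encodeURIComponent(signature);
}
```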
iBeans offers a way around this without having to write any server-side code. There's an S3 iBean (a developer is working on it, to be released in the next few days; watch the MuleSoft blog for an announcement) and you can access it right from your JavaScript. The iBean itself runs on a server, so you wouldn't need to store your keys in the JavaScript.
