Fine uploader S3 upload without additional policy sign request - javascript

Is there a way to use Fine Uploader to upload to an Amazon S3 bucket by providing the already-signed policy document, along with the key and the other credentials, all at once, overriding the policy POST request with our own XML API call?
Our company API returns all the credentials including signed policy for the file in one response and is already well established so setting up a signing page is not an option.

This may work for non-chunked uploads since Fine Uploader will only make one request for the signed policy document. However, when uploading files in chunks, the S3 REST API must be used. In that case, a policy document is not used. Instead, a long string of relevant headers for each request must be signed. This signature is then included with the REST call. The headers change with each request, therefore requiring a new signature.
If you want to support chunking and concurrent chunking to S3, you'll need to ensure each request is signed separately via a signature server, or make use of an identity provider to handle this client-side, as demonstrated in our documentation at http://docs.fineuploader.com/branch/master/features/no-server-uploads.html.

API request to a local client page

Could you please advise on the following?
Let's assume I have a local html page stored on my local drive "c:\test.html".
If I open it in a browser, it's treated as a GET request to this page, right?
Is it possible to send, for example, POST request to the same local page, with "fetch"?
And inside "c:\test.html" to check if it was requested with POST method, return something?
It would be something like a local-PC API.
Static HTML pages do not have any request capabilities. What's actually happening here is that there is some sort of server that takes your request and responds with the HTML document. You would need to host your own server of some sort that could take and respond to requests. For this, libraries like express.js work well.
Edit: If you are accessing it through a file:// url, your browser is just reading the file off your drive directly, so you would need some sort of localhosted server.
This is not how it works. When you open a file with your browser, it uses a file protocol, not a HTTP protocol. Look at the URL. You'll see what kind of protocol was used to retrieve the resource.
So no, you cannot send a fetch request to a local file. You have to run a proper server on your localhost and let it handle requests. Local files do NOT handle requests. Servers do.

AWS S3 authenticated user access using presigned URLs?

I would like to host files on a private AWS S3 bucket which can only be accessed by users who are authenticated to my web application. The links to these file downloads must be static.
Simple proxy method:
I know this could be done using a proxy service. In this case the static links would point to the service; the service would validate the requesting user's session and, if valid, respond with the file contents from S3.
Presigned URL proxy method:
However, rather than implementing a proxy to gate access to the files, I was wondering if I could use presigned URLs somehow instead?
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
In this case, the role of the proxy is to just return a presigned URL to the user rather than the actual payload of the file from S3. The end user could then use this presigned URL to download the file directly from S3. What I'm not clear on is how this flow is managed in the browser; I am assuming I would need to write JavaScript to do the following:
Request presigned URL from proxy service
Wait for response
Use the presigned URL provided in the response to download the actual file
Am I on the right track here?
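The three steps described could be sketched in browser JavaScript roughly as follows; the /api/presign path and the { url } response shape are assumptions, and the fetch implementation is injectable so the flow can be exercised without a network:

```javascript
// Sketch of the presigned-URL download flow. The endpoint path
// ("/api/presign") and response shape ({ url }) are assumptions about
// the proxy service, not a fixed API.
async function downloadViaPresign(fileKey, fetchImpl = fetch) {
  // 1. Request a presigned URL from the proxy service
  const presignRes = await fetchImpl(
    `/api/presign?key=${encodeURIComponent(fileKey)}`
  );
  // 2. Wait for the response and read the presigned URL out of it
  const { url } = await presignRes.json();
  // 3. Use the presigned URL to download the actual file from S3
  const fileRes = await fetchImpl(url);
  return fileRes.blob();
}
```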
Simply return a 307 redirect from your server to the presigned URL. E.g. the client requests:
GET /the/file HTTP/1.1
And the server generates a presigned URL and responds with:
HTTP/1.1 307 Temporary Redirect
Location: https://s3.aws..../the/file?...
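A sketch of that redirect handler; the presign function is injected (in practice it would call something like the AWS SDK's getSignedUrl), and the handler shape is framework-agnostic:

```javascript
// Sketch of the 307-redirect approach: generate a presigned URL for the
// requested object and redirect the client to it. The presign function
// is a stand-in for real AWS SDK signing.
function redirectHandler(presign) {
  return (req, res) => {
    const url = presign(req.url); // presigned GET URL for this object
    res.writeHead(307, { Location: url });
    res.end();
  };
}
```

Browsers follow the 307 automatically, so from the client's point of view a plain GET of /the/file just returns the file bytes.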
That's a valid approach.
Beware of expiring credentials. A signed URL is valid for the lesser of its own expiry time (which you control, within limits) and the lifetime of the access credentials used to sign it. If you're already using temporary credentials (which is very good!) you might want to call AssumeRole explicitly to control the expiry time (you can assume a role from a role to get new temporary credentials with a new time limit).
There's another option too: Amazon Cognito. This can bridge the gap to your user accounts and issue per-user short-term credentials directly to your users' browser environments. They can then make API calls to S3 with their own credentials. This has some benefits (you can express user permissions in their profiles, rather than checking them yourself before generating URLs) and some complexity (can I DoS your account with my user credentials, or do you control which APIs I can call? Least privilege really matters when IAM is your only auth tier). On the other hand, IAM calls are free and you don't pay for servers to host them, so this also sounds cost effective if you are using federated identity (with user pools, not so much).

Resumable upload from client JavaScript?

I'm trying to understand if there currently is any way to do resumable uploads (for example to a Google Cloud Storage bucket) from a web client. Looking at FileReader it does not look possible (for big files). Am I missing something?
https://developer.mozilla.org/en-US/docs/Web/API/FileReader
You may want to check the Cloud Storage official documentation for resumable uploads, either for the JSON API or the XML API. You'll basically need to request a resumable session URI to Storage in a first HTTP request and actually upload the file to that URI in a second request, via jQuery's ajax method for example.
You'll see that you'll need to authenticate your request via a bearer token when requesting the resumable session URI. As explained in this SO answer:
You'll either need to have your customers use their own Google credentials (unusual, but makes sense for a third party tool for managing someone else's Google Cloud resources) or use some form of signed URL or similar feature.
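The two-request flow could look roughly like this against the JSON API; the bucket, object name, and token acquisition are assumptions, and the fetch implementation is injectable for testing:

```javascript
// Two-step resumable upload sketch against the Cloud Storage JSON API.
// Step 1 opens a resumable session (the session URI comes back in the
// Location header); step 2 uploads the bytes to that URI.
async function resumableUpload(bucket, name, blob, token, fetchImpl = fetch) {
  // Step 1: request a resumable session URI
  const start = await fetchImpl(
    `https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o` +
      `?uploadType=resumable&name=${encodeURIComponent(name)}`,
    { method: "POST", headers: { Authorization: `Bearer ${token}` } }
  );
  const sessionUri = start.headers.get("Location");
  // Step 2: upload the file to the session URI
  return fetchImpl(sessionUri, { method: "PUT", body: blob });
}
```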
I did not understand the documentation. There is a "slice" method that can be used here, but it is on the File object. See for example "Reading local files in JavaScript - HTML5 Rocks", https://www.html5rocks.com/en/tutorials/file/dndfiles/
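Since File inherits slice() from Blob, chunking a large file does not require reading it into memory first, for example:

```javascript
// Cut a File (or any Blob) into fixed-size chunks via Blob.slice().
// Each chunk is itself a Blob backed by the original file, so nothing
// is read into memory until a chunk is actually uploaded or read.
function sliceIntoChunks(blob, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    chunks.push(blob.slice(offset, Math.min(offset + chunkSize, blob.size)));
  }
  return chunks;
}
```

Each chunk can then be PUT to the resumable session URI with a Content-Range header indicating its offset.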

Onedrive cors download in javascript

I'm trying to process onedrive files in client-side javascript, but first I need a way to use XMLHttpRequest to download the file. Onedrive supports cors for a lot of operations, but for downloading the file into javascript there is the following problem:
As mentioned here: onedrive rest api manual
I can send a request to:
GET https://apis.live.net/v5.0/FILE_ID/content?access_token=ACCESS_TOKEN
and it will reply with a location header redirecting the browser to the file. The problem is when I send these requests through XHR, the browser always sends the Origin header with the request. For the first request I described above, onedrive also replies with an Access-Control-Allow-Origin:* header, so the request is allowed in the browser. However, when the browser is redirected to the actual location of the file, that resource does not have the Access-Control-Allow-Origin header, so the XHR request is denied by the browser (Chrome sends an Origin header set to null for the redirect request).
I've also tried getting the location but not redirecting automatically, and then sending another XHR request, this will set the origin header to the domain of my site, but the result is the same.
As I mentioned in the beginning, I need to process the data in javascript, so I'm not asking about how to download onedrive files to hard drive. I need the data to be accessible by javascript in the webpage.
I know that I can use server side programming to get the file data for me and then send it to the client, but for my application this is not an option(at least this is not what I'm asking for at the moment).
If there is no way to do this, does anyone have an idea why they would implement their api this way? To allow javascript to get the location through cors and redirect but not include a cors header for the redirected resource. Why not just deny cors in the first place? Is this a bug?
The answer, as best as I can tell, is that downloading content cannot be done purely by JavaScript in a browser. Why did they do it this way? You'd have to ask them, but I would guess either a bug, or some unspecified "security concerns". For what it's worth, they seem to think that downloading content is CORS compliant in the documentation here: https://dev.onedrive.com/misc/working-with-cors.htm:
To download files from OneDrive in a JavaScript app you cannot use the /content API, since this responds with a 302 redirect. A 302 redirect is explicitly prohibited when a CORS preflight is required, such as when providing the Authorization header.
Instead, your app needs to select the #content.downloadUrl property, which returns the same URL that /content would have redirected to. This URL can then be requested directly using XMLHttpRequest. Because these URLs are pre-authenticated they can be retrieved without a CORS preflight request.
However, to the best of my knowledge, they are wrong. Just because you don't need a preflight request doesn't mean that the response is CORS-compliant. You still need an Access-Control-Allow-Origin header on the response.
For anyone wondering, this is still an issue in the new Graph API (which is essentially a proxy API to the OneDrive API, as I understand it). The same basic issue is still present - you can get a download URL from your items, but that URL points to a non-CORS-compliant resource, so it doesn't do you a whole lot of good.
I have an active issue open with Microsoft here about this issue. There has been some response to my issue (I got them to expose the download URL through the graph API), but I'm still waiting to see if they'll come up with a real solution to downloading content from JavaScript.
If I get a solution or real answer on that issue, I'll try to report back here so others in the future can have a real answer to reference.
This is not an answer, I cannot comment yet.
Last week the new API for onedrive was released. http://onedrive.github.io/index.htm
Unfortunately it will not solve the problem.
https://api.onedrive.com/v1.0/drive/root:{path and name}:/content?access_token={token}
Will still redirect to a resource somewhere at https://X.files.1drv.com/.X.
Which will not contain any Access-Control-Allow-Origin headers. Same goes for the Url "#content.downloadUrl" in the JSON response.
I hope that microsoft will address this problem in the very near future, because at the moment the API is of very limited use: you cannot process file contents from onedrive in HTML5 apps, apart from building the usual file browser.
The only solution I see at the moment would be a Chrome app, which can request the URL without CORS restrictions. See https://developer.chrome.com/apps/angular_framework
Box does exactly the same thing for download requests. I have not found any way around this problem without involving a server, because the browser will not let your program access the contents of the 302 redirect response. For security reasons I am not convinced of, browsers follow redirects automatically without allowing the program to intervene.
The way we finally worked around this was:
1. The browser app sends the GET request to our server, which forwards it to the cloud provider (Box/OneDrive).
2. The server then does NOT follow the 302 redirect response from Box or OneDrive.
3. The server instead returns to the browser app the content of the Location field in the 302 response header, which contains the download URL.
4. The JavaScript in the browser app then downloads the file using that URL.
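The "don't follow the redirect" part of the workaround can be sketched with fetch's redirect: "manual" option, which stops the server-side request from chasing the 302:

```javascript
// Server-side sketch: request the provider's /content endpoint but do
// NOT follow the 302; instead hand the Location value back to the
// browser app, which then fetches the file itself.
async function getDownloadUrl(contentUrl, fetchImpl = fetch) {
  const res = await fetchImpl(contentUrl, { redirect: "manual" });
  if (res.status >= 300 && res.status < 400) {
    return res.headers.get("Location");
  }
  throw new Error(`expected a redirect, got ${res.status}`);
}
```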
You can now just use the "#content.downloadUrl" property of the item in your GET request. Then there is no redirection.
From https://dev.onedrive.com/items/download.htm:
Returns a 302 Found response redirecting to a pre-authenticated download URL for the file. This is the same URL available through the #content.downloadUrl property on an item.
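A sketch of that two-request pattern. The JSON key is written here as @content.downloadUrl (the answers above write #content.downloadUrl), and the Graph API's equivalent is @microsoft.graph.downloadUrl; treat the exact key as an assumption to verify against your API version:

```javascript
// Fetch the item's metadata, read the pre-authenticated download URL
// from it, then fetch the file bytes directly (no redirect involved).
// The "@content.downloadUrl" key is an assumption; Graph exposes
// "@microsoft.graph.downloadUrl" instead.
async function downloadItem(metadataUrl, fetchImpl = fetch) {
  const meta = await fetchImpl(metadataUrl);
  const item = await meta.json();
  const url = item["@content.downloadUrl"];
  const file = await fetchImpl(url);
  return file.blob();
}
```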

Using FineUploader with optional https?

I have setup FineUploader on a site and I included a check box that allows users to upload files using HTTPS if the want to.
Unfortunately, if the user accesses the site over HTTP and then tries to upload over SSL, it errors out; I assume because of CORS issues. I assume it is a CORS issue because if I access the site over HTTPS and upload over SSL it works fine.
I found some documentation about enabling CORS support, but it appears that you either need to make it so only CORS requests will be made or none will be made. In my situation there will be CORS request some times and not others.
Does anyone know of a good work around for this? Should I just reload the entire page using HTTPS when the checkbox is clicked?
If you're uploading straight to Amazon S3, see the note in the official docs: "SSL is also supported, in which case your endpoint address must start with https://". In the script within your uploaderpage.html file:
request: {
  endpoint: 'https://mybucket.s3.amazonaws.com', // note the "https://" before the bucket name, to work over https
  accessKey: 'AKIAblablabla' // as per the specific IAM account
},
This will still work if uploaderpage.html is served over http (or you could populate the endpoint value dynamically if you need flexibility re endpoint).
This will help you avoid the mixed content error when uploading over https, "requested an insecure XMLHttpRequest endpoint", which happens if the page is https but you request a http endpoint.
Just to reiterate what I've mentioned in my comments (so others can easily see this)...
Perhaps you can instantiate Fine Uploader after the user selects HTTP or HTTPS as a protocol for uploads. If you must, you can enable the CORS feature via the expected property of the cors option. Keep in mind that there are some details server-side you must address when handling CORS requests, especially if IE9 or older is involved. Please see my blog post on the CORS feature for more details.
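One way to sketch that: derive the endpoint from the checkbox before constructing the uploader, so each instance talks to exactly one protocol. The qq.s3.FineUploader wiring shown in the comment is illustrative:

```javascript
// Build the S3 endpoint from the user's protocol choice before
// instantiating Fine Uploader, so no CORS-mixing ever happens.
function buildS3Endpoint(bucket, useHttps) {
  return `${useHttps ? "https" : "http"}://${bucket}.s3.amazonaws.com`;
}

// Hypothetical browser-side wiring (checkbox and bucket name assumed):
// const uploader = new qq.s3.FineUploader({
//   request: {
//     endpoint: buildS3Endpoint("mybucket", checkbox.checked),
//     accessKey: "AKIA..."
//   }
// });
```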
