I'm trying to add an image uploader to my website. I have a JavaScript script that uses Jcrop (the script is not mine; I used one I found on the Internet); this script takes an image file from the computer, crops it, and passes it to PHP. It starts by reading the form output into a JavaScript File object:
var oFile = $('#image_file')[0].files[0];
This is the key line of the script; everything else is derived from oFile. I want to let users upload pictures by adding a web link (I already made it possible to save image files on the localhost by posting a link). How can I open an image from my localhost with JavaScript and put it in the same format as the oFile variable in the line above, so that my script can work with it?
You can use binary Ajax in modern browsers to fetch a resource as a Blob, which your script can treat much like a file:
var oReq = new XMLHttpRequest();
oReq.open("GET", fileURLGivenByUser, true);
oReq.responseType = "blob"; // ask the browser for the response body as a Blob
oReq.onload = function(oEvent) {
    var oFile = oReq.response; // a Blob, usable wherever a File is expected
    processTheFileSomehow(oFile);
};
oReq.send();
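Note that the response is a Blob rather than a true File. If later code needs File-specific fields such as name, you can wrap it; a small sketch, where the file name is a made-up placeholder (a fetched resource has no intrinsic file name):

var oBlob = oReq.response;
// Wrap the Blob in a File; "image.png" is only a placeholder name.
var oFile = new File([oBlob], "image.png", { type: oBlob.type });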
Fetching the URL this way will work as long as either:
fileURLGivenByUser is on the same domain as the page running the script (e.g., your cropping script runs on foo.com and the image link is also on foo.com), or
the target image resource is served with the Access-Control-Allow-Origin: * CORS response header (e.g., your cropping script runs on foo.com, the image link is on bar.com, and bar.com serves the file with acceptable CORS headers).
So, for your user to use a localhost link, the user must be running a local Web server, and that Web server must serve its images with Access-Control-Allow-Origin: * or Access-Control-Allow-Origin: whateverdomainyouuse.com (i.e., whatever domain your cropping script runs on).
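For illustration, here is a minimal sketch of what that local server would have to do, using Node's built-in http and fs modules (the port and file path are assumptions, not part of your setup):

var http = require("http");
var fs = require("fs");

http.createServer(function (req, res) {
    // Allow any origin to fetch this image via Ajax.
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.setHeader("Content-Type", "image/png");
    fs.createReadStream("./image.png").pipe(res);
}).listen(8000);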
If these CORS requirements are too restrictive for your users (they likely are), you can run a server-side proxy on your host. For example, when the browser requests
http://mydomain.com/proxy/http://targetdomain.com/image.png
your server fetches and responds with the contents of
http://targetdomain.com/image.png
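A minimal sketch of such a proxy in Node (the /proxy/ prefix matches the example URL above; a real version would need to validate the target so it doesn't become an open proxy, and for brevity this one only handles http:// targets):

var http = require("http");

http.createServer(function (req, res) {
    // Everything after "/proxy/" is treated as the target URL.
    var target = req.url.replace(/^\/proxy\//, "");
    http.get(target, function (upstream) {
        res.writeHead(upstream.statusCode, upstream.headers);
        upstream.pipe(res); // stream the remote image back to the browser
    }).on("error", function () {
        res.writeHead(502);
        res.end();
    });
}).listen(8000);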
Related
I'm using the Firefox console to run some JS from a GitHub page (https://github.com/user/repo/pull/1/files, specifically).
I'd like to be able to load the content of a file of the repo, and for that I'm using
var client = new XMLHttpRequest();
client.open("GET", "https://github.com/user/repo/raw/master/path-to-file", true);
client.onreadystatechange = function() {
    if (client.readyState === XMLHttpRequest.DONE) {
        alert(client.responseText + client.responseURL);
    }
};
client.send();
The problem is that the content actually lives at https://raw.githubusercontent.com/user/repo/master/path-to-file, and the URL I'm loading (the one the Raw button points to on the GitHub page) redirects to it.
Firefox errors in the console with:
Content Security Policy: The page’s settings blocked the loading of a resource at
https://raw.githubusercontent.com/user/repo/master/path-to-file
(“connect-src https://github.com https://uploads.github.com
https://status.github.com https://collector.githubapp.com
https://api.github.com https://www.google-analytics.com
https://github-cloud.s3.amazonaws.com wss://live.github.com”).
Is there some way to load that file?
I think I need to run the request from the GitHub page so that the file is accessible even if it's in a private repo (since the user's session is used when sending the request).
I'm sending the request to a page that is in the CSP, and that page does the redirect, so I don't see why the CSP should block me, as I didn't try to access an "unauthorized" resource.
The alternative would be to load the GitHub web page at https://github.com/user/repo/blob/master/path-to-file, but that means I'd have to parse it to strip all the display markup, and it's brittle, as every change to GitHub might break the script.
I'm building an app in which the user sees a set of downsized images and then presses "OK" for the app to download all of the original files, put them into a zip file, and send the zip file.
The app uses Polymer, Polymerfire, and Firebase (including Storage).
During the upload of the images, I save in the database both the download URL and the storage reference, for both the original file and the downsized one.
When I put the download URL in the iron-image element to show the images in the browser, everything works perfectly: the downsized images are shown on the screen.
When I try to download the full-size images via XMLHttpRequest(), I get a CORS error.
I can't understand why; both requests come from the same app, so why two different CORS responses?
Here is the code for the XMLHttpRequest() (mostly copied from the Firebase documentation):
for (var z = 0; z < visita.immagini.length; z++) {
    var immagine = visita.immagini[z];
    var storage = firebase.storage();
    var pathReference = storage.ref('immagini/' + immagine.original.ref);
    pathReference.getDownloadURL().then(function(url) {
        var xhr = new XMLHttpRequest();
        xhr.responseType = 'blob';
        xhr.onload = function(event) {
            var blob = xhr.response;
            console.log(blob);
        };
        xhr.open('GET', url);
        xhr.send();
    }).catch(function(error) {
        console.log(error);
    });
}
And here is the error response:
XMLHttpRequest cannot load ***** [image link] ******. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
Note that if I copy the ***** [image link] ****** and put it in another tab of the browser, I can see it without problems.
I finally found some information on CORS + Storage, as asked. Check out the Firebase docs on Storage here: https://firebase.google.com/docs/storage/web/download-files#cors_configuration.
Firstly, you will need gsutil (https://cloud.google.com/storage/docs/gsutil_install).
Then make a file named cors.json somewhere in your project with the following content:
[
    {
        "origin": ["*"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]
Finally run:
gsutil cors set cors.json gs://<your-cloud-storage-bucket>
These steps worked for me!
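If you'd rather not open the bucket to every origin, the same file can list specific origins instead; a variant sketch (the domains below are placeholders for wherever your app runs):

[
    {
        "origin": ["http://localhost:3000", "https://yourapp.example.com"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]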
This is also answered here: Firebase Storage and Access-Control-Allow-Origin, which I found after answering.
The section on headers in the Firebase “Deployment Configuration” docs says that to enable cross-origin requests for images, you must add to your firebase.json something like this:
"headers": [ {
"source" : "**/*.#(jpg|jpeg|gif|png)",
"headers" : [ {
"key" : "Access-Control-Allow-Origin",
"value" : "*"
} ]
} ]
When I put the download URL in the iron-image element to show the images in the browser, everything works perfectly, … When I try to download the full-size images via XMLHttpRequest(), I get a CORS error. I can't understand why; both requests come from the same app, so why two different CORS responses?
Because browsers block cross-origin XHR requests unless the server receiving the requests uses CORS to allow them, by responding with an Access-Control-Allow-Origin: * header.
Note that if I copy the ***** [image link] ****** and put it in another tab of the browser, I can see it without problems.
That’s expected. When you put a URL into your browser’s address bar, it’s not a cross-origin request—instead it’s just you navigating directly to a URL.
But when you put that URL into the JavaScript for a Web application running at some origin on the Web, then when that request is sent, it’s not you navigating directly to the URL but instead it’s some Web application making a cross-origin request to another Web site.
So browsers by default block such cross-origin requests from frontend JavaScript code. But to opt-in to receiving such requests, a site can include the Access-Control-Allow-Origin header in its response to the browser. If the browser sees that header, it won’t block the request.
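Concretely, once CORS is configured, the exchange looks roughly like this (paths and headers abridged; the origin is the one from the error message above, and the host assumes a Firebase Storage download URL):

GET /some-image.png HTTP/1.1
Host: firebasestorage.googleapis.com
Origin: http://localhost:3000

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: image/png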
For more details, see the HTTP access control (CORS) article at MDN.
I'm developing a JavaScript application which uses OAuth2 authentication. Now I want to load/show an image from the server which is behind this authentication mechanism. Therefore, I send an XMLHttpRequest to the REST server and get the URI of the image as the response.
After the request, I try to set the URI as the src of an image, and the application responds with 401.
How can I tell the browser to reuse my authentication for this image as well?
This is part of the XMLHttpRequest for getting the URI.
var xhr = new XMLHttpRequest();
xhr.open("GET", url, true);
xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
xhr.setRequestHeader('Authorization', 'bearer ' + token);
xhr.onreadystatechange = function() {
    console.log(xhr);
    if (xhr.readyState == 4 && xhr.status == 200) {
        var img = document.createElement('img');
        // The browser fetches img.src itself, without the Authorization header,
        // which is why this second request comes back 401.
        img.src = xhr.responseURL;
        document.body.appendChild(img);
    }
};
xhr.send(null);
Did I forget something?
An image is a static file, and I doubt you're going to be able to get OAuth2 protection when you request it via a simple URL. While you can add OAuth2 information to your URL, that's probably not a good thing to do, because it will expose to the client data which should be private and secure.
Maybe you can consider serving the image from a normal protected endpoint as a byte[] or Base64 string, which you can then render in your client, if you really need to protect the image itself.
So have a protected endpoint which serves the content of the image itself. Of course, you'd only do this if you actually need the image to be private and secure. If you don't, then just let it be public and serve it from a separate CDN. It doesn't really need to be behind an OAuth2 system.
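A minimal sketch of such a protected endpoint, assuming Node with Express; requireOauth stands in for whatever middleware actually validates the bearer token, and the image path is a placeholder:

var express = require("express");
var fs = require("fs");
var app = express();

// requireOauth is a hypothetical middleware that rejects requests
// without a valid bearer token.
app.get("/api/images/:id", requireOauth, function (req, res) {
    var file = "/srv/images/" + req.params.id + ".png"; // placeholder path
    res.json({ data: fs.readFileSync(file).toString("base64") });
});

app.listen(3000);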
Though I came here searching for a better answer than the one I have to offer, here are my current solutions:
Serve the image as Base64 / JSON via REST -> supports OAuth2, but increases the file size and just feels very wrong (a client-side sketch follows below).
Use a cookie to send the authentication.
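For the first option, the client side could look roughly like this (a sketch assuming a hypothetical endpoint like the one above, returning { "data": "<base64>" }):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/images/42", true); // hypothetical endpoint
xhr.setRequestHeader("Authorization", "bearer " + token);
xhr.onload = function () {
    if (xhr.status === 200) {
        var img = document.createElement("img");
        // Rebuild a displayable URL from the Base64 payload.
        img.src = "data:image/png;base64," + JSON.parse(xhr.responseText).data;
        document.body.appendChild(img);
    }
};
xhr.send();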
On a website we are working on, we have a download link that must be served to the user. However, when fetching the URL, the server can either serve an error message as JSON (appropriate headers and an appropriate HTTP status code will then be set) or serve the file.
Currently, we are using an iframe to download this file, but this prevents us from viewing the error message. While this can be done in principle, it cannot be done cross-domain, and reading the error data seems to differ between browsers (as the browser will interpret the JSON as HTML and create HTML tags around it).
I have considered using XMLHttpRequest Level 2 to download the file and serve it to the user; however, the downloaded file can be large, and thus it must be streamed to the user.
Therefore, I'm looking for a way to either download a file or read the error message, depending on the HTTP status code.
API Setup
I'm able to change the API as I wish; however, the API is designed to be a public REST API. This means the API should stay as simple as possible, and workarounds to make specific client-side code work should not cause any hassle for other clients (thus the API and client-side code stay decoupled).
The file being downloaded is encrypted on the server and can only be decrypted with information given in the URL. Therefore, chunked transfer is difficult, as extracting a chunk would require the server to decrypt the whole file.
the appropriate headers and an appropriate http status code will then be set
If that statement is correct, you could use the same concept as preflighted requests for cross-site requests. Preflighted requests first send an HTTP request with the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send.
In your case, instead of having an OPTIONS request sent automatically, you could manually send a HEAD request. The HEAD method is identical to GET, except that the server does not return a message body in the response. The information contained in the HTTP headers in response to a HEAD request should be identical to the information sent in response to a GET request. Since you're only fetching the headers and not the body, file size is irrelevant; nothing is downloaded except the headers.
Then you can read these headers, or even just the status code, and depending on the result of this manual preflight, decide whether you should stream the file to the user or fetch the error message.
A basic implementation without knowledge of your project using status code could be the following:
function canDownloadFile(url, callback) {
    var http = new XMLHttpRequest();
    http.open('HEAD', url);
    http.onreadystatechange = function() {
        if (http.readyState === XMLHttpRequest.DONE) {
            callback(http.status);
        }
    };
    http.send();
}

var canDownloadCallback = function(statusCode) {
    if (statusCode === 200) {
        // The HEAD request returned an OK status code, the GET will do the same,
        // let's download the file...
    } else {
        // The HEAD request returned something else, something is wrong,
        // let's fetch the error message and maybe display it...
    }
};
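Wiring the two together is then a single call (the URL is a placeholder):

canDownloadFile("/path/to/file", canDownloadCallback);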
You can have the iframe return JS code which sends a message to the parent window via postMessage; this way you can capture errors in the host window. There wouldn't be any problems with cross-domain use, and content would easily be streamed to the browser as it was before.
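A minimal sketch of the idea, assuming the server answers the error case with a small script instead of the file (the message shape is made up):

// What the server returns instead of the file when something goes wrong:
// <script>parent.postMessage({ error: "File not found" }, "*");</script>

// In the host window, listen for that message:
window.addEventListener("message", function (event) {
    // In production, check event.origin against the download server here.
    if (event.data && event.data.error) {
        alert(event.data.error);
    }
});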
You could use a single request, return the response as a Blob, and check Blob.type to determine whether to use a FileReader's .result to retrieve the JSON text from the Blob and display the error message, or to set the src of an <img> element using URL.createObjectURL:
var result = document.querySelector("div");

fetch("/path/to/resource")
    .then(response => response.blob())
    .then(blob => {
        if (blob.type === "application/json") {
            var reader = new FileReader();
            reader.onload = (e) => {
                result.innerHTML = JSON.parse(e.target.result).error;
            };
            reader.readAsText(blob);
        } else {
            var img = document.createElement("img");
            img.src = URL.createObjectURL(blob);
            result.appendChild(img);
        }
    });
plnkr http://plnkr.co/edit/wrzLNm1Sp4k1mfNrYOlQ?p=preview
For testing, I downloaded images from the net and uploaded them using Valum file upload in Chrome. Chrome is not sending the session cookie along with these request headers (I don't see it on the server side, though I do see it in the developer tools). Does Chrome know that these images are from a different domain? What is happening? Is there a workaround for this, to pass the session id (as a cookie)? It is also happening in IE10, which makes me believe it is some standard and not just a Chrome issue. The problem is not there in Firefox/Safari/Opera.
It is fine when uploading to localhost; only when uploading to a different server with a domain name is there this problem, leading to a new session being created for the upload.
Update:
I have added xhr.withCredentials = true; still no use.
I also added this on the server side for the upload URL:
res.setHeader 'Access-Control-Allow-Origin', '*'
res.setHeader 'Access-Control-Allow-Credentials', true
I don't know how helpful this would be, because I would have already sent the upload file, and the response header will not be of much help.
Basically, the problem is that I don't have access to the session variable on the server side: since the session id/sid cookie is not coming back, I am not able to save some of the upload details into the current session (because this is a new session).
Update:
I tried creating an image on the desktop using Paint; even then, Chrome would not send the cookies. It really drives me crazy...
First of all, to get the basics out of the way: this is unrelated to the origin of the image. Chrome and other browsers don't care where you get your images.
It's rather difficult to guess exactly what's going on (it would have helped to see a jsfiddle or some more setup explanation), but based on what I'm guessing, you might be using different domains for the page where the upload button is hosted and the target URL where you're sending your files (even using SSL for one and plain HTTP for the other makes them different). Even different subdomains will not allow cookies to be passed if the cookies were not set with a base domain (yourdomain.com).
So, if subdomains are the problem, you know what to do: set a base domain so your cookies go out on any subdomain.
If it's HTTP vs. HTTPS, you need to always use HTTPS (or HTTP), because you can't share cookies between the two.
If that's not it, or if you're using completely different domains, you can access your cookies locally via script (if they're not marked HttpOnly) and add them to the upload request. Valum 2.0 (I don't know about v1.0) lets you add parameters to the request like so:
var uploader = new qq.FileUploader({
element: document.getElementById('file-uploader'),
action: '/server-side.upload',
// additional data to send, name-value pairs
params: {
param1: 'value1',
param2: 'value2'
}
});
You can't set cookies on a domain which is not the page's domain via script, so if you're using completely different domains, your only choice is to send the session id via request params.
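A sketch of that idea, assuming the session cookie is named sessionid and is not marked HttpOnly (both assumptions), and that the server is changed to read the session id from the extra param:

// Pull one cookie's value out of document.cookie; the name is an assumption.
function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? match[1] : null;
}

var uploader = new qq.FileUploader({
    element: document.getElementById('file-uploader'),
    action: '/server-side.upload',
    params: {
        sessionid: getCookie('sessionid')
    }
});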
It is possible that the uploader is using Flash under some circumstances to do the upload. There is a bug in Flash which prevents cookies from being sent for these types of requests, which would explain the behaviour you are seeing. The workaround is to pass in the session id and transmit it in a different way, e.g. in the querystring.