I have a standard file upload set up using this script. When the upload completes, I send back JSON telling the client that the upload went OK, something like this:
{"done": true, "error": "No error"}
When I do the upload on Firefox, everything works out smoothly, but on IE9 / Chrome it breaks. IE tells me that I need to download the file, something like this image:
I thought that the issue was the headers submitted to the client and I tried setting the content type to:
application/javascript
text/javascript
The files are stored properly and the answer comes back without any corruption: no encoding issues, not gzipped, nothing like that.
Any ideas?
EDIT: Forgot to add the link on the "this" and also, it's an older version of the plugin, not the current one.
I'll answer the question myself because I've found a solution; at least it works...
Thing is, when sending a request using an iframe, it seems the content type of the response shouldn't be application/json or application/javascript or anything like them. My solution was to send the response as text/html and do a JSON.parse on the client, and it works like a charm.
Since all of my Ajax calls specify that I expect JSON, it works OK when I make plain Ajax calls as well, because jQuery handles the whole conversion. The only thing that worries me is any performance problem on the client, but I see no signs of trouble just yet...
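For illustration, here's a minimal servlet-style sketch of the server side of this workaround (this assumes a Java backend and a hypothetical handler; adapt to whatever your server actually uses):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // ... store the uploaded file here ...
        // Send the JSON body with a text/html content type so the browser
        // receiving it inside the hidden iframe doesn't offer it as a download.
        resp.setContentType("text/html");
        resp.getWriter().write("{\"done\": true, \"error\": \"No error\"}");
    }
}

On the client, the iframe's body text is then run through JSON.parse before use.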
Hope anybody who runs into this problem finds my answer helpful!
I had this problem with the same upload widget and IE 8 in the past.
header('Content-Type: application/json') fixed it for me. Did you try this as well?
Related, as per here: Can't upload file attachments to phpBB3 forum on IIS
I'm having a problem with the plupload implementation in phpBB3.
I find that I get through
phpbb.plupload.uploader.bind('FilesAdded', function(up, files)
but never get to
phpbb.plupload.uploader.bind('FileUploaded', function(up, file, response)
unless my file is very small (< 5 KB). Is this simply down to responses from the server (or lack thereof)? Any tips on figuring out the actual problem, so I can stop trying out a bunch of random crap?
EDIT:
Fiddler shows that the upload simply doesn't get any response from my server. Using the un-minified plupload files I can see that (I think) everything for the XMLHttpRequest is properly constructed. I essentially get through uploadChunk to xhr.send, but never get to xhr.onload.
How do I debug the server-side problem?
It turns out this was a problem with a generic reverse-proxy rule on our firewall. Adding a rule to send HTTP(S) traffic directly to our webserver (rather than through the reverse-proxy table) fixed things.
The problem:
I work on an internal tool that allows users to upload images - and then displays those images back to them and others.
It's a Java/Spring application, and I have the benefit of only needing to worry about IE11 and Firefox v38+ (Chrome v43+ would be a nice-to-have).
After first developing the feature, it seems that users can just create a text file like:
<script>alert("malicious code here!")</script>
and save it as "maliciousImage.jpg" and upload it.
Later, when that image is displayed inside image tags like:
<img src="blah?imgName=foobar" id="someImageID">
actualImage.jpg displays normally, and maliciousImage.jpg displays as a broken link - and most importantly no malicious content is interpreted!
However, if the user right-clicks on this broken link and clicks 'view image'... bad things happen.
The browser does 'content sniffing', a concept which was new to me: it detects that 'maliciousImage.jpg' is actually a text file, and very kindly renders it as HTML without hesitation. Any script tags are passed to the JavaScript interpreter and, as you can imagine, we don't want this.
What I've tried so far
In short, every possible combination of response headers I can think of to prevent the browser from content-sniffing. All the answers I've found here on Stack Overflow, and in other docs, imply that setting the Content-Type header should prevent most browsers from content-sniffing, and that setting X-Content-Type-Options should prevent some versions of IE.
I'm setting X-Content-Type-Options to nosniff, and I'm setting the response content type. The docs I've read lead me to believe this should stop content-sniffing.
response.setHeader("X-Content-Type-Options", "nosniff");
response.setContentType("image/jpeg"); // note: the registered MIME type is image/jpeg, not image/jpg
I'm intercepting the response and these headers are present, but seem to have no effect on how the malicious content is processed...
I've also tried detecting which images are and are not malicious at the point of upload, but I'm quickly realizing this is very much non-trivial...
End goal:
Naturally, any output at all for images that aren't really images (garbled nonsense, an unhandled exception, etc.) would be better than executing the text file as HTML/JavaScript in the clear, but displaying any malicious HTML as escaped/CDATA'd plain text would be ideal... though maybe a bit impractical.
So I ended up fixing this problem but forgot to answer my own question:
Step 1: blocking invalid images
To get a quick fix out, I added some fairly blunt code that checks whether an image is actually an image, both during upload and before serving it, using the ImageIO library:
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.imageio.ImageIO;
//......
// 'Image' here is our domain attachment type (it has getData()), not java.awt.Image
Image img = attBO.getImage(imgId);
InputStream in = new ByteArrayInputStream(img.getData());
try {
    // ImageIO.read returns null (rather than throwing) when no registered
    // reader recognises the bytes...
    BufferedImage parsed = ImageIO.read(in);
    // ...so this throws a NullPointerException for non-images, which the
    // catch below converts into our own exception.
    parsed.getWidth();
} catch (Exception e) {
    throw new myCustomException("Invalid image");
}
Now, initially I'd hoped that would fix my problem, but in reality it wasn't that simple; it just made generating a payload more difficult.
While this would block:
<script>alert("malicious code here!")</script>
It's very possible to generate a valid image that's also an XSS payload (for example, by tucking the markup into metadata or crafting a file that parses as both a valid image and HTML); it just takes a little more effort...
Step 2: framework silliness
It turned out there was an entire post-processing workflow that I'd never touched, which did things such as append tokens to response bodies and use additional frameworks to decorate responses with CSS, headers, footers, etc.
This meant that, although the controller was explicitly returning image/png, the post-processing was grabbing that bytestream and wrapping it in a header and footer to form a fully qualified 'view'; this view would always have the content type text/html and thus was never displayed correctly.
The crux of this problem was that my controller was directly returning an image, in a RESTful fashion, when the rest of the framework was built to handle controllers returning full fledged views.
So I had to step through this workflow and create exceptions for the controllers in my code that returned raw data in a RESTful fashion rather than full views.
For example, with SiteMesh it was just an exclude (as always, a simple fix once I understood the problem...):
<decorators defaultdir="/WEB-INF/decorators">
    <excludes>
        <pattern>*blah.ctl*</pattern>
    </excludes>
    <decorator name="foo" page="myDecorator.jsp">
        <pattern>*</pattern>
    </decorator>
</decorators>
...and then some other bespoke post-invocation interceptors.
Step 3: Content negotiation
Now I finally got to the stage where only raw image bytes were being served and no view was being specified or explicitly generated.
A Spring feature called 'content negotiation' kicked in. It tries to reconcile the Accept header of the request with the message converters it has on hand to produce such responses.
Because Spring by default doesn't have a message converter that produces image/png responses, it was falling back to text/html, and I was still seeing problems.
Now, were I using spring 4, I could've simply added the annotation:
@Produces("image/png")
to my controller - simple fix...
Step 4: Legacy dependencies
...but because I only had Spring 3.0.5 (and couldn't upgrade it), I had to try other things.
I tried registering new message converters, and also adding a post-method interceptor to simply change the content type back to image/png, but both were hacky headaches.
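For anyone on a newer Spring (3.1+), that message-converter registration looks roughly like the sketch below. This is my reconstruction under those assumptions, not the configuration from the project above:

import java.util.List;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.ByteArrayHttpMessageConverter;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        // Lets Spring write byte[] return values straight to the response,
        // so an image-producing controller no longer falls back to a view.
        converters.add(new ByteArrayHttpMessageConverter());
    }
}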
In the end I just exposed the request/response in the controller and wrote my image directly to the response body, circumventing Spring's content negotiation altogether (see the sketch below).
....and finally my image was served as an image and displayed as an image - and no injected code was executed!
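A minimal sketch of that direct-write approach (the controller, mapping and AttachmentBO type are illustrative; attBO is the same hypothetical DAO as in the validation snippet above):

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class ImageController {
    private AttachmentBO attBO; // hypothetical DAO, injected elsewhere

    @RequestMapping("/image")
    public void serveImage(@RequestParam("imgId") long imgId,
                           HttpServletResponse response) throws IOException {
        byte[] data = attBO.getImage(imgId).getData();
        response.setContentType("image/png");
        response.setHeader("X-Content-Type-Options", "nosniff");
        response.setContentLength(data.length);
        // Writing to the raw response and returning void tells Spring the
        // request is fully handled: no view resolution or content
        // negotiation ever touches the bytes.
        response.getOutputStream().write(data);
    }
}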
That sounds odd, because it works perfectly elsewhere. Are you sure the X-Content-Type-Options header is present in the responses?
Here is a demo I built a while back, with a file that's simultaneously valid HTML, GIF and JavaScript. As you can see, it first loads as HTML, but then loads itself as an image and as a script (which executes):
http://research.insecurelabs.org/content-sniffing/gifjs.html
However if you load it using the "X-Content-Type-Options: nosniff" header, the script no longer executes:
http://research.insecurelabs.org/content-sniffing/nosniff/gifjs.html
Btw, the image renders properly in FF/IE, but not in Chrome.
Here is a demo, where I attempted what you described:
http://research.insecurelabs.org/content-sniffing/stackexchange.html
The first image is without nosniff and the second is with it, and it seems to work as intended: the second one does not run the script when opened with "view image".
Edit:
Firefox doesn't seem to support X-Content-Type-Options: nosniff
So you should also add "Content-Disposition: attachment; filename=image.gif" or similar to the images. The image will load normally when referenced from an image tag, but if you open the URL directly, you will force a download instead of showing the image in the browser.
Example: http://research.insecurelabs.org/content-sniffing/attachment/
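In servlet terms, that combination of headers would be set roughly like this (a sketch assuming a Java backend; the helper name and filename are illustrative):

import javax.servlet.http.HttpServletResponse;

public final class ImageHeaders {
    static void apply(HttpServletResponse response) {
        response.setContentType("image/gif");
        // Stops content sniffing in IE and Chrome (older Firefox ignores it).
        response.setHeader("X-Content-Type-Options", "nosniff");
        // An <img> tag still renders the image, but navigating to the URL
        // directly forces a download instead of displaying it.
        response.setHeader("Content-Disposition", "attachment; filename=image.gif");
    }
}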
adeneo is pretty much spot-on. You should use whatever image library you want to check if the uploaded file is a valid file for the type it claims to be. Anything the client sends can be manipulated.
In our web portal we generate PDFs for certain kinds of data. The user downloads the PDF by clicking an <a> tag that references something we return with content-type: application/pdf;charset=utf-8
This works well when it works; the browser realizes that it is getting a PDF file and opens an internal or external PDF reader, or asks the user to save the file, depending on browser and user configuration.
We have some cases where we may fail to generate the PDF, though. At first we didn't handle the error: a NullPointerException fell through and we got an ugly new page full of JSON-formatted garbage. Then we tried returning an empty result, which the browser thinks is fine, so it just saves or opens an empty file. Then I tried returning a redirect, which confused Chrome: it showed an alert telling the user that something strange was happening.
The href in the tag has the format "/module/showmypdf.cmd?pdfid=67482". This, as I said, works fine when a valid PDF is returned.
So, is there any kind of best practice for error handling when it comes to sending non-HTML files to browsers? Is there something else I could try to make the browser interpret my response as a redirect?
Ok I figured out why the redirect didn't work. I was doing this in my Java Spring controller:
response.sendRedirect("redirect:mypage.html?pdfError=true");
The "redirect:" prefix is something you can use when returning the view name from a controller. In the sendDirect() call it only adds confusion. Removing "redirect:" fixed it.
I'm having intermittent problems trying to upload files using ajaxfileupload.js. Most times the request has the correct payload, as in A). But sometimes the request gets sent off without the filename (and contents), as seen in this pastebin (B).
It seems to be similar to this problem. This post also talks about the problem. But I'm pretty sure I have the correct element ID.
And this post suggests using the jquery.form plugin (here and here). But before I change components (and have to re-engineer), I want to be sure there isn't an easy way to fix my current problem.
A)
…
Request Payload
------WebKitFormBoundaryXOoAbr8cm53B1pGS
Content-Disposition: form-data; name="convert"; filename="some-file.jpg"
Content-Type: application/octet-stream
…
B)
http://pastebin.com/ubEbb9dV
Has anyone had this problem before? Is there a way to avoid it?
Thanks
EDIT:
So this is how I'm calling the function: i) the inputId passed in definitely exists, and ii) the file selected definitely exists on the file system. And this works most of the time. But now that I think about it, I'm using this plugin in conjunction with the jquery.jeditable.js plugin. Could that or any other plugin be clobbering some functions in ajaxfileupload?
$.ajaxFileUpload (
{
url: '/api/upload/image',
secureuri: false,
fileElementId: inputId,
success: successFn,
error: errorFn
}
);
By default jQuery uses GET as the request method. In the case of the question you reference here, Steven is not setting the method property (jQuery's type option) correctly for the Ajax request. What I want to see is how you are sending the form (your JavaScript/jQuery code).
I have some JS files generated with PHP, named for example "my_file.js.php" and served with the appropriate content type for JavaScript.
In FF, IE7, Chrome, etc. everything works fine; these files are correctly loaded.
But in IE6 things are more complicated:
- on first load of the page, the file is not loaded and consequently I get some JS errors on my page
- if I do a page refresh, everything is OK
Has anyone experienced strange behaviour like this?
(It seems clearly linked to the fact that the files are not "pure" .js files, because my other .js files are correctly loaded.)
I got it !
After all, the problem was not linked to JS, but to my PHP.
I have a call to session_start, and it seems that the "no-store" header sent by this call was not handled very well by IE6.
I added session_cache_limiter('none'); before my call to session_start and all is now OK.
For French speakers, here is some more information:
http://www.developpez.net/forums/d619691/php/langage/sessions/header-session_start-sous-ie/#post3691413
Well, it's hard to say without the source, but maybe you could try putting the defer attribute on your script tag. That way your script might get loaded after the body, circumventing the error.