Ajax / Header mismatch?

I'm hoping someone can answer this question for me, I'm not an expert on servers so please excuse me if I'm completely off base.
I'm using the Android WebView (PhoneGap 1.4.1) to make Ajax calls, but I keep getting readyState 4 with status 0 on every call. I've spent the last couple of hours investigating this and I may have figured out why. I used xhaus.com/headers to check my requests and found that in WebView my "Accept" header is:
text/xml, text/html, application/xhtml+xml, image/png, text/plain, */*;q=0.8
however, if I pull up the Android browser and check my header that way, I see that my "Accept" header is:
application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, image/png, */*;q=0.5
I checked the server that is providing the XML and found that in the response headers "Content-Type" is set to:
application/xml
My first question is: WebView doesn't seem to accept the "application/xml" type, so could this be the reason for my issue? Or am I completely off base here?
Second question: Is there anything I can do on the client side to fix this, or will the server admin need to make the change? I am using GET to make the request.
Third question: Is this normal? Why would WebView and the browser have this sort of mismatch?
My app has been tested on 10+ handsets and only 2 have this issue... Very strange.
Thank you,

I may be wrong, but it sounds like you're calling a web service? Some services support multiple calling strategies depending on the Content-Type header that you pass.
For example SharePoint web services support both SOAP 1.0 (if Content-type is sent as "text/xml; charset=utf-8") and SOAP 1.2 (if Content-type is set as "application/soap+xml").
In your case I would try setting a content-type of "text/xml; charset=utf-8" and see what happens.
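With a plain XMLHttpRequest, that would look something like the minimal sketch below (the URL is a placeholder for the service in question):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://example.com/service.xml', true); // placeholder URL
// Override the WebView defaults discussed above:
xhr.setRequestHeader('Accept', 'application/xml, text/xml');
xhr.setRequestHeader('Content-Type', 'text/xml; charset=utf-8');
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
        console.log(xhr.status, xhr.responseText);
    }
};
xhr.send(null);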
As for the "Accept" header: if the server response isn't acceptable based on the passed value, it should return a status of 406 (Not Acceptable). It may simply be a bug in the server, or in any of the steps in between, that it doesn't. Using a tool like Fiddler you should be able to recreate both requests (both variations on "Accept") and see exactly what the server responds with; that may be the easiest way to truly get to the bottom of things.

How to stop NodeJS "Request" module changes request when using proxy

Sorry if this comes off as confusing.
I have written a script using the NodeJS request module that runs and performs a function on a website, then returns with the data. This script works perfectly fine when I do not use a proxy by setting it to false. (This is a task that is not allowed to be done with Selenium/Puppeteer.)
proxy: false
However, when I set a (working) proxy, it fails to perform the same task and is detected by the website's firewall/anti-bot software.
proxy: http://xx.xxx.xx.xx:3128
Some things to note:
I have tried many (20+) different proxy providers (Residential and Datacenter) and they all have this issue
The issue does not occur if that proxy is set globally on my system
The issue does not occur if that proxy is set in a chrome extension
The SSL cipher suites do not match Chrome but they still don't match when not using a proxy so I assume that isn't the issue
It is very important to keep consistency in the header order
The question basically is: does the request module change anything when using a proxy, such as the header order?
Here is an image of what happens when it passes/fails. The only difference between the two requests is the proxy: one request is made with it, one without.
const options = {
    url: url,
    simple: false,
    forever: true,
    resolveWithFullResponse: true,
    gzip: true,
    headers: {
        'Host': 'www.sitename.com',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    },
    method: 'GET',
    jar: globalJar,
    followRedirect: false,
    followAllRedirects: false,
};
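For context, given the simple and resolveWithFullResponse options above, the request itself would be issued request-promise style, along these lines (a sketch, not the poster's full script):
const rp = require('request-promise');

rp(options)
    .then(response => console.log(response.statusCode))
    .catch(err => console.error(err));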
After deactivating my old account I wanted to come back and give an actual answer to this question, now that I fully understand the answer. What I was asking one year ago was not possible: the antibot was fingerprinting me through the TLS ClientHello (and even slightly at the TCP/frame level).
To start, I wrote a wrapper called request-curl which wrapped the libcurl/curl binaries into a single library with the same format as request-promise. This gave me much more control over the request (preventing encoding, HTTP/2 and proxy support, and further session/TLS control), but it still only let me reach a mediocre rank of the 687th most popular ClientHello (https://client.tlsfingerprint.io:8443/). It wasn't good enough.
I had to move language. NodeJS is too high-level a language to allow for really deep control (I had to modify packets being sent from Layer 3). So, as the answer to my question:
This is not yet possible to do in NodeJS - let alone with the now unmaintained request.js library.
For anyone reading this: if you want to forge perfect requests to bypass antibot security, you must move to a different language. I recommend utls in Golang or BouncyCastle in C#. Godspeed to you, as it took me a year to really know how to do this. Even then, there are more internal issues these languages have and features they do not yet support (Go doesn't support 'basic' header ordering, you need to monkey-patch/modify internals, utls doesn't easily support proxies). The list goes on and on.
If you're not already too deep into it, it's one hell of a rabbit hole and I recommend you do not enter it.
According to the proxies documentation of the request module:
By default, when proxying http traffic, request will simply make a standard proxied http request. This is done by making the url section of the initial line of the request a fully qualified url to the endpoint.
Instead, you can use an HTTP tunnel by setting:
tunnel: true
in the request module's options.
It could be that in your case you are making a standard proxied HTTP request, whereas when the proxy is set globally on your system or in a Chrome extension an HTTP tunnel is created.
From the documentation:
Note that, when using a tunneling proxy, the proxy-authorization header and any headers from custom proxyHeaderExclusiveList are never sent to the endpoint server, but only to the proxy server.
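As a sketch, the tunneling variant of the options from the question would look something like this (sitename.com and the proxy address are the placeholders used above):
const request = require('request');

request({
    url: 'http://www.sitename.com/',   // placeholder from the question
    proxy: 'http://xx.xxx.xx.xx:3128', // placeholder from the question
    tunnel: true,                      // force an HTTP CONNECT tunnel instead of a plain proxied request
    method: 'GET'
}, (err, response, body) => {
    if (err) return console.error(err);
    console.log(response.statusCode);
});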
There are some scenarios that I can think of:
The proxy is actually adding some headers to the final request (in order to identify you to the server)
The website you're trying to reach has your proxy IPs blacklisted (public/paid ones?)
It really depends on why you need to use that proxy:
Is it because of network restrictions?
Is it because you want to hide the original request address?
Also, if you have control over the proxy server, can you log the requests being made to the final server?
My suggestion
Try writing your own proxy (a reverse one) and host it somewhere. Instead of requesting https://target.com, do the request to your http[s]://proxy.com and let the reverse proxy do the work.
Also, remember to disable the X-Forwarded-* headers in the implementation, as they will change the request headers.
Reference for node.js implementation:
https://github.com/nodejitsu/node-http-proxy
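A minimal sketch of such a reverse proxy with node-http-proxy (the target URL and port are assumptions):
const httpProxy = require('http-proxy');

// Everything that hits this server is forwarded to the real target.
httpProxy.createProxyServer({
    target: 'https://target.com', // placeholder target from above
    changeOrigin: true,           // rewrite the Host header to match the target
    xfwd: false                   // do not add X-Forwarded-* headers to the forwarded request
}).listen(8000);                  // assumed port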
Note: let me know about the questions I asked in the comments
You're using the http scheme for your request, but if the web server redirects http to https, and the proxy server is not configured to follow redirects (to https), then the problem might simply be about the scheme, i.e. the URL you enter.
So the proxy has to be configured to follow redirects, or the URL has to be checked manually in case of failures and adjusted if there is a redirect.
Here you can read about redirects on one proxy-server (Apache Traffic Server), the scenario there includes more redirects than I described above:
https://docs.trafficserver.apache.org/en/4.2.x/admin/reverse-proxy-http-redirects.en.html#handling-origin-server-redirect-responses
If you still encounter problems the server-logs of the proxy-server would be helpful.
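To check for such a redirect manually, a hedged sketch with the request module (the URL is a placeholder):
const request = require('request');

request({
    url: 'http://target.com/',  // placeholder
    followRedirect: false       // surface 3xx responses instead of following them
}, (err, response) => {
    if (err) return console.error(err);
    if (response.statusCode >= 300 && response.statusCode < 400) {
        // e.g. an http -> https redirect: retry against the Location given
        console.log('Redirected to:', response.headers.location);
    }
});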
EDIT:
According to the page @Jannes Botis linked, there exist still more proxy settings that might support or disrupt the desired functionality, so the whole issue is perhaps about configuring the proxy server correctly. Here are a few settings that are directly related to redirects:
followRedirect - follow HTTP 3xx responses as redirects (default: true). This property can also be implemented as function which gets response object as a single argument and should return true if redirects should continue or false otherwise.
followAllRedirects - follow non-GET HTTP 3xx responses as redirects (default: false)
followOriginalHttpMethod - by default we redirect to HTTP method GET. you can enable this property to redirect to the original HTTP method (default: false)
maxRedirects - the maximum number of redirects to follow (default: 10)
removeRefererHeader - removes the referer header when a redirect happens (default: false). Note: if true, referer header set in the initial request is preserved during redirect chain.
It's quite possible that other settings of the proxy-server have impact on fail or success of your scenario too.

Why are my custom headers giving an "Adapter operation failed" error?

Aloha! Today, I'm trying to add custom headers to each request to my backend.
Playing with my DS.RESTAdapter, I already tried:
The 3 headers: solutions suggested in the official guide.
The 2 ajax: approaches proposed around there.
And 2 jQuery workarounds (based on $.ajaxPrefilter and $.ajaxSetup) that I found there.
Until now, my only result was this very obscure "Adapter operation failed" error:
{
    details: "",
    status: 0,
    title: "The backend responded with an error"
}
I know that:
My backend behaves well and returns a 200 status (I tested sending the request via cURL).
Strangely, removing my adapter's host setting allows the request to be sent, but obviously at the wrong URL.
My problem is not a CSP issue as I'm currently running both backend & frontend locally.
According to my debugging and to my Network Inspector tab, the AJAX request is just never sent (XHR.readyState is stuck at 0).
Has somebody already faced this?
Any help would be really lovely!
Ember 1.13.11
Ember Data 1.13.15
jQuery 1.11.3
EDIT: Magic minimal app reproducing the bug is out here!
Hope you'll enjoy it! And because I love you so much, I also offered a demo API endpoint on my server. Details in the FM!
BONUS! Do you know what is the coolest thing to put in a clipboard?
git clone https://github.com/imbrou/ember-data-headers-demo.git
Yeeeeeeha! (-:
Usually "Adapter operation failed" error occurs because your application is having problems connecting to the backend, usually DS.RESTAdapter is not correctly setup, make sure your host and namespace are correct.
Example:
export default DS.RESTAdapter.extend({
  host: 'http://193.137.170.210:8080',
  namespace: '/api'
});
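And the headers: approach from the official guide looks roughly like this (the header names here are only illustrative):
export default DS.RESTAdapter.extend({
  host: 'http://193.137.170.210:8080',
  namespace: '/api',
  headers: {
    'API_KEY': 'secret key',       // illustrative
    'ANOTHER_HEADER': 'some value' // illustrative
  }
});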
Solved!
My backend was not sending the correct CORS headers.
The tricky thing is that, for an unknown reason, my version of Firefox (Developer Edition...) didn't display the failing OPTIONS request in my Network Inspector at the time of my debugging. I thus had no debugging information at all there.
I could only observe the failing preflight using... Wireshark!
It may have been a bug solved in a Christmas update, as I can't reproduce it today. Too bad...
Anyway, in desperation, I linked 3 screenshots:
No-preflight example: no backend security (no "authorization" token).
Working example: the "authorization" header is requested by client, and allowed by server in the response during the preflight.
Failing example: the "authorization" header is requested by the client, BUT not allowed by the server.
Hope it helps, thanks @Vítor for your support!
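For anyone facing the same thing: the backend has to allow the custom header in its preflight response. A minimal sketch, assuming an Express backend and an Ember CLI dev server on localhost:4200 (both assumptions, not the poster's actual stack):
const express = require('express');
const app = express();

app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:4200'); // assumed frontend origin
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Authorization, Content-Type'); // allow the custom header
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // answer the preflight and stop here
  }
  next();
});

app.listen(8080); // assumed port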

Http-Method changes from POST to OPTIONS when changing Content-Type

I am using the Closure Library to do a simple POST. I think XhrIo should work because, from my machine, when I use any other REST client, like the Firefox add-on RESTClient or Chrome's Simple REST Client, I can make a POST request to the server with content type application/json.
But from my application I am unable to make a POST.
I am using the following code
xhr = new goog.net.XhrIo();
xhr.send('http://myhost:8181/customer/add', 'POST', goog.json.serialize(data));
If I leave the headers at their defaults, I get this:
Encoding: UTF-8
Http-Method: POST
Content-Type: application/x-www-form-urlencoded;charset=UTF-8
If I try to change the header by passing {'content-type':'application/json'} as the 4th parameter, the header changes to
Http-Method: OPTIONS
Content-Type:
Shouldn't I be able to change headers appropriately with the Closure Library, just as RESTClient does with XMLHttpRequest using jQuery?
How else can the header be altered to make it appear like this:
Encoding: UTF-8
Http-Method: POST
Content-Type: application/json;charset=UTF-8
Appreciate any help
Eddie
When you add a custom header to a cross-origin XHR, most browsers will do a preflight request, which is the OPTIONS method that you are seeing. There is no way to circumvent this if you are adding custom headers, unfortunately. The POST will be sent after the OPTIONS.
This article explains the OPTIONS request a bit. I ran into issues with the preflight a while back, if that is any help.
If you have specific issues with the OPTIONS request you should edit your question to include them; otherwise, this is expected behavior.
FWIW mine also failed to update the type when I specified...
{'content-type':'application/json'}
However, if I corrected the case to
{'Content-Type':'application/json'}
... it worked.
Go figure.
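Putting that together with the snippet from the question, the working call would be along these lines:
var xhr = new goog.net.XhrIo();
xhr.send(
    'http://myhost:8181/customer/add',
    'POST',
    goog.json.serialize(data),
    {'Content-Type': 'application/json;charset=UTF-8'} // note the casing
);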
If you pass Content-Type on an authorization request, it will convert the POST method to an OPTIONS (preflight) method. So when using OAuth and passing an authorization token, you do not need Content-Type.
So do not pass Content-Type on authorization requests, and your POST method won't change to OPTIONS.

How to serve a pre-flight request from a web service

I have a web service which works over GET. To access this web service, some custom headers need to be passed.
When I try to access the web service from JavaScript with the GET method, the request method is getting changed to OPTIONS (the domain is different).
I read some articles and found out that a request with custom headers will be pre-flighted, and in that case, before the actual method call, a request with the OPTIONS method will be made to the server.
But my problem is that after the OPTIONS call, the real method (i.e. GET) is not being invoked.
The OPTIONS call returns a 401 status.
I suspect this is because my web service supports GET only. How can I solve the problem?
Kindly help.
(My code is working fine with IE but not with other browsers, e.g. Chrome.)
Two things to check for (with no idea what your server-side language / technique is):
Are you including OPTIONS as a valid method in your Access-Control-Allow-Methods? Example:
Access-Control-Allow-Methods: GET, OPTIONS
Are the custom headers that your request is sending being returned to the browser as allowed?
Example:
Access-Control-Allow-Headers: X-PINGOTHER
The remote server has to return both of these (and most definitely the second one) before any secure, standards-compliant browser (i.e. not older versions of IE) will allow the non-origin response to come through.
So, if you wanted to implement this at the HTTP server level and keep your web-service portable, you might try the following:
We'll assume your web-service URL is http://example.org/service and that the path to service is /srv/www/service
If you are running Apache 2.0, the directive to append headers is Header add; on 2.2, use Header set.
So, you would modify /srv/www/service/.htaccess with:
Header set Access-Control-Allow-Methods "GET, OPTIONS"
Header set Access-Control-Allow-Headers "X-MY_CUSTOM_HEADER1,X-MY_CUSTOM_HEADER2"
Header set Access-Control-Allow-Origin "*"
Of course, the mod_headers Apache module needs to be enabled for the above to work. Also, setting the allow-origin to the wildcard is risky, and it will actually cause the request to fail if you are sending the Access-Control-Allow-Credentials: true header (you can't use wildcards in that case). Also, with the SetEnvIf mod for Apache, you could fine-tune the htaccess file to only return the headers when appropriate, rather than for all requests to that directory.

Hard refresh and XMLHttpRequest caching in Internet Explorer/Firefox

I make an Ajax request in which I set the response cacheability and last modified headers:
if (!String.IsNullOrEmpty(HttpContext.Current.Request.Headers["If-Modified-Since"]))
{
    HttpContext.Current.Response.StatusCode = 304;
    HttpContext.Current.Response.StatusDescription = "Not Modified";
    return null;
}
HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.Public);
HttpContext.Current.Response.Cache.SetLastModified(DateTime.UtcNow);
This works as expected. The first time I make the Ajax request, I get 200 OK. The second time I get 304 Not Modified.
When I hard refresh in Chrome (Ctrl+F5), I get 200 OK - fantastic!
When I hard refresh in Internet Explorer/Firefox, I get 304 Not Modified. However, every other resource (JS/CSS/HTML/PNG) returns 200 OK.
The reason is that the "If-Modified-Since" header is sent for XMLHttpRequests regardless of a hard refresh in those browsers. I believe Steve Souders documents it here.
I have tried setting an ETag and conditioning on "If-None-Match", to no avail (it was mentioned in the comments on Steve Souders' page).
Has anyone got any gems of wisdom here?
Thanks,
Ben
Update
I could check the "If-Modified-Since" against a stored last modified date. However, hopefully this question will help other SO users who find the header to be set incorrectly.
Update 2
Whilst the request is sent with the "If-Modified-Since" header each time, Internet Explorer won't even make the request if an expiry isn't set or is set to a future date. Useless!
Update 3
This might as well be a live blog now. Internet Explorer doesn't bother making the second request when the host is localhost. Using a real IP or the loopback address will work.
Prior to IE10, IE does not apply the Refresh Flags (see http://blogs.msdn.com/b/ieinternals/archive/2010/07/08/technical-information-about-conditional-http-requests-and-the-refresh-button.aspx) to requests that are not made as part of loading the document.
If you want, you can adjust the target URL to contain a nonce to prevent the cached copy from satisfying a future request. Alternatively, you can send max-age=0 to force IE to conditionally revalidate the resource before each reuse.
As for why the browser reuses a cached resource that didn't specify a lifetime, please see http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
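A sketch of the nonce approach on the client side (the endpoint and the parameter name '_' are arbitrary):
// Append a unique query parameter so IE cannot satisfy the request from its cache.
var url = '/my/endpoint?_=' + new Date().getTime(); // placeholder endpoint
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.send(null);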
The solution I came upon for consistent control was managing the cache headers for all request types.
So, I forced standard requests to behave the same as XMLHttpRequests, which meant telling IE to use the following cache policy: Cache-Control: private, max-age=0.
For some reason, IE was not honoring headers for various request types. For example, my cache policy for standard requests defaulted to the browser's, and for XMLHttpRequests it was set to the aforementioned control policy. However, making a request to something like /url as a standard GET request rendered the result properly. Unfortunately, making the same request to /url as an XMLHttpRequest would not even hit the server, because the GET request was cached and the XMLHttpRequest was hitting the same URL.
So, either force your cache policy on all fronts or make sure you're using different access points (URIs) for your request types. My solution was the former.
