I'm using Node.js' SOAP client (https://github.com/vpulim/node-soap) for some extensive integration with another service. However, this is not the only integration running on my server, and I would like to make my calls from another network interface (i.e. a different "external IP").
My software layer is pretty much complete, but this is something I had not predicted. Can I possibly do this with some setting, or maybe some Node.js launch argument?
I was thinking about a locally running proxy server (even in the same thread as the app), but - if possible - I'd welcome some more elegant option.
OK, I've managed to achieve this by using the request module (https://github.com/request/request) and simply changing one line of code:
const soap = require('soap');
const request = require('request');

soap.createClientAsync('https://example.com/service.php?wsdl', {
    // hand node-soap a pre-configured request instance so every HTTP
    // call it makes is bound to the chosen local network interface
    request: request.defaults({
        localAddress: 'xxx.xxx.xxx.xxx', // IP of the interface to send from
        connection: 'keep-alive'
    })
})
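For completeness, a usage sketch: the promise resolves to a client whose operations then go out through the bound interface. The method name below is a placeholder for whatever your WSDL defines:

// assuming `options` holds the options object from the snippet above
soap.createClientAsync('https://example.com/service.php?wsdl', options)
    .then((client) => client.MyMethodAsync({ arg: 'value' })) // MyMethod is hypothetical
    .then(([result]) => console.log(result));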
Sorry if this comes off as confusing.
I have written a script using the NodeJS request module that runs and performs a function on a website, then returns with the data. This script works perfectly fine when I do not use a proxy by setting it to false. (This is a task that is not allowed to be done with Selenium/Puppeteer.)
proxy: false
However, when I set a (working) proxy, it fails to perform the same task and is detected by the website's firewall/antibot software.
proxy: 'http://xx.xxx.xx.xx:3128'
Some things to note:
I have tried many (20+) different proxy providers (Residential and Datacenter) and they all have this issue
The issue does not occur if that proxy is set globally on my system
The issue does not occur if that proxy is set in a chrome extension
The SSL cipher suites do not match Chrome, but they still don't match when not using a proxy, so I assume that isn't the issue
It is very important to keep consistency in the header order
The question basically is: does the request module change anything, such as the header order, when using a proxy?
Here is an image of what happens when it passes/fails.
The only difference that causes this to fail is the proxy: one request is made with it, one without.
const options = {
    url: url,
    simple: false,
    forever: true,
    resolveWithFullResponse: true,
    gzip: true,
    headers: {
        'Host': 'www.sitename.com',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    },
    method: 'GET',
    jar: globalJar,
    followRedirect: false,
    followAllRedirects: false,
};
After deactivating my old account, I wanted to come back and give an actual answer to this question now that I fully understand the answer. What I was asking one year ago was not possible. The antibot was fingerprinting me through the TLS ClientHello (and even slightly at the TCP/frame level).
To start, I wrote a wrapper called request-curl which wrapped libcurl/curl binaries into a single library with the same format as request-promise. This gave me much more control over the request (preventing encoding, HTTP/2 and proxy support, and further session/TLS control), but it still only let me reach a mediocre rank of the 687th most popular ClientHello (https://client.tlsfingerprint.io:8443/). It wasn't good enough.
I had to move language. NodeJS is too high-level a language to allow for really deep control (I had to modify packets being sent at Layer 3). So, as the answer to my question:
This is not yet possible to do in NodeJS, let alone with the now-unmaintained request.js library.
For anyone reading this: if you want to forge perfect requests to bypass antibot security, you must move to a different language. I recommend utls in Golang or BouncyCastle in C#. Godspeed to you, as it took me a year to really know how to do this. Even then, there are more internal issues these languages have and features they do not yet support (Go doesn't support 'basic' header ordering, you need to monkey-patch/modify internals, etc.; utls doesn't easily support proxies). The list goes on and on.
If you're not already too deep into it, it's one hell of a rabbit hole and I recommend you do not enter it.
According to the proxies documentation of the request module:
By default, when proxying http traffic, request will simply make a standard proxied http request. This is done by making the url section of the initial line of the request a fully qualified url to the endpoint.
Instead, you can use an HTTP tunnel by setting:
tunnel: true
in the request module's proxy settings.
It could be that in your case you are making a standard proxied HTTP request, whereas when the proxy is set globally on your system or in a Chrome extension, an HTTP tunnel is created.
From the documentation:
Note that, when using a tunneling proxy, the proxy-authorization header and any headers from custom proxyHeaderExclusiveList are never sent to the endpoint server, but only to the proxy server.
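For illustration, a minimal sketch of forcing the tunnel (the URL and proxy address are placeholders taken from the question):

const request = require('request');

request({
    url: 'https://www.sitename.com/',
    proxy: 'http://xx.xxx.xx.xx:3128',
    tunnel: true // force an HTTP CONNECT tunnel instead of a plain proxied request
}, (err, res, body) => {
    if (err) return console.error(err);
    console.log(res.statusCode);
});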
There are some scenarios that I can think of:
Proxy is actually adding some headers to the final request (in order to identify you to the server)
The website you're trying to reach has your proxy IPs blacklisted (public/paid ones?)
It really depends on why you need to use that proxy
Is it because of network restrictions?
Is it because you want to hide the original request address?
Also, if you have control over the proxy server, can you log the requests being made to the final server?
My suggestion
Try writing your own proxy (a reverse one) and host it somewhere. Instead of making requests to https://target.com, do a request to your http[s]://proxy.com/ and let the reverse proxy do the work (see the sketch after the reference below).
Also, remember to disable the X-Forwarded-* headers in the implementation, as they will change the request headers.
Reference for node.js implementation:
https://github.com/nodejitsu/node-http-proxy
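A minimal sketch of that suggestion using node-http-proxy (the target URL is a placeholder; treat this as a starting point, not a hardened implementation):

const httpProxy = require('http-proxy');

// forward everything to the target; xfwd stays false so no
// X-Forwarded-* headers are added to the proxied request
httpProxy.createProxyServer({
    target: 'https://target.com',
    changeOrigin: true, // rewrite the Host header to match the target
    xfwd: false
}).listen(8000);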
Note: let me know in the comments about the questions I asked above.
You're using the http scheme for your request, but if the webserver redirects http to https and the proxy server is not configured to accept redirects (to https), then the problem might simply be about the scheme, i.e. the URL you enter.
So the proxy has to be configured to accept redirects, or the URL has to be checked manually in case of failure and then adjusted if there is a redirect.
Here you can read about redirects on one proxy server (Apache Traffic Server); the scenario there includes more redirects than I described above:
https://docs.trafficserver.apache.org/en/4.2.x/admin/reverse-proxy-http-redirects.en.html#handling-origin-server-redirect-responses
If you still encounter problems the server-logs of the proxy-server would be helpful.
EDIT:
According to the page @Jannes Botis linked, there exist still more proxy settings that might be able to support or disrupt the desired functionality, so the whole issue is perhaps about configuring the proxy correctly. Here are a few settings that are directly related to redirects:
followRedirect - follow HTTP 3xx responses as redirects (default: true). This property can also be implemented as function which gets response object as a single argument and should return true if redirects should continue or false otherwise.
followAllRedirects - follow non-GET HTTP 3xx responses as redirects (default: false)
followOriginalHttpMethod - by default we redirect to the HTTP method GET; you can enable this property to redirect to the original HTTP method (default: false)
maxRedirects - the maximum number of redirects to follow (default: 10)
removeRefererHeader - removes the referer header when a redirect happens (default: false). Note: if true, referer header set in the initial request is preserved during redirect chain.
It's quite possible that other settings of the proxy-server have impact on fail or success of your scenario too.
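For illustration, a sketch showing the options above on a request() call (URL and proxy are placeholders; the values shown are just the documented defaults made explicit):

const request = require('request');

request({
    url: 'http://www.sitename.com/',
    proxy: 'http://xx.xxx.xx.xx:3128',
    followRedirect: true, // follow GET 3xx responses
    followAllRedirects: false, // also follow non-GET 3xx responses if true
    followOriginalHttpMethod: false, // redirect with GET rather than the original method
    maxRedirects: 10,
    removeRefererHeader: false
}, (err, res) => {
    if (err) return console.error(err);
    console.log(res.statusCode, res.request.uri.href); // final URL after redirects
});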
I am implementing a virtual agent using IBM Watson services. My application is developed using jQuery, AngularJS, and Java. Currently I am calling the Watson services from the middle layer, that is, Java. But I want to avoid that and call them directly from JavaScript. When I call from JavaScript using an XMLHttpRequest, I am getting a CORS error. How can I solve this?
Below is my code:
var username = "uid";
var password = "pwd";
var xhr = new XMLHttpRequest();
xhr.open('GET', 'url');
//xhr.withCredentials = true;
xhr.setRequestHeader("Access-Control-Allow-Headers", "Access-Control-Allow-Origin,Content-Type, application/json, Authorization");
xhr.setRequestHeader("Access-Control-Allow-Origin", "*");
xhr.setRequestHeader('Access-Control-Allow-Credentials', '*');
xhr.setRequestHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
xhr.setRequestHeader('Content-Type', undefined);
xhr.setRequestHeader('Authorization', 'Basic ' + btoa(username + ":" + password));
xhr.send('"query":"hi"');
The IBM Watson services don’t yet support getting cross-origin requests from browser-based apps.
See the answer at Can't access IBM Watson API locally due to CORS on a Rails/AJAX App:
We don't support CORS, we are working on it but in your case Visual Recognition is not supported yet.
That implies some of the services support CORS but I guess the one you’ve tried isn’t one of them.
So, other than what you say you're doing now (accessing the services from your server-side Java layer instead), your only option for getting at the services from JavaScript code running in a web app is to go through a proxy: either set up your own server-side proxy with https://github.com/Rob--W/cors-anywhere or such, or send your requests through an open CORS proxy like https://cors-anywhere.herokuapp.com/ (though it's unlikely you'll want to do that in the case where your requests include any kind of authentication token that you don't want to expose to the operator of a third-party proxy service).
The way such proxies work is: instead of using https://gateway.watsonplatform.net/some/api as the request URL that you specify in your client-side JavaScript code, you specify the proxy URL, like https://cors-anywhere.herokuapp.com/https://gateway.watsonplatform.net/some/api, and the proxy sends the actual request to the service, gets back the response, adds the needed Access-Control-Allow-Origin response header and other headers to it, and passes it on.
So that response with the CORS headers included is what the browser sees.
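For illustration, a minimal sketch of the pattern (both URLs are illustrative):

// same GET as before, but routed through the CORS proxy by prefixing the URL
var xhr = new XMLHttpRequest();
xhr.open('GET',
    'https://cors-anywhere.herokuapp.com/' +
    'https://gateway.watsonplatform.net/some/api');
xhr.onload = function () {
    // the proxied response now carries Access-Control-Allow-Origin,
    // so the browser lets this code read it
    console.log(xhr.responseText);
};
xhr.send();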
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS has more details about how CORS works, but the main thing to know is that the browser is the CORS enforcement point. So in the case of the Watson services, the browser will actually get the response from the Watson API (you can use devtools in the browser to see the response), but the browser will expose the response to your client-side JavaScript code only if the response includes the Access-Control-Allow-Origin response header, which indicates that the server sending the response has opted in to receiving cross-origin requests from client-side JavaScript running in web apps.
So that’s why, regardless, all the xhr.setRequestHeader("Access-Control-Allow- lines in your XHR code snippet above need to just be removed—because Access-Control-Allow-* headers are response headers, not request headers; sending them in a request to a server has no effect on CORS, because as noted above, the browser’s the CORS enforcement point, not the server.
So it’s not the case that the server receives some request from a browser and says, OK I see this request has the right headers, so I’ll allow it. Instead the server allows all requests from browsers, just as it allows all requests from non-browser tools like your Java code or curl or Postman or whatever (as long as they are authenticated of course) and sends a response.
The difference is, when a non-browser-based app receives a response, it doesn’t refuse to let you access the response if it lacks the Access-Control-Allow-Origin header. But the browser does refuse to let your client-side JavaScript web-app code access the response if it lacks that.
You might also want to look at some of the Watson SDKs available on GitHub.
Some Watson services support CORS, others do not. However, when accessing over CORS, you must use an Auth Token rather than a username/password combination*.
This is a partial list of which services support CORS: https://github.com/watson-developer-cloud/node-sdk/tree/master/examples/webpack#important-notes
Here are a couple of examples using the Node.js SDK:
Webpack: https://github.com/watson-developer-cloud/node-sdk/tree/master/examples/webpack
Browserify: https://github.com/watson-developer-cloud/node-sdk/tree/master/examples/browserify
And, a whole host of examples with the Speech JavaScript SDK:
https://watson-speech.mybluemix.net/
* There are a couple of services that use API keys rather than username/password combinations. In that case, you can use the API key directly from client-side code if the service supports CORS.
Take a look at this tutorial on IBM developerWorks on using Watson's Question and Answer service:
http://www.ibm.com/developerworks/cloud/library/cl-watson-qaapi-app/index.html#N10229
I am trying to build a quick demo site, but I do not have control over the server I am trying to connect to. Here is the code I am using to build it with AngularJS. I am serving the file through a simple Python HTTP server and viewing it at localhost:8000.
var retrieveAppliances = function () {
console.log('Attempting to retrieve appliance list.');
var requestUrl = '****';
$http({
method: 'GET',
url: requestUrl,
})
.then(function (response) {
console.log(response);
});
};
retrieveAppliances();
I have read in multiple places to try switching the method to JSONP, but doing so resulted in a parsing error.
While I have considered trying to build a server.js file and running NodeJS with it, I have been unsuccessful in learning the basics of making an AJAX request and proxying it to my app.js.
I will greatly appreciate any help that someone may be able to give me, with clear and easy to follow steps.
If you're running an Ajax call to a different origin (e.g. different host, port or protocol) and the server at that origin does not have support for cross origin requests, then you cannot fix that from your client. There is nothing you can do from the client.
If the server supported JSONP, you could use that, but that also requires specific server support.
The only solutions from a browser web page are:
CORS support on the target server.
JSONP (also requires support on the target server).
Set up your own server that you do have access to (either on your existing page domain or with CORS) and then have that server get the file/data for you and proxy it back to you. You can either write your own proxy or deploy a pre-built proxy.
Find some existing third party proxy service that you can use.
If you're interested in making your own node.js proxy, you can see a simple example here: How to create a simple http proxy in node.js?.
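Along the lines of option 3 above, here is a minimal sketch of such a proxy (the target host is a placeholder; error handling is kept to the basics):

const http = require('http');
const https = require('https');

// fetch the remote resource server-side and return it with a CORS
// header so the browser will let client-side code read the response
http.createServer((req, res) => {
    https.get('https://api.example.com' + req.url, (upstream) => {
        res.writeHead(upstream.statusCode, {
            'Content-Type': upstream.headers['content-type'] || 'application/json',
            'Access-Control-Allow-Origin': '*'
        });
        upstream.pipe(res);
    }).on('error', (err) => {
        res.writeHead(502);
        res.end(String(err));
    });
}).listen(8001);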
Aloha! Today, I'm trying to add custom headers to each request to my backend.
Playing with my DS.RESTAdapter, I already tried:
The 3 headers: solutions suggested in the official guide.
The 2 ajax: approaches proposed around there.
And 2 jQuery workarounds (based on $.ajaxPrefilter and $.ajaxSetup) that I found there.
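(For reference, the simplest of those, the static headers: property on the adapter, looks roughly like this; the header name and value are placeholders.)

export default DS.RESTAdapter.extend({
    headers: {
        'X-My-Custom-Header': 'some-value' // hypothetical header
    }
});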
Until now, my only result was this very obscure "Adapter operation failed" error:
{
details: "",
status: 0,
title: "The backend responded with an error"
}
I know that:
My backend behaves well and returns a 200 status (I tested sending the request via cURL).
Strangely, removing my adapter's host setting allows the request to be sent, but obviously at the wrong URL.
My problem is not a CSP issue as I'm currently running both backend & frontend locally.
According to my debugging and to my Network Inspector tab, the AJAX request is just never sent (XHR.readyState is stuck at 0).
Has somebody already faced this?
Any help would be really lovely!
Ember 1.13.11
Ember Data 1.13.15
jQuery 1.11.3
EDIT: Magic minimal app reproducing the bug is out here!
Hope you'll enjoy it! And because I love you so much, I also offered a demo API endpoint on my server. Details in the FM!
BONUS! Do you know what is the coolest thing to put in a clipboard?
git clone https://github.com/imbrou/ember-data-headers-demo.git
Yeeeeeeha! (-:
Usually "Adapter operation failed" error occurs because your application is having problems connecting to the backend, usually DS.RESTAdapter is not correctly setup, make sure your host and namespace are correct.
Example:
export default DS.RESTAdapter.extend({
host: 'http://193.137.170.210:8080',
namespace: '/api'
});
Solved !
My backend was not sending the correct CORS headers.
The tricky thing is that, for an unknown reason, my version of Firefox (Developer Edition...) didn't display the failing OPTIONS request in my Network Inspector at the time of my debugging. I thus had no debugging information at all there.
I could only observe the failing preflight using... Wireshark!
It may have been a bug solved in a Christmas update, as I can't reproduce it today. Too bad...
Anyway, in desperation, I linked 3 screenshots:
No-preflight example: no backend security (no "authorization" token).
Working example: the "authorization" header is requested by client, and allowed by server in the response during the preflight.
Failing example: the "authorization" header is requested by the client, BUT not allowed by the server.
Hope it helps. Thanks @Vítor for your support!
I'm trying to use jquery.couch.js to do couch operations in my ember.js app, but I'm having CORS problems, and I have no clue what a good solution is.
It seems to me that couch running on port 5984 would make it basically unusable? Why do requests to different ports cause CORS problems? And how on earth do OTHER people end up getting couch to work? I'm immensely confused, and not sure how to proceed.
My couch instance returns this from curl:
{"couchdb":"Welcome","version":"1.2.0"}
The code I'm unsuccessfully trying to run is this:
$.couch.urlPrefix = "http://127.0.0.1:5984";
$.couch.login({
name: 'name',
password: 'secret'
});
I've modified the urlPrefix part several times to things like localhost, and tried removing the http:// for both versions.
The error it's throwing:
XMLHttpRequest cannot load http://127.0.0.1:5984/_session. Origin http://localhost is not allowed by Access-Control-Allow-Origin.
Help me! I humbly recognize my noobiness for saying this, but how is couchdb even useful if this is built right into the basic functionality?
Oh and I'm including jquery.couch.js like this:
<script src="http://localhost:5984/_utils/script/jquery.couch.js"></script>
Using this version of jquery:
jQuery JavaScript Library v1.10.2
and using jquery migrate because of previous issues:
<script src="http://code.jquery.com/jquery-migrate-1.2.1.js"></script>
Edit
I just now tried to add crossDomain: true, xhrFields: {withCredentials: true} to my login call, to no avail. Exact same error message. I'm clearly missing a core concept.
The message you are seeing refers to the server, not the client. Changes made to the client's call will not, as you reported, change the result.
In CouchDB 1.4 specifically, CORS support must be explicitly enabled and an origins declaration must be made. That said, depending on how you are using your CouchDB instance there are two ways to enable it:
Change the setting in your local.ini directly and restart your instance; see here for more info: http://wiki.apache.org/couchdb/CORS
If you have Futon available, go to Settings, find the setting there, and enable it; in this case no restart is needed.
Update
It seems that the CORS section is not always existent by default, in this case just add it yourself.
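For reference, a minimal sketch of what that section of local.ini can look like (the origin value is a placeholder; adjust it to wherever your app is served from):

[httpd]
enable_cors = true

[cors]
origins = http://localhost
credentials = true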
Hope it helps.
For those who are using Cookie authentication (not password authentication) and are reusing the cookie returned by the CouchDB server in the Ajax request, you still need to do this in your $.ajax() requests to CouchDB:
xhrFields: {withCredentials: true},
This means you have to open the jquery.couch.js file that you sourced from the couch server and manually insert that option into the JavaScript.
CORS didn't work for me without both doing this on the client side and setting "credentials=true" on the server side.
The original jquery.couch.js as it is written right now doesn't support the client side sending Cookies with CORS, so you have to do it yourself until someone opens a ticket to get this fixed.
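For illustration, this is roughly what the patched call needs to contain (a sketch; the URL is a placeholder for your CouchDB endpoint):

$.ajax({
    url: 'http://127.0.0.1:5984/_session',
    type: 'GET',
    xhrFields: { withCredentials: true }, // send the CouchDB session cookie cross-origin
    success: function (data) { console.log(data); }
});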