CouchDB CORS problems - javascript

I'm trying to use jquery.couch.js to do couch operations in my Ember.js app, but I'm having CORS problems, and I have no clue what a good solution is.
It seems to me that couch running on port 5984 would make it basically unusable. Why do requests to a different port cause CORS problems? And how on earth do other people get couch to work? I'm immensely confused and not sure how to proceed.
My couch instance returns this from curl:
{"couchdb":"Welcome","version":"1.2.0"}
The code I'm unsuccessfully trying to run is this:
$.couch.urlPrefix = "http://127.0.0.1:5984";
$.couch.login({
  name: 'name',
  password: 'secret'
});
I've modified the urlPrefix part several times to things like localhost, and tried removing the http:// for both versions.
The error it's throwing:
XMLHttpRequest cannot load http://127.0.0.1:5984/_session. Origin http://localhost is not allowed by Access-Control-Allow-Origin.
Help me! I humbly recognize my noobiness in saying this, but how is CouchDB even useful if this limitation is built right into its basic functionality?
Oh and I'm including jquery.couch.js like this:
<script src="http://localhost:5984/_utils/script/jquery.couch.js"></script>
Using this version of jquery:
jQuery JavaScript Library v1.10.2
and using jquery migrate because of previous issues:
<script src="http://code.jquery.com/jquery-migrate-1.2.1.js"></script>
Edit
I just now tried to add crossDomain: true, xhrFields: {withCredentials: true} to my login call, to no avail. Exact same error message. I'm clearly missing a core concept.

The message you are seeing refers to the server, not the client. Changes made to the client's call will not, as you observed, change the result.
In CouchDB 1.4 specifically, CORS support must be explicitly enabled and an origins declaration must be made. Depending on how you are using your CouchDB instance, there are two ways to enable it:
Change the setting in your local.ini directly and restart your instance; see here for more info: http://wiki.apache.org/couchdb/CORS
If you have Futon available, go to Settings, find the setting there and enable it; in this case no restart is needed.
Update
It seems that the CORS section does not always exist by default; in this case just add it yourself.
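For reference, a minimal sketch of what those sections in local.ini could look like (the origin value is an assumption based on the error message in the question; adjust it to wherever your app is served from):

[httpd]
enable_cors = true

[cors]
origins = http://localhost
credentials = true

Setting credentials = true additionally lets CouchDB accept cookie-authenticated cross-origin requests, which $.couch.login relies on.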
Hope it helps.

For those who are using cookie authentication (not password authentication) and are reusing the cookie returned by the CouchDB server in the Ajax request, you still need to do this in your $.ajax() requests to CouchDB:
xhrFields: {withCredentials: true},
This means you have to open the jquery.couch.js file that you sourced from the couch server and manually insert that option into the JavaScript.
CORS didn't work for me without both doing this on the client side and setting credentials = true on the server side.
The original jquery.couch.js as it is written right now doesn't support the client sending cookies with CORS, so you have to do it yourself until someone opens a ticket to get this fixed.
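As a concrete illustration, a minimal sketch of such a cookie-authenticated cross-origin request (the database name is hypothetical):

// Assumes a session cookie was already obtained via $.couch.login.
// Without withCredentials the browser silently drops the cookie on
// cross-origin requests and CouchDB treats the call as anonymous.
$.ajax({
  url: "http://127.0.0.1:5984/mydb/_all_docs",
  type: "GET",
  xhrFields: { withCredentials: true },
  success: function (data) { console.log(data); },
  error: function (xhr) { console.log("Request failed: " + xhr.status); }
});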


How to stop the NodeJS "Request" module from changing requests when using a proxy

Sorry if this comes off as confusing.
I have written a script using the NodeJS request module that runs, performs a function on a website, and then returns with the data. This script works perfectly fine when I do not use a proxy by setting it to false (this is a task that is not allowed to be done with Selenium/Puppeteer):
proxy: false
However, when I set a (working) proxy, it fails to perform the same task and is detected by the website's firewall/antibot software:
proxy: 'http://xx.xxx.xx.xx:3128'
Some things to note:
I have tried many (20+) different proxy providers (Residential and Datacenter) and they all have this issue
The issue does not occur if that proxy is set globally on my system
The issue does not occur if that proxy is set in a chrome extension
The SSL cipher suites do not match Chrome's, but they still don't match when not using a proxy, so I assume that isn't the issue
It is very important to keep consistency in the header order
The question basically is: does the request module change anything when using a proxy, such as the header order?
Here is an image of what happens when it passes/fails.
The only difference that causes the failure is the proxy: one request is made with it, one without.
url: url,
simple: false,
forever: true,
resolveWithFullResponse: true,
gzip: true,
headers: {
  'Host': 'www.sitename.com',
  'Connection': 'keep-alive',
  'Upgrade-Insecure-Requests': '1',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
  'Accept-encoding': 'gzip, deflate, br',
  'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
},
method: 'GET',
jar: globalJar,
followRedirect: false,
followAllRedirects: false,
After deactivating my old account I wanted to come back and give an actual answer to this question, now that I fully understand the answer. What I was asking one year ago was not possible: the antibot was fingerprinting me through the TLS ClientHello (and even slightly on the TCP/frame level).
To start, I wrote a wrapper called request-curl which wrapped the libcurl/curl binaries into a single library with the same format as request-promise. This gave me much more control over the request (preventing encoding, HTTP/2/proxy support and further session/TLS control), but it still only let me reach a mediocre rank of the 687th most popular ClientHello (https://client.tlsfingerprint.io:8443/). It wasn't good enough.
I had to move language. NodeJS is too high-level a language to allow really deep control (I had to modify packets being sent at Layer 3). So, as the answer to my question:
This is not yet possible to do in NodeJS, let alone with the now unmaintained request.js library.
For anyone reading this: if you want to forge perfect requests to bypass antibot security you must move to a different language. I recommend utls in Golang or BouncyCastle in C#. Godspeed to you, as it took me a year to really learn how to do this. Even then, these languages have more internal issues and features they do not yet support (Go doesn't support 'basic' header ordering, you need to monkey-patch/modify internals etc.; utls doesn't easily support proxies). The list goes on and on.
If you're not already too deep into it, it's one hell of a rabbithole and I recommend you do not enter it.
According to the proxies documentation of the request module:
By default, when proxying http traffic, request will simply make a standard proxied http request. This is done by making the url section of the initial line of the request a fully qualified url to the endpoint.
Instead you can use an HTTP tunnel by setting:
tunnel: true
in the request module proxy settings.
It could be that in your case you are making a standard proxied HTTP request, whereas when the proxy is set globally on your system or in a Chrome extension an HTTP tunnel is created.
From the documentation:
Note that, when using a tunneling proxy, the proxy-authorization header and any headers from custom proxyHeaderExclusiveList are never sent to the endpoint server, but only to the proxy server.
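A minimal sketch of forcing a tunnel with the request module (the proxy address is the placeholder from the question):

const request = require('request');

request({
  url: 'http://www.sitename.com/',
  proxy: 'http://xx.xxx.xx.xx:3128', // placeholder proxy from the question
  tunnel: true, // open a CONNECT tunnel through the proxy instead of a plain proxied GET
  headers: { 'User-Agent': 'Mozilla/5.0' }
}, function (err, res, body) {
  if (err) return console.error(err);
  console.log(res.statusCode);
});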
There are some scenarios that I can think of:
The proxy is actually adding some headers to the final request (in order to identify you to the server)
The website you're trying to reach has your proxy IPs blacklisted (public/paid ones?)
It really depends on why you need to use that proxy:
Is it because of network restrictions?
Is it because you want to hide the original request address?
Also, if you have control over the proxy server, can you log the requests being made to the final server?
My suggestion
Try writing your own proxy (a reverse one) and host it somewhere. Instead of requesting https://target.com, make the request to your http[s]://proxy.com/ and let the reverse proxy do the work.
Also, remember to disable the X-Forwarded-* headers in the implementation, as they will change the request headers.
Reference for node.js implementation:
https://github.com/nodejitsu/node-http-proxy
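A minimal sketch of that idea with node-http-proxy (the target and port are assumptions):

const httpProxy = require('http-proxy');

// Forward everything that hits this server to the real target.
// xfwd: false keeps http-proxy from adding X-Forwarded-* headers,
// which would otherwise alter the request fingerprint.
httpProxy.createProxyServer({
  target: 'https://target.com',
  changeOrigin: true, // rewrite the Host header to match the target
  xfwd: false
}).listen(8000);

// The client then requests http://localhost:8000/ instead of https://target.com/.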
Note: let me know about the questions I asked in the comments.
You're using the http scheme for your request, but if the webserver redirects http to https and the proxy server is not configured to accept redirects (to https), then the problem might simply be the scheme, i.e. the URL you enter.
So the proxy has to be configured to accept redirects, or the URL has to be checked manually in the case of faults and then adjusted in the case of a redirect.
Here you can read about redirects on one proxy-server (Apache Traffic Server), the scenario there includes more redirects than I described above:
https://docs.trafficserver.apache.org/en/4.2.x/admin/reverse-proxy-http-redirects.en.html#handling-origin-server-redirect-responses
If you still encounter problems the server-logs of the proxy-server would be helpful.
EDIT:
According to the page @Jannes Botis linked, there exist still more proxy settings that might support or disrupt the desired functionality, so the whole issue is perhaps about configuring the proxy server correctly. Here are a few settings that are directly related to redirects (a sketch applying them follows the list):
followRedirect - follow HTTP 3xx responses as redirects (default: true). This property can also be implemented as function which gets response object as a single argument and should return true if redirects should continue or false otherwise.
followAllRedirects - follow non-GET HTTP 3xx responses as redirects (default: false)
followOriginalHttpMethod - by default we redirect to HTTP method GET. you can enable this property to redirect to the original HTTP method (default: false)
maxRedirects - the maximum number of redirects to follow (default: 10)
removeRefererHeader - removes the referer header when a redirect happens (default: false). Note: if true, referer header set in the initial request is preserved during redirect chain.
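Applied to the request call from the question, those flags would look like this sketch (the values are illustrative, not a recommendation):

const request = require('request');

request({
  url: 'http://www.sitename.com/',
  proxy: 'http://xx.xxx.xx.xx:3128',
  followRedirect: true,      // follow GET 3xx responses
  followAllRedirects: true,  // also follow non-GET 3xx responses
  maxRedirects: 10,
  removeRefererHeader: false
}, function (err, res) {
  if (err) return console.error(err);
  console.log(res.statusCode, res.request.uri.href); // final URL after redirects
});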
It's quite possible that other settings of the proxy server have an impact on the failure or success of your scenario too.

(CORS) - Cross-Origin Resource Sharing connection issue

I am currently in the process of creating a browser extension for a university project. However, as I was writing the extension I hit a really weird problem. To understand my situation fully, I will need to describe in depth where my issue comes from.
The extension that I am currently working on has to have a feature that checks whether the browser can connect to the internet or not. That is why I decided to create a very simple AJAX request function and, depending on the result returned by this function, determine whether the user has an internet connection.
That is why I created the very simple AJAX function that you can see below this line.
$.ajax({
  url: "https://enable-cors.org/index.html",
  crossDomain: true,
}).done(function() {
  console.log("The link is active");
}).fail(function() {
  console.log("Please try again later.");
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
So far, as long as I understand what it is doing, it is working fine. For example, if you run the function as it is, it will successfully connect to the URL and proceed with the .done(function...; if you change the URL to "index273.index", a file which does not exist, it will proceed with the .fail(function.... I was happy with the result until I decided to test it further and unplugged the cable from my computer. Then, when I launched the extension, it returned the last result from when the browser had a connection to the internet. My explanation for why the function is doing this is that it is caching the URL result, and if it cannot connect it gives the last cached value. My next step to try to solve this was to add "cache: false" after the "crossDomain: true" property, but after that, when I launch the extension, it gives the following error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://enable-cors.org/index?_=1538599523573. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
If someone can help me sort out this problem I would be extremely grateful. I want to apologise in advance for my English, as it is not my native language.
PS: I am trying to implement this function in the popup menu, not in the "content_scripts" category. I am currently testing this under Firefox v62.0.3 (the latest available version as I write this post).
Best regards,
George
Maybe instead of calling the URL to check whether the internet connection is available, you could try using the Navigator object: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/connection
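For example, a minimal sketch using the related navigator.onLine flag (a simpler connectivity check than Navigator.connection, and supported by Firefox):

// true if the browser believes it has a network connection
if (navigator.onLine) {
  console.log("The link is active");
} else {
  console.log("Please try again later.");
}

// React to connectivity changes as they happen:
window.addEventListener("online", function () { console.log("Back online"); });
window.addEventListener("offline", function () { console.log("Connection lost"); });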
Unless the remote server has allowed your origin (allowed CORS), you can't access it, because that is a security issue.
But there are other things you can do:
You can load an image and fire an event when the image is loaded (see the sketch after this list)
You can access remote JSON via a JSONP response
But you can't access other pages, because (unless that server allows it) it's a security issue.
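A minimal sketch of the image probe idea (the image URL is an arbitrary example; any small, reliably hosted image works):

var img = new Image();
img.onload = function () { console.log("The link is active"); };
img.onerror = function () { console.log("Please try again later."); };
// Cache-busting query string so the browser actually hits the network
// instead of serving the cached copy:
img.src = "https://www.google.com/favicon.ico?_=" + Date.now();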

Why are my custom headers giving an "Adapter operation failed" error?

Aloha! Today, I'm trying to add custom headers to each request to my backend.
Playing with my DS.RESTAdapter, I already tried:
The 3 headers: solutions suggested in the official guide.
The 2 ajax: approaches proposed around there.
And 2 jQuery workarounds (based on $.ajaxPrefilter and $.ajaxSetup) that I found there.
Until now, my only result was this very obscure "Adapter operation failed" error:
{
  details: "",
  status: 0,
  title: "The backend responded with an error"
}
I know that:
My backend behaves well and returns a 200 status (I tested sending the request via cURL).
Strangely, removing my adapter's host setting allows the request to be sent, but obviously at the wrong URL.
My problem is not a CSP issue as I'm currently running both backend & frontend locally.
According to my debugging and to my Network Inspector tab, the AJAX request is just never sent (XHR.readyState is stuck at 0).
Has somebody already faced this?
Any help would be really lovely!
Ember 1.13.11
Ember Data 1.13.15
jQuery 1.11.3
EDIT: Magic minimal app reproducing the bug is out here!
Hope you'll enjoy it! And because I love you so much, I also offered a demo API endpoint on my server. Details in the FM!
BONUS! Do you know what is the coolest thing to put in a clipboard?
git clone https://github.com/imbrou/ember-data-headers-demo.git
Yeeeeeeha! (-:
Usually "Adapter operation failed" error occurs because your application is having problems connecting to the backend, usually DS.RESTAdapter is not correctly setup, make sure your host and namespace are correct.
Example:
export default DS.RESTAdapter.extend({
  host: 'http://193.137.170.210:8080',
  namespace: '/api'
});
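For the custom headers themselves, the adapter's headers property (one of the approaches from the official guide the question mentions) would sit alongside host and namespace; a minimal sketch with a hypothetical header:

export default DS.RESTAdapter.extend({
  host: 'http://193.137.170.210:8080',
  namespace: '/api',
  headers: {
    'Authorization': 'Bearer my-token' // hypothetical custom header
  }
});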
Solved!
My backend was not sending the correct CORS headers.
The tricky thing is that, for an unknown reason, my version of Firefox (Developer Edition...) didn't display the failing OPTIONS request in my Network Inspector at the time of my debugging, so I had no debugging information at all there.
I could only observe the failing preflight using... Wireshark!
It may have been a bug solved in a Christmas update, as I can't reproduce it today. Too bad...
Anyway, in desperation, I linked 3 screenshots:
No-preflight example: no backend security (no "authorization" token).
Working example: the "authorization" header is requested by client, and allowed by server in the response during the preflight.
Failing example: the "authorization" header is requested by the client, BUT not allowed by the server.
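For completeness, a hedged sketch of the fix on the server, assuming an Express backend (the framework, origin and allowed headers here are assumptions; the point is that the preflight response must whitelist the custom header):

const express = require('express');
const app = express();

// CORS middleware: answer the OPTIONS preflight before any real handler runs.
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:4200'); // assumed Ember dev server
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // end the preflight here
  next();
});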
Hope it helps, thanks @Vítor for your support!

How to do cross domain ajax in jQuery with dataType 'text'?

In my JavaScript function I make this AJAX call. It works fine, but only when I access the web page from the firebird server. I have the same code on my testing server. The AJAX call asks to download some files, but only the firebird server has its IP registered with our clients so it can scp there. I need to do the same when I access the PHP files from the testing server. All the servers are inside the intranet.
Is it possible to use dataType text to do so?
Do I need to make any changes on the server side?
ajax call:
url = "https://firebird"+path+"/tools.php?";
jQuery.ajax({
type: 'get',
dataType: 'text',
url: url,
data: {database: database_name, what: 'download', files: files, t: Math.random() },
success: function(data, textStatus){
document.getElementById("downloading").innerHTML+=data;
}
});
Update 1
My little web application restores databases so I can do my testing on them. Now I want to enhance it so I can connect to our customers and download a particular backup. Our customers allow only the firebird server to connect to their networks. But I have my own server dedicated to testing. So every time I want to download a database I need to connect to firebird. The source of my web application and the folder with all the backups are mounted into the same location on both servers, firebird and testing. Right now my solution (for downloading) works, but only from firebird. I basically work only on the testing server, though.
Update 2
I make two AJAX calls. One is a pure jQuery call (I guess I can apply any solution to this one) and the other is an AJAX call from jsTree. I created a new question for that one. It seems to me that I have to go for @zzzz's option b).
To do cross-domain requests, your options are fairly limited. As @Mrchief mentioned, you could use a server-side proxy or JSONP.
Another option is Cross-Origin Resource Sharing (CORS), a W3C working draft. Quoting from this blog post:
The basic idea behind CORS is to use custom HTTP headers to allow both
the browser and the server to know enough about each other to
determine if the request or response should succeed or fail.
For a simple request, one that uses either GET or POST with no custom
headers and whose body is text/plain, the request is sent with an
extra header called Origin. The Origin header contains the origin
(protocol, domain name, and port) of the requesting page so that the
server can easily determine whether or not it should serve a response.
You can find some live examples on this site.
You will need to make changes on the server side to accept CORS requests. Since you have control over the server, this shouldn't be a problem. Another downside of CORS is that it might not be compatible with older browsers. So, if some of your essential audience uses incompatible browsers, the server-side proxy may actually be a better option for you.
I just want to offer an alternative.
I am not too sure about your network setup, but if you have access to the DNS, maybe it would be easiest to just give your servers some arbitrary subdomains of the same domain. Something like www.foo.com for the web front end and firebird.private.foo.com for the firebird server. This way it becomes cross-subdomain instead of cross-domain. Then somewhere in your JavaScript on both pages:
document.domain = "foo.com";
This gentleman achieved this solution here.
You have the following options:
a) You use jsonp as your dataType, but this involves making changes on the server side to pass the data back as JSON and not as text. This change might be as simple as:
{
  "text": <your current text json encoded>
}
and on your JS side you use this as response.text (a client-side sketch follows option b). Having said that, if you are getting the text for your file from some other domain, I am not sure how easy it is for you to change that code.
b) The other option is to write a handler/endpoint on your server, i.e. within your domain, that makes an HTTP request to this third domain, gets the file, and sends it back to your client. Effectively your client now talks only to your domain, and you have control over everything. As most of your questions are based on Ruby, here is an example:
req = Net::HTTP.get_response(URI.parse('http://www.domain.com/coupons.txt'))
@play = req.body
You can find more details about the same here.
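For option a), a minimal client-side sketch of the JSONP variant of the call from the question (assuming tools.php is changed to emit a JSONP response wrapping the JSON object shown above):

jQuery.ajax({
  type: 'get',
  dataType: 'jsonp', // jQuery injects a <script> tag, sidestepping the same-origin policy
  url: "https://firebird" + path + "/tools.php",
  data: { database: database_name, what: 'download', files: files },
  success: function(response){
    document.getElementById("downloading").innerHTML += response.text;
  }
});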
Hope this helps.
Another idea is to use your web server as a proxy. You will need to consider the security implications of this route.

Why does this jQuery.ajax not raise an error?

We had an interesting issue this morning - the details of the issue itself aren't relevant here, and I already fixed it, but I did run into something strange, to me, about jQuery.
The site I am building internally runs on https only, so Apache is set to redirect any inbound http request to its https equivalent. This redirect is working fine. But I had a bug in my software where I was trying to send the following ajax request:
jQuery.ajax({ type: "PUT",
url: "http://somewhere.com/cmdt/todo_lists/8457/toggle",
data: { deployment_id: 827},
dataType: "script"});
I understand that this would fail; I'm alright with jQuery not wanting to follow a redirect. But the actual behaviour is even weirder: I never see an XHR request go out at all! And there's no JavaScript error! It just fails, silently. If I change the URL to https, or to a relative path, it works fine. My question is: why wasn't it even TRYING to send out the request, and why didn't it raise an error?
The reason you're not getting a failure is that it's a cross-site request, so instead of using XMLHttpRequest, jQuery actually generates an HTML <script> tag, drops it into the DOM, and uses that mechanism to load the file.
This works reasonably well (considering it's a complete hack around wrong-headed browser "security" notions), but there's no way for jQuery to trap errors at that point, sadly. You will likely get a browser error if you have developer mode turned on, but that's it.
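As a rough illustration of the mechanism (not jQuery's literal source):

// For a cross-domain dataType: "script" request, jQuery injects a <script>
// element instead of using an XHR; the browser fetches and executes it.
// Note there is no PUT here: a script tag can only GET, and a load failure
// surfaces no usable error event, which is why the call fails silently.
var script = document.createElement("script");
script.src = "http://somewhere.com/cmdt/todo_lists/8457/toggle";
document.head.appendChild(script);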
If you run that from a URL that's https and try to open the equivalent http page, you run into cross-domain problems due to the different protocols. Have a look at the same-origin policy.
