Changing URL of XmlHttpRequest results in cookies not being sent - javascript

I've got an internal web server that presents an interface to let me view and change some data on that server. When I am in one location I can access the server directly, but from another location I have to create an ssh tunnel to the server. As a result, the URL I put in my browser changes depending on my location: for instance, http://myserver/blah versus http://localhost:8000/blah. It's the same server, just a different host name.
This is inconvenient because occasionally I will forget to save my changes in one location and when I go to the other location the server is suddenly not found. It's also inconvenient because I keep having to reload the page. I would like to just load the page once and have it work in either place. So, I thought I would add some code in my XmlHttpRequest handling to detect if the server is not found and re-issue the request using the alternate server address. The problem is, when this happens I find that my cookies are not sent to the server.
I've got cookies for both localhost and myserver. They are really the same set of values, since it's really the same server, but they are duplicated because the server is accessed under two different host names. If I manually change the host name in the URL I have no problem, but obviously this is what I am trying to avoid having to do.
I suspect that perhaps there is some security issue, but after re-reading how cookies work I can't figure out what specifically might be tripping this behavior, or how to fix it.
By the way, the problem is NOT that I am trying to do a cross-site request. I explicitly allow this on the server side by returning the header "Access-Control-Allow-Origin: *", and I have had no problem with this part of the request. With Firebug I can see that the problem is that when the request is re-issued to the new host name, no cookies are sent, even though cookies exist for that host name.
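Sketching the retry idea described above (the host names match the question; the wrapper function and callback wiring are assumptions):

    var hosts = ['http://myserver', 'http://localhost:8000'];

    function sendRequest(path, attempt, onSuccess) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', hosts[attempt] + path);
      // Needed so cookies ride along on the cross-origin fallback; note the
      // spec forbids combining credentials with "Access-Control-Allow-Origin: *".
      xhr.withCredentials = true;
      xhr.onload = function () { onSuccess(xhr.responseText); };
      xhr.onerror = function () {
        // Network-level failure: re-issue against the alternate host, if any.
        if (attempt + 1 < hosts.length) sendRequest(path, attempt + 1, onSuccess);
      };
      xhr.send();
    }

    sendRequest('/blah', 0, function (body) { console.log(body); });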

Related

Reading a cookie from a different domain

I'm developing a page/form for a campaign inside my company. However, the first step is to check if the person is logged in. This is easily checked against a cookie - CUSTOMER - that is set once they're logged in.
However:
1) I'm developing locally, not on the same domain, and, as a result, can't see that cookie
2) The final campaign may or may not end up residing on the actual domain. They may end up using a vanity URL or something.
For purposes of this, let's assume I do NOT have access to the main domain where the cookie was set.
How can I read that cookie from off the domain? Oh, and since the IT folks don't let us touch the back end (grumble), it has to be a JS solution.
Thanks!
You can't.
The only cookies you can read with client-side JavaScript are those belonging to the host of the HTML document in which the <script> is embedded.
By setting withCredentials you can support cookies in cross-origin requests, but they are handled transparently by the browser and JS has no direct access to them (the XHR spec goes so far as to explicitly ban getAllResponseHeaders from reading cookie-related headers). The only way for a cross-origin request to get access to cookies is for the server (which you say you don't have access to) to copy the data into the body or into a different response header.
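For illustration, a minimal credentialed request might look like this (the endpoint is hypothetical; the server must also reply with Access-Control-Allow-Credentials: true and a specific, non-wildcard Access-Control-Allow-Origin):

    var xhr = new XMLHttpRequest();
    // The browser attaches cookies for the target origin automatically;
    // the calling script never gets to read or write them.
    xhr.withCredentials = true;
    xhr.open('GET', 'https://api.example.com/data');
    xhr.onload = function () {
      console.log(xhr.responseText); // the body is readable, cookie headers are not
    };
    xhr.send();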
You can, if you can install server-side components.
You can use a dedicated domain to host your cookie and then share it using XSS techniques.
When dom1.foo.com logs in, you register a cookie on cookie.foo.com using an Ajax XSS call; then, when you go to dom2.foo.com, you query cookie.foo.com with your XSS API.
I've played with this some time ago:
https://github.com/quazardous/mudoco/blob/master/mudoco/README.txt
It's just some sort of POC.
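Roughly, the idea looks like this (the cookie.foo.com endpoints, parameter names, and callback format are all hypothetical):

    // On dom1.foo.com, after login: push the value to the shared cookie host.
    var customerValue = 'abc123'; // whatever value you want to share
    var beacon = new Image();
    beacon.src = 'http://cookie.foo.com/store?name=CUSTOMER&value=' +
                 encodeURIComponent(customerValue);

    // On dom2.foo.com: read it back via a script tag (JSONP), since a plain
    // XHR to another host would be blocked by the same-origin policy.
    function gotCookie(data) {
      console.log('CUSTOMER =', data.value);
    }
    var s = document.createElement('script');
    s.src = 'http://cookie.foo.com/read?name=CUSTOMER&callback=gotCookie';
    document.head.appendChild(s);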

Differentiate Between User Requests and AJAX/Resource Requests

I'm attempting to create an app with Node.js (using http.createServer()) which will be a single-page application with requests for data via XMLHttpRequest. To do this I need to be able to differentiate between a user navigating to my domain, AJAX requests, and requests generated by the browser for linked resources.
If the request is from the user, I always want to return the index.html page, which will handle requesting content; but if the request is browser-generated or AJAX, and is for CSS, JavaScript, or other linked files, I want to serve those files. Is there any way to detect this?
Looking at the request headers for the different file types, I saw that the referer header appeared when the request for content was generated by the page. I figured that was the solution I was looking for, but that header is also set when a user clicks a link to the page, making it useless.
The only other thing which seems to change is the accept header, which could sort of work but might not be a catch-all solution. User requests always seem to have text/html as the preferred return type regardless of which URL was entered. I could detect that, but I'm pretty sure AJAX requests for HTML files would also have that accept header, which would cause problems.
Is there anything I'm missing here (any headers or properties I can look for)?
Edit: I do not need the solution to protect files and I don't care about users bypassing it with their own requests. My intention is not to hide files or make them secure, but rather to keep any data that is requested within the scope of the app.
For example, if a user navigates to http://example.com/images/someimage.jpg they are instead shown the index.html file which can then show the image in a richer context and include all of the links and functionality to go with it.
TL;DR: I need to detect when someone is trying to access the app, so I can serve them the index page and have it fetch the content they want. I also need to detect when the browser has requested resources (JS, CSS, HTML, images, etc.) needed by the app, so I can return the actual resource rather than the index file.
In terms of the HTTP protocol there is NO difference between a user-generated query and a browser-generated query.
Every query is just... a query.
You can make a query from the command line or from a browser; you can click a link, send some ASCII text via telnet, or ask a proxy to make the query for you. The server's goal is never to identify how the query was initiated by the user.
Take, for example, a request served from a reverse proxy cache: that query will never reach your server (the response comes from the cache), and the first query made to build that cached response could have been made by a real user or by a browser.
In terms of security, trying to ensure that the user never requests data by himself cannot be done by detecting whether the query is a real human click (search Google for clickjacking if you want to be afraid). Every query that a browser can make can also be replayed by the user, every single one; you have no way to prevent that.
Some browser plugins even do pre-fetching: detecting links on the page and making the request before you do it yourself (if it's a GET query).
For AJAX, some libraries like jQuery will add an X-Requested-With: XMLHttpRequest header, and most frameworks use this to detect AJAX mode.
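As a sketch, checking that header inside the question's http.createServer setup (remember it is only present when the calling library chooses to set it):

    var http = require('http');

    http.createServer(function (req, res) {
      // jQuery and several other libraries set this header on their XHR calls;
      // hand-rolled XMLHttpRequest or fetch calls will not have it.
      var isAjax = req.headers['x-requested-with'] === 'XMLHttpRequest';
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(isAjax ? 'ajax request' : 'regular request');
    }).listen(8000);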
But it is more robust to depend on a location policy for that (like making your AJAX queries with a /format/ajax prefix), which could also be used in other ways (like /format/json, /format/html, or /format/csv).
Spending time on location-policy-based routing is certainly more useful.
But one thing can make a difference: POST queries are not idempotent, which means the browser cannot make a POST query without a real user interaction, because a POST query may alter the state of the session or of the server data (though JS can make POST queries; this is just the default behavior of browsers). The browser will never automatically prefetch or replay a POST query, so you could make a website where all user interactions are POST queries (via forms, or via some JS turning link clicks into POST AJAX queries instead). But I'm not sure that's your real goal.
Not technically an answer to the question, but I found a simple solution which does what I want: prefix all app-based requests with a subdomain, e.g. http://data.example.com/. It's then really simple to check the host header for that subdomain: if it's present, send the resource; otherwise, send the index page.
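A minimal sketch of that check (the directory layout and port are assumptions, and a real server would sanitize req.url):

    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    http.createServer(function (req, res) {
      // App-issued requests go to data.example.com; direct navigation does not.
      var host = req.headers.host || '';
      if (host.indexOf('data.') === 0) {
        // Serve the requested file from a hypothetical ./public directory.
        fs.createReadStream(path.join('public', req.url)).pipe(res);
      } else {
        // Anything else gets the index page, which fetches its own content.
        fs.createReadStream('index.html').pipe(res);
      }
    }).listen(8000);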

Chrome extension for blocking websites based on database blacklist

We have a database with millions of domain categorizations (storing it client-side is not an option) and we want to make a Chrome extension to blacklist sites based on how they are categorized in the MySQL database.
The server-side stuff is easy: we post the domain and return the category.
The tricky part is blocking requests based on the categorization. Here are a few potential implementations and why they won't (quite) work.
Idea 1:
Redirect all traffic using chrome.webRequest to mysite.com/script.php?url=www.theoriginalurl
This script checks the category in the database and either redirects them to theoriginalurl.com or denies the request, redirecting them to www.youGotBlocked...
Have the Chrome extension check the HTTP referer header to make sure that they came from mysite.com (unless the URL is mysite.com, in which case do nothing).
Problems:
It doesn't seem like we can set the referer header from PHP, so we have no way of knowing that they came from mysite.com. It seems like maybe we should be passing info via a cookie, but I haven't thought of an elegant solution involving cookies.
Idea 2:
Every time chrome.webRequest fires, make an AJAX POST request to mysite.com/categorizeURL.php with the URL to get the category. Block or allow based on the server's response.
Problems:
Either we make the request asynchronous, in which case we can't get the response in time (there is no way that we have found to delay the callback until the server responds -- more on that here). Or we make the request synchronous, and IT WORKS!!! Except for the fact that if they can't reach our server, their entire browser locks up and they essentially need to refresh the extension to be able to access the internet again.
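For concreteness, a sketch of that synchronous variant (the endpoint and response format are assumptions; manifest.json would need the webRequest, webRequestBlocking, and host permissions):

    chrome.webRequest.onBeforeRequest.addListener(
      function (details) {
        var xhr = new XMLHttpRequest();
        // Synchronous on purpose: a blocking listener must return immediately,
        // which is also why an unreachable server freezes the whole browser.
        xhr.open('POST', 'https://mysite.com/categorizeURL.php', false);
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.send('url=' + encodeURIComponent(details.url));
        return { cancel: xhr.responseText === 'blocked' };
      },
      { urls: ['<all_urls>'] },
      ['blocking']
    );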
Other ideas?
Does anyone have other ideas for creating a blacklist via a Chrome extension? I simply refuse to believe that it is not possible.

How to secure the source code of a game so it can be used only on allowed domains?

I would like to only allow my game to work on certain domains. The build version of the JavaScript will by default work everywhere, and is minified and uglified. What might I do in order to "break" the game if it is used outside the allowed domains?
I was thinking of something that reads the domain name and, based on that, breaks the game. But this is easy to trick: just change all the places where I request the domain name and substitute one of the allowed domains.
Another option would be to request little bits of data from my custom service. Imagine I'm on the allowed domain and I request a bit of data that varies with the timestamp I provide. The response would also depend on the allowed list of domains: if the source domain of the AJAX/POST request is allowed, a "right" bit is sent; if not, the response "breaks" the game. This would happen every once in a while, from within the game.
What do you think? Is it easily crackable?
In general, JavaScript (or any client-side language) is not the correct place to put security- or licensing-related code, as it can easily be circumvented by modifying the JavaScript. Minifying the JavaScript will make it harder and slower to modify, but will not prevent it.
If there is some server-side language involved, then you may be able to investigate a server-side licensing solution, but generally server-side scripts can also be modified or decompiled by anyone with access to the server.
Another option may be to host the bulk of the code on your own server; any server that wants to use your game would need to send yours a license key via a server-to-server request. That way the license key is kept private: only your server and the hosting server know it. Your server would then respond with a session token, which the client may use to get access to your game. As the license key is kept private, it is harder for third parties to intercept, and without it they won't be able to get a session token. But this only works if there is a server-side language involved; if it is all done in JavaScript then this won't be of much use.
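A rough sketch of that handshake in Node (every name here, the header, the endpoints, and the in-memory stores, is hypothetical):

    var http = require('http');
    var crypto = require('crypto');

    // Hypothetical license registry and issued-token store.
    var licenses = { 'LICENSE-KEY-123': 'customer-a' };
    var tokens = {};

    http.createServer(function (req, res) {
      if (req.url === '/token') {
        // Server-to-server call: the hosting server presents its license key
        // in a header that end users never see.
        var key = req.headers['x-license-key'];
        if (!licenses[key]) {
          res.writeHead(403);
          return res.end('invalid license');
        }
        var token = crypto.randomBytes(16).toString('hex');
        tokens[token] = Date.now();
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        return res.end(token);
      }
      if (req.url.indexOf('/game.js?token=') === 0) {
        // The client exchanges the token (handed to it by the hosting server)
        // for the actual game code.
        var token = req.url.split('token=')[1];
        res.writeHead(tokens[token] ? 200 : 403,
                      { 'Content-Type': 'application/javascript' });
        return res.end(tokens[token] ? '/* game code */' : '');
      }
      res.writeHead(404);
      res.end();
    }).listen(8000);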
Modify the server so that it returns different versions of the script depending on the subnet address of the client. This way, there is no client dependency on valid (or any) DNS, and the server completely controls the authorisation process.
The client then receives a version of the application that simply reports an error and terminates.
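A sketch of that approach in Node (the subnet list and file names are assumptions):

    var http = require('http');
    var fs = require('fs');

    // Hypothetical allow-list of client subnets (prefix match for brevity).
    var allowed = ['10.1.', '192.168.5.'];

    http.createServer(function (req, res) {
      var ip = req.socket.remoteAddress || '';
      var ok = allowed.some(function (prefix) {
        return ip.indexOf(prefix) === 0;
      });
      res.writeHead(200, { 'Content-Type': 'application/javascript' });
      // Allowed clients get the real game; everyone else gets a stub that
      // reports an error and terminates.
      fs.createReadStream(ok ? 'game.min.js' : 'denied.js').pipe(res);
    }).listen(8000);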

Cross Domain AJAX/Javascript - Artificially using a sessionid

I currently have a RESTful web service which recognises a client via its session.
I have a client which uses ajax/javascript to access the contents of the RESTful webservice. I allow this to happen by responding to the request with the headers: Access-Control-Allow-Origin, Access-Control-Allow-Credentials, Access-Control-Allow-Methods.
However, although the client can access the contents, each request is regarded as a different session, because cookies cannot be used across domains.
I don't want to modify my server code to cater specifically for this style of client; I would prefer a client-side workaround to give a facade of using a session.
Since I don't store anything in the session, but only use the jsessionid as the client identifier, I assumed I could artificially inject &jsessionid= into the URL to make the client seem, at least from the server side, to be correctly keeping track of the session.
This doesn't seem to work - can someone advise on how I can make my client act as if it is using the same sessionid?
...I assumed I could artificially inject &jsessionid= into the URL...
jsessionid isn't a query string parameter. You'd want to artificially add ;jsessionid=... as a path parameter, before the query string (prior to any ? in the URL), rather than &jsessionid=.... For example: http://example.com/app/resource;jsessionid=ABC123?foo=bar.
For background: I made a product called kitgui.com, which allows cross-domain communication and simulates on-page saving for content management, but actually talks cross-domain through an iframe to a secure server.
You don't have to modify your server code. You can use iframe + postMessage, assuming you don't need to support browsers below IE8; all the other modern browsers support it. There is also an iframe polling technique for older browsers. You don't need to expose your session ID in the query string over non-SSL either. You can talk to your iframe via JavaScript to get the logged-in state. The session info remains on the iframe's domain, where it should be.
This link can help you -> http://benalman.com/projects/jquery-postmessage-plugin/
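A rough sketch of that pattern (the domains, file names, and message format are all assumptions; addEventListener implies IE9+):

    // On the client page: embed a hidden iframe served from the service domain,
    // then ask it for the login state. The iframe shares the service's session
    // cookie, so it can answer on the client's behalf.
    var frame = document.getElementById('session-frame'); // src = https://service.example.com/bridge.html
    frame.contentWindow.postMessage('getLoginState', 'https://service.example.com');

    window.addEventListener('message', function (e) {
      if (e.origin !== 'https://service.example.com') return; // trust only the service
      console.log('logged in:', e.data === 'loggedIn:true');
    });

    // In bridge.html on the service domain:
    window.addEventListener('message', function (e) {
      if (e.origin !== 'https://client.example.org') return;
      var loggedIn = true; // really: check the session, e.g. via a same-origin XHR
      e.source.postMessage('loggedIn:' + loggedIn, e.origin);
    });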
