I've been trying to do some research on this topic for a while, and can even cite the following Stack Overflow threads:
Javascript Hijacking - When and How Much Should I Worry
JSON Security Best Practices
But my basic problem is this.
When I am building my web applications, I use tools like Fiddler, Chrome Developer Tools, Firebug, etc. I change things on the fly to test them. I can even use Fiddler to change the data that gets sent to the server.
What stops someone else from just opening up my webpage and doing this too? All of the jQuery validation in the world is useless if a user can just hit F12 and open up Chrome Developer tools, and change the data being sent over the wire, right?
I'm still relatively new to this field, and this has me very concerned as I see "open" protocols become more and more ubiquitous. I don't understand SSL yet (it's on my list of things to research next), so perhaps that is the answer and I just haven't dug deep enough. But the level of flexibility I have for manipulating my own pages seems extreme, which makes me worry about what someone malicious could do.
Your concerns are indeed justified. This is why you should always validate everything on the server. Client-side validation should only be used for UX.
JavaScript's security is, in a nutshell, based on a trusted server. If you always trust the code the server sends you, it should be safe. The same-origin policy prevents a script running on a third-party site (an ad network's page, say) from reading data from your domain.
If the server also sends you user generated content, and in particular user generated code, then you have a potential security problem. This is what XSS attacks focus on (running a malicious script in a trusted environment).
Client-side validation should focus on ease of use: make it easy to correct mistakes, or guide the user so that no mistakes are made in the first place. The server should always validate as well, and that validation should be stricter.
Validation should always happen server side; client-side validation is only valuable for making the experience more convenient for the user. You can never trust a user not to manipulate the data on their end (JavaScript is client side).
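To make that concrete, here is a minimal sketch of server-side validation in Python (Flask and the route/field names are assumptions of mine, not anything from your app); whatever jQuery checked in the browser has to be re-checked here, because the request body can be rewritten with Fiddler or the console:

```python
# Minimal server-side validation sketch (Flask assumed; names hypothetical).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/register', methods=['POST'])
def register():
    # Re-check everything, no matter what client-side validation said.
    email = (request.form.get('email') or '').strip()
    age = request.form.get('age', '')

    errors = []
    if '@' not in email:
        errors.append('invalid email')
    if not age.isdigit() or not 13 <= int(age) <= 120:
        errors.append('invalid age')

    if errors:
        return jsonify(errors=errors), 400  # reject forged/invalid input
    # ...only now is it safe to act on the data...
    return jsonify(status='ok')
```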
Next, if you want to secure your service so that only user1 can edit user1's profile, you'll need to sign your JSON requests with OAuth (or a similar protocol).
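OAuth itself is a bigger topic, but the rule underneath it can be sketched without it: derive the acting user from the server-side session, never from the request body. A hedged sketch (again Flask-style, with hypothetical names):

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = 'replace-me'  # signs the session cookie

@app.route('/profile/<username>', methods=['POST'])
def edit_profile(username):
    # Identity comes from the authenticated session, not from the JSON body,
    # so user1 cannot edit user2's profile by rewriting the request.
    if session.get('username') != username:
        abort(403)
    # ...apply the re-validated changes...
    return 'ok'
```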
Nothing can stop anybody from tampering with the data sent from the browser to your server, and that's exactly why you shouldn't trust it.
Always check data coming from the user for authenticity and validity.
With the same tools, you can also inspect and tamper with the data that big sites like Google and Microsoft send back, which should give you an idea of how universal this is.
You have to assume that the client is malicious; using SSL does not prevent this at all. All data validation and authorization checks need to be done server side.
JavaScript isn't going to be your only line of defense against hackers; in fact it shouldn't be used for security at all. Client-side code can be used to verify form input so that legitimate users get faster response times and the page feels nice to use. Anyone who is trying to hack your page isn't going to care whether your page works or not. No matter what, everything coming into your server should be verified and never assumed to be safe.
I have a SignalR chat site that's meant for a school project (it also uses C#). Theoretically it is for trusted users, but as everyone will attest: never trust your users. This was proven to me as soon as I sent the link to a couple of my friends and they immediately tried to break it, ha ha.
I've sanitized all inputs properly now, but one thing they were still able to do was use the browser console to manually call the functions needed to send messages, etc.
Example: $.connection.chatHub.server.sendMessageToAll('FakeUser','FakeMsg',0);
I would like to prevent these types of actions. I recall that a while back Facebook actually disabled the console window for "security" purposes. I even found several resources detailing how this was done, and later attempts to further prevent console use once Chrome had fixed the original trick.
However, none of these options work anymore and because browsers are constantly in flux, I'd rather not attempt to block at this level.
I was wondering if anyone on Stack knows of a better way to prevent these types of attacks. Is there a good way to check where the call is coming from? Does SignalR have a good method to prevent this? Ideas and discussion would surely be welcome.
Trying to lock down the client like that might work reasonably well to prevent non-technical users from messing with your app, but it will do next to nothing against a knowledgeable and resourceful opponent. The circumstances under which such security measures make sense are rather limited, and certainly do not include any application that is accessible to everyone from the internet.
The only safe approach is well-known and very simple: the server does not trust the client for anything. It doesn't then matter what the client attempts to do as the server will refuse all actions it does not deem valid.
In your example, the server would assign a randomized opaque connection id to each session. The client would only be able to convince the server to do anything if they sent a valid id as part of their request; then, the server would not need to trust the client for a username because it would already know what connection each user has logged in from and could produce the username when given the id.
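The real wiring would live in your C# hub, but the trust model fits in a few lines of Python (all names here are illustrative, not SignalR API):

```python
import secrets

# Server-side state only: opaque connection id -> authenticated username.
connections = {}

def on_login(username):
    # Issued once, after real authentication; long enough to be unguessable.
    conn_id = secrets.token_hex(16)
    connections[conn_id] = username
    return conn_id  # the only thing the client ever holds

def send_message_to_all(conn_id, message):
    username = connections.get(conn_id)
    if username is None:
        raise PermissionError('unknown connection')
    # The display name comes from the server's own records, so a console
    # call like sendMessageToAll('FakeUser', ...) cannot spoof it.
    broadcast(username, message)

def broadcast(username, message):
    print('%s: %s' % (username, message))  # stand-in for the real fan-out
```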
I'm trying to automate the process of getting my current student records from my college. In a browser, the process involves typing in my college's URL, then clicking the login link, which brings me to an https:// page where I type in my username and password. From there it's one or two more links and reading some text on the page. My question is: how might I go about doing this in an automated way, so that my records are displayed on the command line? The https:// in the URL signifies, I think, that it uses SSL; are there libraries that can handle this? Also, I'm pretty sure the 'submit' button on the login page uses JavaScript; again, are there libraries to handle this?
I'm sure I missed something or other in my question's description, so please ask if you do not understand my question or need more information.
PS. I am not well versed in Internet protocols, and I am also new to Python. In fact, I started studying it for this project. But I am fluent in C and pretty good with C++.
Thanks in advance.
Michael,
You don't have to mimic all the actions you do in the browser.
First: there is no problem with HTTPS/SSL as long as you don't have to verify certificates (it seems that you don't); urllib2.urlopen will handle them.
Second: when you click 'Submit', the browser sends a request to the server with your username, password and probably some other data. That request is most likely a POST. In response, the server will probably send you a cookie with a session id. So all you need to do is investigate the exact format of the request to the server (e.g. using FireBug), and get the cookie from the server's response.
Third: just use that cookie to navigate the pages on the site. This might help.
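A sketch of those three steps with urllib2 and cookielib (Python 2, to match urllib2.urlopen above; the URL and form field names are hypothetical, so you have to read the real ones out of the login form with FireBug):

```python
import urllib
import urllib2
import cookielib

# A cookie jar, so the session id the server sets survives across requests.
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# Second: POST the login form (field names are hypothetical).
form = urllib.urlencode({'username': 'me', 'password': 'secret'})
opener.open('https://college.example.edu/login', form)

# Third: the jar now holds the session cookie, so just fetch the page.
print opener.open('https://college.example.edu/records').read()
```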
P.S. As you see, there are a lot of 'probably's in this answer: the exact authentication process may differ from the one described above, and you'd have to investigate it yourself.
Roman's answer is good advice: you generally don't need to act like a real user when your script can call HTTP methods directly.
However, if you are not comfortable with reverse engineering the HTTP operations that the site requires, then an alternative would be to use Selenium, a tool for simulating interaction with web pages. Selenium is usually used by web application developers to test their applications, but it can also be used as an automatable client for an existing website.
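A minimal sketch with Selenium's Python bindings (the URL and element names are hypothetical, and the calls are from the classic WebDriver API):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://college.example.edu/login')  # hypothetical URL

# Fill in and submit the login form exactly as a user would, so the
# page's JavaScript (including the submit button's handler) runs as-is.
driver.find_element_by_name('username').send_keys('me')
driver.find_element_by_name('password').send_keys('secret')
driver.find_element_by_name('submit').click()

print(driver.page_source)  # scrape the records out of the resulting page
driver.quit()
```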
We have a form that absolutely requires JavaScript to function, and validation is done client side. Validation is also performed server side, but it would be an extreme amount of work to get it to show errors when server side verification fails.
Since there is no chance of the user not having JavaScript, is it OK to just fail with an HTTP error? The only way they would fail server-side verification is if they are either a malicious user or unable to use JavaScript, in which case they wouldn't be using the form anyway.
Thanks
I say this is fine, except for a certain class of errors.
Some validation errors are not a result of malice but simply cannot be checked and discovered at any time other than when the form is actually processed. This can be because of a scarce resource that needs to be reserved but cannot be ("this username is already in use"), or because of some server-side recoverable error ("The upstream credit card processor is not responding. Please try again later"). For these kinds of errors, you absolutely should have some kind of error message communicated back to the user. It's hard to envision a design where sending these kinds of errors back would not be possible. At the very least you can do this (a server-side sketch follows the list):
Send your HTTP error response (4xx or 5xx depending on the nature of the error)
In the body of your response, package an error message in some data structure your javascript can understand easily. (JSON or XML, or even text/plain! Remember to set the mime type.)
Have the error-handler for the javascript request insert the text of the error at a visible place in your form (e.g. at the top or near the submit button).
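The server half of that might look like this (a Flask-flavored sketch of mine; the two checks are stand-ins for a real database constraint and a real upstream health check):

```python
from flask import Flask, jsonify

app = Flask(__name__)

def username_taken(name):
    return False  # stand-in for a real database uniqueness check

def card_processor_up():
    return True   # stand-in for a real upstream health check

@app.route('/signup', methods=['POST'])
def signup():
    if username_taken('requested-name'):
        # 4xx: the user can fix this and resubmit.
        return jsonify(error='This username is already in use'), 409
    if not card_processor_up():
        # 5xx: server-side and recoverable, so invite a retry.
        return jsonify(error='The upstream credit card processor is not '
                             'responding. Please try again later.'), 503
    # jsonify also sets the application/json mime type, as the second
    # point above asks.
    return jsonify(status='ok')
```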
The most important thing, however, is to have server-side validation and not trust the client. You are already doing this, so if you want to do anything further it is a matter of polish and making for the best possible user experience. Sometimes that requires a disproportionate amount of effort and that's ok.
I personally think it is fine.
Especially if a previous step in using the site would also not work without JavaScript, so that the user couldn't even have reached the page in question without it, then going to the huge expense of making this form work without JavaScript is wasted effort. For example, in .NET WebForms, logging in requires JavaScript, so any page inside the secured area of the site can, to my mind, assume JavaScript is available.
I'm curious what other people think, though.
If the only expected use case for failing the http request is when someone has bypassed the browser, then just failing the http request seems perfectly fine. You aren't impacting any expected user scenario, yet you are still protecting the server-side integrity with server-side validation. There's no point in doing more work than that. Seems fine to me.
What would make it worth doing more work to show error UI from the server is if there were actual legitimate user scenarios where bad data could get through to the server. But since you think it is unlikely or impossible for a legitimate scenario to end up there, there's no reason to do that extra work.
It is OK so long as you are certain that the client-side validation and server-side validation are equivalent. On that note, I find it a hassle to keep the client-side validation code and server-side validation code in sync (especially when they are written in different languages, which is always the case unless you are using node.js or GWT). If anyone has a solution to that, it would be great.
However, if there are certain validations that can only be performed server side (a database uniqueness constraint, for instance), then it is important to show the user that their client-side actions have failed. That depends on the application itself, though.
In a previous question I asked about weaknesses in my own security layer concept. It relies on JavaScript cryptography functions, and thanks to the answers the striking point is now clear: everything done in JavaScript can be manipulated and cannot be trusted.
The problem now is that I still need to use those functions, even if I rely on SSL for transmission.
So I want to ask: is there a way for the server to check that the site is running the "correct" JavaScript it served?
Anything that comes to my mind (hashing, etc.) can obviously be faked, and the server doesn't seem to have any way of knowing what's going on at the client's side after it has sent it some data, except through HTTP headers (cookie exchange and the like).
It is completely impossible for the server to verify this.
All interactions between the Javascript and the server come directly from the Javascript.
Therefore, malicious Javascript can do anything your benign Javascript can do.
By using SSL, you can make it difficult or impossible for malicious Javascript to enter your page in the first place (as long as you trust the browser and its addons), but once it gets a foothold in your page, you're hosed.
Basically, if the attacker has physical (or scriptual) access to the browser, you can no longer trust anything.
This problem doesn't really have anything to do with javascript. It's simply not possible for any server application (web or otherwise) to ensure that processing on a client machine was performed by known/trusted code. The use of javascript in web applications makes tampering relatively trivial, but you would have exactly the same problem if you were distributing compiled code.
Everything a server receives from a client is data, and there is no way to ensure that it is your expected client code that is sending that data. Any part of the data that you might use to identify your expected client can be created just as easily by a substitute client.
If your concern is substitution of the client code via a man-in-the-middle attack, loading the JavaScript over HTTPS is pretty much your best bet. However, there is nothing that will protect you against direct substitution of the client code on the client machine itself.
Never assume that clients are using the client software you wrote. It's an impossible problem, and any solution you devise will only slow attacks, not prevent them.
You may be able to authenticate users, but you will never be able to reliably authenticate what software they are using. A corollary to this is to never trust data that clients provide. Some attacks, for example Cross-Site Request Forgery (CSRF), require us not to trust even that the authenticated user meant to provide the data.
We have a heavily Ajax-dependent application. What are good ways of making sure that requests to server-side scripts are not coming from standalone programs, but from an actual user sitting at a browser?
There aren't any really.
Any request sent through a browser can be faked by a standalone program.
At the end of the day, does it really matter? If you're worried, then make sure requests are authenticated and authorised, and that your authentication process is good (remember that Ajax sends browser cookies, so your "normal" authentication will work just fine). Just remember that, of course, standalone programs can authenticate too.
What are good ways of making sure that requests to server-side scripts are not coming from standalone programs, but from an actual user sitting at a browser?
There are no ways. A browser is indistinguishable from a standalone program; a browser can be automated.
You can't trust any input from the client side. If you are relying on client-side co-operation for any security purpose, you're doomed.
There isn't a way to automatically block "non-browser" requests hitting your server-side scripts, but there are ways to identify which requests were triggered by your application and which weren't.
This is usually done using something called "crumbs". The basic idea is that the page making the AJAX request should generate (server side) a unique token, typically a hash of a Unix timestamp plus a salt and a secret. This token and the timestamp are passed as parameters to the AJAX request. The AJAX handler script first checks the token (and the validity of the timestamp, e.g. that it falls within 5 minutes of the current server time). If the token checks out, you can proceed to fulfill the request. This token generation and checking can often be coded up as an Apache module so that it is triggered automatically and kept separate from the application logic.
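A minimal sketch of such a crumb in Python; note it uses an HMAC rather than a bare hash of timestamp + salt + secret, which is a deliberate strengthening of the same idea (the constant and names are illustrative):

```python
import hashlib
import hmac
import time

SECRET = b'server-only secret'  # never embedded in the page

def make_crumb(timestamp):
    # The page embeds (timestamp, crumb) and passes both with the AJAX call.
    return hmac.new(SECRET, str(timestamp).encode(),
                    hashlib.sha256).hexdigest()

def check_crumb(crumb, timestamp, max_age=300):
    # Reject stale tokens (the 5-minute window), then check the signature.
    if abs(time.time() - timestamp) > max_age:
        return False
    return hmac.compare_digest(crumb, make_crumb(timestamp))
```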
Fraudulent scripts won't be able to generate valid tokens (unless they figure out your algorithm) and so you can safely ignore them.
Keep in mind that storing a token in the session is another way, but that won't buy you any more security than your site's authentication system already provides.
I'm not sure what you are worried about. From where I sit, I can see three things your question could relate to:
First, you may want to prevent unauthorized users from making a valid request. This is resolved by using a browser cookie to store a session ID. The session ID needs to be tied to the user, be regenerated every time the user goes through the login process, and have an inactivity timeout. Any request coming in without a valid session ID you simply reject.
Second, you may want to prevent a third party from running replay attacks against your site (i.e. sniffing an innocent user's traffic and then sending the same calls over). The easy solution is to go over HTTPS. The SSL layer prevents somebody from replaying any part of the traffic. This comes at a cost on the server side, so you want to be sure that you really cannot accept that risk.
Third, you may want to prevent somebody from using your API (that's what AJAX calls are, in the end) to implement his own client to your site. For this there is very little you can do. You can always look for the appropriate User-Agent, but that's easy to fake and will probably be the first thing somebody trying to use your API thinks of. You can also implement some statistics, for example looking at the average AJAX requests per minute on a per-user basis and seeing whether some users are way above your average. That's hard to implement, and it's only useful if you are trying to prevent automated clients that react faster than a human can.
Is Safari a web browser for you?
If it is, note that the same engine ships in many applications, for example those using Qt's QtWebKit libraries. So I would say there is no way to recognize it.
A user can forge any request they want, faking headers like User-Agent however they like.
One question: why would you want to do what you ask for? What difference does it make to you whether they request from a browser or from anything else?
I can't think of a single reason here you could call "security".
If you still want to do this, for whatever reason, think about making your own application with a browser embedded. It could somehow authenticate itself to the server in every request; then you'd only send valid responses to your application's browser.
User would still be able to reverse engineer the application though.
Interesting question.
What about browsers embedded in applications? Would you mind those?
You can probably think of a way of "proving" that a request comes from a browser, but it will ultimately be heuristic. The line between browser and application is blurry (e.g. embedded browser) and you'd always run the risk of rejecting users from unexpected browsers (or unexpected versions thereof).
As has been mentioned before, there is no way of accomplishing this. But there is one thing worth noting, useful for preventing CSRF attacks that target the specific AJAX functionality: setting a custom header with the help of the AJAX object, and verifying that header on the server side.
And if you set a random one-time token as the value of that header, you can prevent automated attacks.
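A hedged Flask-style sketch of both ideas together (the header check and the token storage are illustrative choices of mine):

```python
import secrets
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = 'replace-me'

@app.route('/page')
def page():
    # Hand the page a one-time token to send back with its AJAX call.
    session['ajax_token'] = secrets.token_hex(16)
    return 'render the page, embedding session["ajax_token"]'

@app.route('/ajax', methods=['POST'])
def ajax():
    # 1. Custom header: a plain cross-site form post cannot set this.
    if request.headers.get('X-Requested-With') != 'XMLHttpRequest':
        abort(403)
    # 2. One-time token: popped on use, so a replayed request fails.
    if request.form.get('token') != session.pop('ajax_token', None):
        abort(403)
    return jsonify(status='ok')
```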