I have found the practice of consuming webservices on the client quite uncommon, and I have a query in this regard. Is it bad practice to consume webservices on the client end? Does exposing the webservice put your application at risk in any way? What is the main motive behind calling webservices on the server rather than the client? Logic dictates that the number of calls to the server would become much smaller and the whole process would move a lot faster.
Thanks
Shouvik
PS:
I am not sure whether this is contrary to what is widely practiced, and if so I may be completely wrong in my notion. Since I could not find any real article on the subject by googling, I am asking this question.
It completely depends on the nature of the webservice and what you do with it. If the webservice is open and doesn't require authentication or certificate validation, then you can obviously call it from the client side.
If the web service exposes some critical information which you do not want to reveal to the end user, it is common practice to call it on the server.
If you want to apply business logic to the data returned by the webservice and don't want to expose that logic to the outside world, you can also do that on the server.
So I would say it completely depends on the type of web service and what you are doing with it.
For example: if it's an open weather webservice with no authentication etc., I don't see any value in calling it from the server, except to increase the load on your server.
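For instance, a minimal sketch of that open-weather case, called straight from the browser with jQuery; the URL and the response field are placeholders made up for illustration, not a real provider's API:

```
// Hypothetical open weather service called directly from the browser.
// The URL and the "temperature" field are assumptions for illustration.
$.getJSON('https://weather.example.com/api/current?city=London', function (data) {
    // Nothing secret is involved, so there is no harm in the user seeing
    // this request in the browser's network tab.
    $('#temperature').text(data.temperature + ' °C');
});
```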
Go over this Sun link.
It totally depends on the type of web service you want to use.
This might help you in further development.
We are creating an online service split like this:
- an API, of course
- full JS/AJAX client, no MVC, it is pure JS
We are experienced developers and we know that we can't secure the JS client code. However, we are trying to figure out a way to prevent third parties from creating their own client by analyzing our JS API calls, and in this way restrict access to our own client only.
Thanks in advance!
We are experienced developers and we do know that we can't secure the JS client code, however, we are trying to figure a way to prevent 3rd parties from creating their own client by analyzing our JS API calls and this way restrict access to our own client only.
That is a contradiction in terms. Since client-side ECMAScript code can never be hidden, it will always be possible for any somewhat experienced developer to analyse your code, even if it is heavily obfuscated, minified and uglified.
Use server-side authentication, by password. It's the only secure way. You simply cannot prevent somebody from cloning or copying your script.
I don't think you can. Perhaps generate a key or something to authorize requests.
For you and anyone with a similar question, take heed; it is impossible. If you send a user working code that will communicate with your API, there is nothing you can do to stop them modifying or re-writing that code. The only area you can keep secure is the back end.
Oh, this is the wrong question to ask.
The question you need to ask is "why do I care if someone accesses my server without my client?"
You obviously have a reason. I can think of one reason only - your server trusts the client to behave nicely. Don't do that. Make sure the server can handle any kind of zany client request. It doesn't have to handle it nicely (throwing a 500 Server Error is OK) - as long as rogue clients can't mess with your data or kill your server entirely.
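As a rough illustration of "make the server handle anything", here is a minimal sketch assuming a Node.js/Express backend; the route, field names and limits are my own inventions, not something from the question:

```
// Sketch: never trust the client, validate everything server-side.
// Express, the route and the limits are assumptions for illustration.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/comments', (req, res) => {
    const text = req.body && req.body.text;
    // Re-check what "our" client should already have checked.
    if (typeof text !== 'string' || text.length === 0 || text.length > 2000) {
        // Rejecting bluntly is fine, as long as rogue input can't corrupt data.
        return res.status(400).json({ error: 'invalid comment' });
    }
    // ...store the comment via parameterized queries / an ORM...
    res.status(201).json({ ok: true });
});

app.listen(3000);
```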
You could try to obfuscate your JavaScript code to make it harder to read:
a link to an obfuscator
You can find others as well.
If you have authentication, you can pass a session ID to your API to keep the user logged in, so a user who is not authenticated won't be able to get data from your API.
This is probably a generic security question, but I thought I'd ask in the realm of what I'm developing.
The scenario is: a web service (WCF Web API) that uses an API key to validate requests and tell me who the user is, and a mix of jQuery and application code on the front end.
On the one hand, the traffic can go over HTTPS so it cannot be inspected in transit, but if I use the same key per user (say a GUID) in both places, there's a chance it could be taken and someone could impersonate the user.
If I implement something akin to OAuth, then a user key and a per-app key are generated, and that could work - but for the jQuery side I would still need the app's API key in the JavaScript.
This would only be a problem if someone was on the actual computer and did a view-source.
What should I do?
MD5-hash or encrypt the key somehow?
Put the key in a session variable, then retrieve it when making the AJAX calls?
Get over it, it's not that big a deal/problem.
I'm sure it's probably a common problem - so any pointers would be welcome.
To make this clearer - this is my own API that I have written and am querying against, not Google's, etc. So I can do per-session tokens and the like; I'm just trying to work out the best way to secure the client-side tokens/keys I would use.
I'm being a bit overly cautious here, but I'm just using this to learn.
(I suggest tagging this post "security".)
First, you should be clear about what you're protecting against. Can you trust the client at all? A crafty user could stick a Greasemonkey script on your page and call exactly the code that your UI calls to send requests. Hiding everything in a JavaScript closure only means the attacker needs a debugger; it doesn't make an attack impossible. Firebug can trace HTTPS requests. Also consider a compromised client: is there a keylogger installed? Is the entire system secretly running virtualized, so that an attacker can inspect any part of memory at their leisure? Security when you're as exposed as a web app is really tricky.
Nonetheless, here are a few things for you to consider:
Consider not actually using keys but rather HMAC hashes of, e.g., a token you give immediately upon authentication (see the sketch after this list).
DOM storage can be a bit harder to poke at than cookies.
Have a look at Google's implementation of OAuth 2 for an example security model. Basically you use tokens that are only valid for a limited time (and perhaps for a single IP address). That way even if the token is intercepted or cloned, it's only valid for a short length of time. Of course you need to be careful about what you do when the token runs out; could an attacker just do the same thing your code does and get a new valid token?
Don't neglect server-side security: even if your client should have checked before submitting the request, check again on the server if the user actually has permission to do what they're asking. In fact, this advice may obviate most of the above.
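Regarding the HMAC point above, here is a minimal sketch of what issuing and checking such a token might look like, assuming a Node.js backend; the secret handling, payload layout and 15-minute lifetime are my own assumptions:

```
// Sketch of an HMAC-signed, time-limited token using Node's crypto module.
// Secret, payload layout and expiry are assumptions for illustration.
const crypto = require('crypto');
const SECRET = process.env.TOKEN_SECRET; // stays on the server, never in JS

function issueToken(userId) {
    const payload = userId + ':' + (Date.now() + 15 * 60 * 1000); // 15 min expiry
    const sig = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
    return payload + ':' + sig;
}

function verifyToken(token) {
    const parts = String(token).split(':');
    if (parts.length !== 3) return null;
    const payload = parts[0] + ':' + parts[1];
    const expected = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
    if (parts[2].length !== expected.length) return null;
    const ok = crypto.timingSafeEqual(Buffer.from(parts[2]), Buffer.from(expected));
    return ok && Date.now() < Number(parts[1]) ? parts[0] : null; // userId or null
}
```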
It depends on how the API key is used. API keys like those provided by Google are tied to the URL of the site originating the request; if you try to use the key on a site with a different URL, the service throws an error, thus removing the need to protect the key on the client side.
Some basic APIs, however, are tied to a client and can be used across multiple domains, so in this case I have previously gone with the practice of wrapping the API in server-side code, placing some restrictions on how the client can communicate with the local service, and protecting that service.
My overall recommendation, however, would be to apply restrictions on the Web API around how keys can be used, which removes the complication and the necessity of trying to protect them on the client.
How about using jQuery to call server-side code that handles communication with the API? If you are using MVC, you can call a controller action that contains the code and API key to hit your service and returns a partial view (or even JSON) to your UX. If you are using Web Forms, you could create an .aspx page that does the API communication in the code-behind and then writes content to the response stream for your UX to consume. Then your UX code just contains some $.post() or $.load() calls to your server-side code, and both your API key and endpoint are protected.
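On the browser side that could look roughly like the sketch below; the /Weather/Current URL is an invented proxy action, not something from the question:

```
// Sketch of the UX side: the browser only ever talks to our own server,
// which holds the API key. "/Weather/Current" is an invented proxy action.
$.post('/Weather/Current', { city: 'London' }, function (result) {
    // The partial view / JSON came from our controller, which called the
    // external API on the server using the secret key.
    $('#weather-widget').html(result);
});
```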
Generally in cases like this, though, you proxy requests through the server using AJAX, and the server verifies that the browser making the requests is authorized to do so. If you want to call the service directly from JavaScript, then you need some kind of token system, like JSON Web Tokens (JWT), and you'll have to work out cross-domain issues if the service is located somewhere other than the current domain.
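If you go the JWT route, a bare-bones sketch of issuing and checking tokens, assuming a Node.js server and the jsonwebtoken npm package (the secret, claims and expiry are placeholders):

```
// Minimal JWT sketch, assuming the "jsonwebtoken" npm package.
// Secret, claims and expiry are placeholders, not recommendations.
const jwt = require('jsonwebtoken');
const SECRET = process.env.JWT_SECRET;

// After the user authenticates, hand the browser a short-lived token.
function issueToken(userId) {
    return jwt.sign({ sub: userId }, SECRET, { expiresIn: '15m' });
}

// On each API call, verify the token from the Authorization header.
function verifyRequest(req) {
    const header = req.headers['authorization'] || '';
    const token = header.replace(/^Bearer /, '');
    try {
        return jwt.verify(token, SECRET); // decoded claims if valid
    } catch (err) {
        return null; // missing, expired or tampered with
    }
}
```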
see http://blogs.msdn.com/b/rjacobs/archive/2010/06/14/how-to-do-api-key-verification-for-rest-services-in-net-4.aspx for more information
(How to do API Key Verification for REST Services in .NET 4)
Does anybody know of a way of checking on the API side whether an XMLHttpRequest has been made from my own web application (i.e. from the JS I have written) or from a third-party application?
The problem, to me, seems to be that because the JS runs on the client and is thus accessible to anyone, I have no way of secretly communicating to the API server who I am. This matters because otherwise I cannot prioritize requests from my own application over third-party clients in case of high usage.
I could obviously send some non-documented parameters but these can be spoofed.
Anybody with some ideas?
I would have your web server application generate a token that it passes to your clients, either in JavaScript or in a hidden field, which they in turn use to call your API. Requests with valid tokens get priority; those with missing or invalid tokens don't. The web server application can create and register the token in your system in a way that limits its usefulness to others trying to reuse it (e.g., time-limited).
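A rough sketch of the prioritisation side, assuming a Node.js/Express API; the header name, the in-memory token registry and the crude load check are all assumptions for illustration:

```
// Sketch: give priority to requests carrying a token our own pages were issued.
// Express, the header name and the crude "busy" check are assumptions.
const express = require('express');
const app = express();

const issuedTokens = new Set(); // filled in when the web app serves a page

app.use('/api', (req, res, next) => {
    const token = req.get('X-Client-Token');
    req.priority = issuedTokens.has(token) ? 'high' : 'low';

    // Under heavy load, shed low-priority (third-party) traffic first.
    const busy = process.memoryUsage().heapUsed > 512 * 1024 * 1024; // very crude
    if (req.priority === 'low' && busy) {
        return res.status(429).json({ error: 'server busy, try again later' });
    }
    next();
});

app.get('/api/data', (req, res) => {
    res.json({ priority: req.priority, data: '...' });
});

app.listen(3000);
```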
If you do approve of third party clients accessing your API, perhaps you could provide them with a slightly different, rate-limited interface and document it well (so that it would be easier to use and thus actually be used by third-party clients).
One way to do this would be to have two different API URLs, for example:
/api?client=ThirdPartyAppName&... for third-party apps (you would encourage use of this URL)
/api?token=<number generated from hidden fields from the HTML page using obfuscated code>&... for your own JS
Note that as you mention, it is not possible to put a complete stop to reverse engineering of your own code. Although it can take longer, even compiled, binary code written in such languages as C++ can be reverse engineered, and that threatens any approach relying on secrecy.
A couple of ideas come to mind. I understand that secrets never last, so I agree that's not a good option.
You could run another instance on a different unadvertised port
You could do it over SSL and use certs to identify the client
A simple but less secure way would be to use cookies
You could go by IP address, but that could be an administrative nightmare
In a previous question I asked about weaknesses in my own security-layer concept. It relies on JavaScript cryptography functions, and thanks to the answers it is now clear that anything done in JavaScript can be manipulated and cannot be trusted.
The problem now is that I still need to use those functions, even if I rely on SSL for transmission.
So I want to ask: is there a way for the server to check that the site is using the "correct" JavaScript it served?
Anything that comes to my mind (like hashing etc.) can obviously be faked, and the server doesn't seem to have any way of knowing what's going on on the client side after it has sent it some data, except via HTTP headers (cookie exchange and the like).
It is completely impossible for the server to verify this.
All interactions between the Javascript and the server come directly from the Javascript.
Therefore, malicious Javascript can do anything your benign Javascript can do.
By using SSL, you can make it difficult or impossible for malicious Javascript to enter your page in the first place (as long as you trust the browser and its addons), but once it gets a foothold in your page, you're hosed.
Basically, if the attacker has physical (or scripted) access to the browser, you can no longer trust anything.
This problem doesn't really have anything to do with javascript. It's simply not possible for any server application (web or otherwise) to ensure that processing on a client machine was performed by known/trusted code. The use of javascript in web applications makes tampering relatively trivial, but you would have exactly the same problem if you were distributing compiled code.
Everything a server receives from a client is data, and there is no way to ensure that it is your expected client code that is sending that data. Any part of the data that you might use to identify your expected client can be created just as easily by a substitute client.
If your concern is substitution of the client code via a man-in-the-middle attack, loading the JavaScript over HTTPS is pretty much your best bet. However, there is nothing that will protect you against direct substitution of the client code on the client machine itself.
Never assume that clients are using the client software you wrote. It's an impossible problem and any solutions you devise will only slow and not prevent attacks.
You may be able to authenticate users but you will never be able to reliably authenticate what software they are using. A corollary to this is to never trust data that clients provide. Some attacks, for example Cross-Site Request Forgery (CSRF), require us to not even trust that the authenticated user even meant to provide the data.
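For the CSRF example in particular, the usual mitigation is a per-session anti-forgery token; here is a bare-bones sketch assuming an Express server with sessions already configured (the field name and token scheme are illustrative only):

```
// Bare-bones anti-CSRF sketch, assuming Express with express-session set up.
// The "_csrf" field name and the token scheme are illustrative only.
const crypto = require('crypto');

function attachCsrfToken(req, res, next) {
    if (!req.session.csrfToken) {
        req.session.csrfToken = crypto.randomBytes(32).toString('hex');
    }
    // The server renders this token into its own forms/JS; a cross-site
    // attacker cannot read it, so forged requests won't carry it.
    res.locals.csrfToken = req.session.csrfToken;
    next();
}

function requireCsrfToken(req, res, next) {
    if (req.body && req.body._csrf === req.session.csrfToken) {
        return next();
    }
    res.status(403).json({ error: 'invalid CSRF token' });
}
```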
I currently have an idea: I want to save an image from a C++/OpenGL application on demand from a browser. So basically I would like to run the application itself on the server and have a simple communication layer like this:
JS -> tell application to do calculations (and maybe pass a string or some simple data)
application -> tell JS when finished and maybe send a link, text or something as simple as that.
I don't really have a lot of experience with web servers and as such don't know whether that is possible at all (it's just my naive thinking). And note: I am not talking about a WebGL application; I just want simple communication between a C++ server-side application and the user.
Any ideas on how to do that?
Thanks a lot!
Basically, no matter what language/framework you choose for your web server, you just need an interface that is callable from your browser JS, and you can do whatever you want on the server once it receives the call.
Most likely that will be some web service interface exposed from the server.
You just need to safeguard your server against DoS, since it sounds like it's a heavy process.
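A minimal sketch of that idea, assuming a Node.js/Express server; the executable name, paths and the one-job-at-a-time guard are assumptions for illustration:

```
// Sketch: one endpoint the browser JS can call; behind it, the server runs
// the heavy C++/OpenGL job. Executable name, paths and the crude DoS guard
// (one render at a time) are assumptions.
const express = require('express');
const { execFile } = require('child_process');
const app = express();

app.use(express.static('public')); // serves the generated image afterwards

let busy = false;
app.post('/render', (req, res) => {
    if (busy) return res.status(429).json({ error: 'renderer busy' });
    busy = true;
    execFile('./opengl-renderer', ['--out', 'public/result.png'], (err) => {
        busy = false;
        if (err) return res.status(500).json({ error: 'render failed' });
        res.json({ image: '/result.png' }); // tell the JS where to find it
    });
});

app.listen(3000);
```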
As far as I know, JavaScript (at least when embedded in HTML) is executed on your local machine and not on the server, so IMHO there is no way to directly start your server application using JS alone.
PHP, for example, is executed on the server side, so you could use e.g. PHP's system function to call your C++/OpenGL application on the server - initiated on demand through a web browser.
When the call is finished you could then directly present the image.
Well, you could always use the CGI interface to invoke your application and have it save that image somewhere accessible to the web server. Then have your JS load it via AJAX.
Or make a CGI app that talks to the application and then serves a small page with the picture in it.
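The browser side of that could be as small as the sketch below; the /cgi-bin/render path and the response field are assumptions:

```
// Browser side of the CGI idea: ask the server to run the app, then show
// the image it saved. "/cgi-bin/render" and "imageUrl" are assumptions.
$.getJSON('/cgi-bin/render', { scene: 'demo' }, function (response) {
    // The CGI script answers once the C++ application has finished and the
    // image is sitting somewhere the web server can serve it from.
    $('#output').html('<img src="' + response.imageUrl + '">');
});
```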
[EDIT]
Answering the comments:
CGI is not complex to learn; it is mostly a simple convention you can follow. I think it would give you the maximum flexibility. I don't know which PHP modules allow you to leave the cozy protection of the server application and interact with other things on your server.