Prevent unauthorized access to a webpage using jQuery, JavaScript

Say a link to a page is sent to a user via email. If the user is already logged into the site in their browser, clicking on the link takes them to the page. However, if they are not logged in, they should be asked to log in before they can access the page. Is there a way to achieve this using jQuery or JavaScript?

Yes. Build a back-end authentication system, using AJAX and whatever your server-side language is.
From there, develop a hypermedia-style content system and a modular, "widget"-based application-delivery model.
Within your hypermedia response to the login (passing along whatever relevant path information came from the e-mail link), either redirect to a new page (based on the linked response from the server), or download the requested widgets from the server (for whatever application you're displaying media in) and then stream in AJAX content (again, from a URL dictated by the server response).
This is about as close as you're going to get to security, in terms of delivering things to the client, in real-time, with authentication.
If you were to load the reports/gallery/game/whatever, and put a div over it, and ask for users to log in, then smart users can just kill the div.
If you include the content, or include the application components (JS files), or even include the links to the JS files which will request and display the content, then clever people are again going to disassemble that, in 20 seconds, flat.
The only way I can see to do this is to have a common request-point, to touch the server, and conditionally load your application, based on "next-steps" URLs, passed to the client, based on successful authorization and/or successfully completing whatever the previous step was, plus doing authentication of some form on each request (REST-based tokens+nonces, or otherwise)...
This would keep the content (and any application-structure which might have vulnerabilities) from the client, until you can guarantee that the client has been properly authorized, and the entire application is running inside of multiple enclosed/sandboxed modules, with no direct access to one another, and only instance-based access to a shared-library.
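As a rough sketch of that common request-point idea on the client side (the endpoint name and response shape here are invented; the server-side half is whatever your stack provides):

    // Client-side sketch: ask the server what we may do next; the server
    // answers based on the session's authorization state.
    $.ajax({
        url: "/api/next-step",                        // hypothetical endpoint
        data: { path: window.location.pathname },     // path info from the e-mail link
        dataType: "json"
    }).done(function (step) {
        if (step.action === "redirect") {
            window.location = step.url;               // not authorized: go log in
        } else if (step.action === "load") {
            $.getScript(step.widgetUrl, function () { // authorized: pull the widgets...
                $("#app").load(step.contentUrl);      // ...then stream in the content
            });
        }
    });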
Is it worth the work?
Who knows.
Are we talking about a NORAD nuclear-launch iPhone app, which must run in JavaScript?
Then no, engineering this whole thing for the next six months isn't overboard.
And again, all of this security falls over as soon as one person leaves themselves logged-in, and leaves their phone on the table (biometric authentication as well, then?).
Are we talking about a gallery or discount offers that you want to keep behind a login, so you know that only the invited people are using them?
Well, then an 18-month project to engineer, develop, debug and deploy a system like this is probably going to be overkill.
In this case, perhaps you can just do your best to prevent the average person from stealing your content or using your discounted prices, and accept that the people who take the time to dig in and reverse-engineer everything are going to find a way to get what they want, 95 times out of 100.
In that case, perhaps just putting a login div over top of the page IS what you're looking for...
If you're dealing with, say, a company back-end, or with company financials, or end users' private data, or anything of the sort, then aside from meeting the legal requirements for collection/display/storage, how much extra work you put into the security of the system depends on how much your company is willing to pay for it.
If it makes you feel better, there are companies out there that pay $60,000-$150,000 a year, to use JS tracking/testing programs from Adobe. Those programs sit right there, on the webpage, most of the time, for anybody to see, as long as you know where to look.
So this isn't exactly an unknown problem.

Yes, it is. On authentication (login) you can set a "loggedIn" cookie, which you delete at session end (logout or closing the browser). You can use that cookie to check whether somebody is logged in or not. If they are not logged in, then you can display the login page and send the login request with AJAX. By the way, it is not good practice to build hybrid applications like that; it is better to use an SPA with a REST service, or to implement this on the server side.
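A minimal sketch of that cookie check (the cookie name and login endpoint are invented; note the cookie only drives the UI, and the server still has to validate the session on every request, since anyone can set this cookie by hand):

    function isLoggedIn() {
        return document.cookie.split("; ").some(function (c) {
            return c.indexOf("loggedIn=") === 0;
        });
    }

    if (!isLoggedIn()) {
        $("#content").hide();
        $("#loginForm").show().on("submit", function (e) {
            e.preventDefault();
            $.post("/login", $(this).serialize())   // hypothetical endpoint
                .done(function () { window.location.reload(); });
        });
    }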

Related

How to stop users from manipulating the popup and at the same time let Googlebot crawl my page

I have a very confusing problem.
I have a page which only allows paid users to view it. So if the user is not valid, I use a popup with a grey background to block the user from viewing the page. However, there is a potential flaw with this: a clever user can find a workaround and bypass the popup using the browser's inspect-element tools. Another solution which comes to my mind is to redirect the user to another page instead of showing the popup, like:
window.location = "http://www.example.com";
However, there is a potential problem with this (or maybe I am wrong here):
I think that this way Google's bots won't be able to crawl that page, since the redirection happens, whereas with the first approach Google will definitely be able to crawl the page.
Now my question is: if I use the first approach, is there any way to stop the user from manipulating the popup, or any way I can distinguish whether a user or Googlebot is browsing the page?
Also, if I use the second approach, will Googlebot be able to crawl the page?
You can't implement a paid block, or any kind of truly secure/working blocking, on the frontend. I would suggest preventing access to that page on the backend.
There's no clean, 100%-working way to do this on the frontend. The user can always bypass it.
As for Google, it will be able to crawl the page, since the content is still accessible in the rendered HTML; it does not care how the page is shown. It gets the content anyway, just as you would by fetching the HTML with a GET request outside a browser.
You could indeed just redirect, but again, do it on the backend, not the frontend.
Your current solution does not make the page private: as you rightly point out, anyone can manipulate the page using the dev tools, and crawlers can read the whole source anyway. Using server-side scripts to block access, and/or varying the content based on an authorisation token, is the only way to secure it properly and ensure that only your legitimate paying users get privileged access.
You state a concern that Google (and other search engines, I assume) would be unable to crawl the page if you employ better security. But your logic is flawed: if you make it so that Googlebot can still crawl the page, then by definition it must be readable without authorisation. Anyone could view it in the Google cache, and parts of its content could show up in Google searches. This means it isn't private. Once that's the case, then what are your users paying for, exactly?
What you might realistically want to do is have a cut-down version of the page that is displayed when the user is not authorised, containing enough information for search engines to get an idea of the overall content, and for visitors to be tempted into paying for the rest. Then if the user logs in, the server recognises that and displays the rest of the content as well when the page refreshes. That appears to be roughly what paid-content news sites do, for instance.
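A rough server-side sketch of that teaser approach, in Node/Express purely for illustration (the session handling, route, and helper names are assumptions; the same logic works in any server-side language):

    const express = require("express");
    const session = require("express-session");

    const app = express();
    app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

    app.get("/article/:id", (req, res) => {
        const article = loadArticle(req.params.id);   // hypothetical data access
        if (req.session.paidUser) {
            res.render("article", { article });       // full content for paying users
        } else {
            // Teaser only: crawlers and anonymous visitors get this version,
            // so the paid content never reaches an unauthorised client.
            res.render("teaser", { excerpt: article.excerpt });
        }
    });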

Block the current user's IP using JavaScript

I have a website where you're able to advertise things. The problem is that people are able to do it more than once. Is there a way to let people visit the website, but redirect them to another page saying "you have already advertised" when they come back? People are still able to use VPNs, but I have a way to stop that.
How can I use JavaScript or PHP to record the user's IP the first time they visit the website, and then redirect them to a page saying they have already advertised if they leave the website or reload the page? Is this too much work?
Technically yes, you could use JS and PHP to grab a user's IP address and work with it in a database but proxies and dynamic IPs would make it a very easy check to circumvent. You can also use PHP to create a persistent cookie to identify the user and his/her actions and see if you're getting a returning visitor who posted an ad, but cookies can easily be deleted.
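For illustration, a minimal Node/Express sketch of those two checks together (the route, cookie name, and lookup helpers are invented; it assumes cookie-parser middleware is configured, and as noted, both signals remain trivial to evade):

    app.post("/advertise", (req, res) => {
        // Returning advertiser? Check the persistent cookie and the IP log.
        const returning = req.cookies.hasAdvertised === "1" || ipSeenBefore(req.ip); // hypothetical DB lookup
        if (returning) {
            return res.redirect("/already-advertised");
        }
        recordIp(req.ip);                              // hypothetical DB insert
        res.cookie("hasAdvertised", "1", { maxAge: 365 * 24 * 3600 * 1000 });
        // ...store the ad...
        res.redirect("/thanks");
    });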
So it's not that what you're trying to do is too much work, it's that it's fairly easily circumvented and not very reliable. Your best bet is an authentication system that requires a valid login to post an ad, logging what the advertisers do, and creating logic which will disallow spammy behavior based on your logs.
You won't be able to stop abuses by very, very determined users but you can make it harder and make them think twice about whether it's worth investing all that time and effort into spamming on your site when there are bound to be much softer targets, giving you the time to deal with the most egregious cases personally instead of trying to stop a torrent of spammy ads.
You cannot stop people from doing that 100% for sure.
If you block their IPs, they use a proxy.
If you use a session, they change their browser or reset it to default.
If you block their hardware, the way Facebook has blocked hard-disk serial numbers, they use VPN servers.
And so on.
There is no way, bro.
Ask for payment instead of making it free.

Preventing fraud by visitors using Firebug or other consoles

I am trying to prevent fraud in a web project I am building.
The project is a game which involves multiple websites.
Each website makes an AJAX call on every pageview to a page on my server, asking for a status update on the game.
The response page, let's say www.domain.com/response.cfm (it is ColdFusion), normally returns nothing, but at a certain point in time within the game's timeframe it will return a JSON string with information.
This information is then used by the script that is included on the websites.
So if website A has been viewed 100 times (across all of its pages), that will generate 100 AJAX calls.
The problem I have is that a robot could poll the AJAX destination too, and much faster. Now, I can detect a robot, or make things difficult for one by using a session or checking for cookies, BUT...
the biggest issue is that I found out you can do a lot in the Firebug script console, or the Safari console. Probably Chrome's too.
With the console, they can even evade the cross-domain restriction. I created a simple script that does a couple of calls to the AJAX page, and when I go to the same domain first and then use the console... there is no cross-domain limitation. And you can execute all kinds of JavaScript, so in essence someone like me could commit fraud in the game by using the JavaScript console, which masks him as a regular browser user.
My question now is: does anyone know how to prevent this? I tried to disable the usage of the console, but I don't think I can. It may be possible to detect whether the console is active and then disable MY scripts so the game doesn't work, but I think they could load the script source into the console manually, and then the game would work anyway.
The console looks like a beautiful thing, but it is a nightmare for me now as I try to prevent people from cheating in the game I am creating.
I hope someone has suggestions.
PS: of course I am trying to implement some server-side checks to detect cheating, but most of the time they are not real-time.
UPDATE 19/3/2012
The fraud that I am trying to prevent is cheating in the game by polling the page that generates the logic for the next step of the game. This is a server-script page which generates JSON that triggers a change on the website the game is played on. For your information, the websites that are involved have a script in their header, like Google Analytics, so they communicate with my server on every pageview.
Polling that server page can reveal information which would give cheaters knowledge or progress.
So I have to prevent people from gaining knowledge ahead of other, honest players by monitoring the server page, which reveals information at a certain time. I don't want them auto-polling it so that, when info is revealed, they send themselves a notification and check the website.
So what I will do is make sure that people who generate too many pageviews per second are blocked. Plus, you need a cookie to be able to join in, and you only get a cookie by logging in. Hopefully this will give me enough tools to make it as robust as possible.
Thanks for all your knowledge, people.
It would be very, very difficult to disable web consoles across the majority of browsers, and anyone who managed to do this would probably be exploiting a browser bug. But read on...
First rule of web programming: you can never trust anything you receive from the web client. Anything that gets sent to your server might have been forged or altered, intentionally or unintentionally, and even if you did manage to block the web console, what's to stop me from opening the page in a different browser, one that keeps its console available no matter what the website does? So that's out. As @DCoder mentions in the comments, there are other methods as well, including browser extensions, which allow user-defined JavaScript to be executed.
So any checking you do has to be server-side. I know you're trying to do some checking already, and it's hard to give advice without more specifics. That said, one way to do this, as far as I can see right now, is to issue each client an ID and store it in a database somewhere. The IDs can't be sequential, and make sure they're not trivially forgeable even if someone collects a bunch of different IDs (for example, you might salt the username and then hash it). Each time a request is made to the server, only issue a response if that client's last request was more than 500 ms ago, and update the database accordingly. Expire the ID on logoff or after some period of time.
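A sketch of that throttle, in Node purely for illustration (the 500 ms window and the salted-hash ID come from the suggestion above; the function names and the in-memory store are assumptions, and a real deployment would use a database):

    const crypto = require("crypto");

    // Non-sequential, hard-to-forge client ID: a keyed (salted) hash of the username.
    function issueClientId(username, serverSecret) {
        return crypto.createHmac("sha256", serverSecret).update(username).digest("hex");
    }

    const lastRequestAt = new Map(); // clientId -> timestamp; a DB table in production

    // Only answer if this client's previous request was more than 500 ms ago.
    function allowRequest(clientId) {
        const now = Date.now();
        if (now - (lastRequestAt.get(clientId) || 0) < 500) return false;
        lastRequestAt.set(clientId, now);
        return true;
    }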
The first thing you should think about is securing your server, not the client. It's impossible to hide client code from the client. While it might arguably help prevent a few people who want to cheat from cheating, it's not your primary objective. You have to do this from the server side. This means validating the requests on the server to ensure that they conform to your expectations to some degree.
Game companies will:
Require user authentication of some kind, so they can identify users
Create some rules about what is possible. For example, the laws of physics should apply, so you can tell when someone has cheated; something they can validate as human activity (see the sketch after this list)
Ban people who cheat
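As a toy example of such a rule (all numbers and names here are invented): reject any position update that implies an impossible speed.

    // Toy server-side plausibility check: no player can legitimately move
    // faster than MAX_SPEED units per second, so anything faster is cheating.
    const MAX_SPEED = 10;

    function isPlausibleMove(prev, next) {
        const dx = next.x - prev.x;
        const dy = next.y - prev.y;
        const dt = (next.timestamp - prev.timestamp) / 1000; // seconds
        if (dt <= 0) return false;                           // clock went backwards: reject
        return Math.sqrt(dx * dx + dy * dy) / dt <= MAX_SPEED;
    }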
If you are not sending data continuously over the network, then you have an issue which is unsolvable unless you are willing to make checks on the server securely and continuously over the course of the game. This will increase server load, but that's the unfortunate cost of preventing cheats.

Facebook signed_request data and some security concerns

We've just developed a small Facebook puzzle through which people can win gifts from our customer. I'd like to ask a few questions, since I'm pretty stuck despite having tried lots of things. First I'll describe what we have, and then I'll explain our problems.
What we did so far:
The root of the application (/) checks for signed_request in the POST params and extracts information from it, to see whether we've registered the logged-in user in our database. These checks are also used to determine whether the request was sent from Facebook, so we can reject requests coming from outside Facebook (I will explain below why we want this).
Once the application has rendered successfully, the Facebook JS API takes over, does its checks, and sets the fbsr cookie. We use that cookie's information while processing AJAX requests, to check that the request really belongs to the logged-in user (e.g. that submitted scores belong to the logged-in user); a sketch of that signature check follows this list.
We implemented CSRF protection, plus a further check that requests are POST and, more specifically, AJAX requests, returning a 40x status if not.
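(A sketch of verifying a signed_request server-side, in Node purely for illustration. Facebook documents the format as base64url(signature) + "." + base64url(JSON payload), signed with HMAC-SHA256 using the app secret; treat this as a sketch, not a drop-in.)

    const crypto = require("crypto");

    function base64UrlDecode(s) {
        return Buffer.from(s.replace(/-/g, "+").replace(/_/g, "/"), "base64");
    }

    // Returns the decoded payload if the signature checks out, otherwise null.
    function parseSignedRequest(signedRequest, appSecret) {
        const [encodedSig, payload] = signedRequest.split(".");
        const expected = crypto.createHmac("sha256", appSecret).update(payload).digest();
        const actual = base64UrlDecode(encodedSig);
        if (actual.length !== expected.length) return null;
        if (!crypto.timingSafeEqual(actual, expected)) return null;
        return JSON.parse(base64UrlDecode(payload).toString("utf8"));
    }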
Problems:
Although I do some checks to prevent spoofed scores, I couldn't think of a way to stop the logged-in user from improving his own score by simply calling the same JS code I call for real scores. We ignored this for some time, until we found out that some people seem to be taking advantage of this bug.
One approach I thought of is to ignore all requests except those coming from Facebook. Since cross-site AJAX requests are blocked, we should have been safe. However, this led to another problem: once we redirect users to, e.g., the leaderboard, the signed_request data is lost, and our index page returns a 40x when the user tries to go back, because our application thinks the user is visiting it from outside Facebook.
I hope I have made our problem clear. Gaming time is calculated in Flash (the game is programmed in AS3) and is sent via JavaScript methods to the server side. We could have done it all in Flash, but that would only keep the exploit from being trivial, not prevent it. After all, we'd have the same problem if we had implemented the game in HTML5.
Any thoughts, suggestions are really welcome and thanks for your feedback!
This is a bug by design. You are calculating the scores on the client side and then sending them to the server. The server has no way to validate whether a score is correct. This can ALWAYS be faked by clever users.
Never, ever calculate things that could give users an advantage on the client side. The client side is evil: everything on the client side can be manipulated, no matter how hard you try.
Calculate your scores on the server, and use the client side only to display them. Every other solution is crackable.
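A minimal sketch of that, with the timing moved entirely to the server (Express-style; all route and helper names are invented, and it assumes session middleware is configured): the client only reports "start" and "finish", and the score is derived from the server's own clock.

    // The server records when the game started; the elapsed time (and thus
    // the score) comes from the server's clock, never from the client.
    const games = new Map(); // session id -> start timestamp

    app.post("/game/start", (req, res) => {
        games.set(req.session.id, Date.now());
        res.sendStatus(204);
    });

    app.post("/game/finish", (req, res) => {
        const startedAt = games.get(req.session.id);
        if (!startedAt) return res.sendStatus(400);
        games.delete(req.session.id);
        const score = computeScore(Date.now() - startedAt); // hypothetical scoring rule
        saveScore(req.session.id, score);                   // hypothetical persistence
        res.json({ score });
    });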

How do end users (hackers) change jQuery and HTML values?

I've been looking for better ways to secure my site. Many forums and Q/A sites say jQuery variables and HTML attributes can be changed by the end user. How do they do this? If they can alter data and elements on a site, can they insert scripts as well?
For instance, I have two jQuery scripts on my home page. The first is a "member only" script and the second is a "visitor only" script. Could an end user log into my site, copy the "member only" script, log off, and inject the script so that it runs as a visitor?
Yes, it is safe to assume that nothing on the client side is safe. Using tools like Firebug for Firefox or Developer Tools for Chrome, end users are able to manipulate (add, alter, delete):
Your HTML
Your CSS
Your JS
Your HTTP headers (data packets sent to your server)
Cookies
To answer your question directly: if you are solely relying on JavaScript (and most likely cookies) to track user session state and deliver different content to members and guests, then I can say with absolute certainty that other people will circumvent your security, and it would be trivial to do so.
Designing secure applications is not easy, a constant battle, and takes years to fully master. Hacking applications is very easy, fun for the whole family, and can be learned on YouTube in 20 minutes.
Having said all that, hopefully the content you are keeping in the JS is not mission-critical or sensitive data. If it is, I would seriously weigh the cost of hiring a third-party developer who is well versed in security to come in and help you out. Because, like I said earlier, creating a truly secure site is not easily done.
Short Answer: Yes.
Anything on the users computer can be viewed and changed by the user, and any user can write their own scripts to execute on the page.
For example, you will up vote this post automatically if you paste this in your address bar and hit enter from this page:
javascript: $('#answer-7061924 a.vote-up-off').click();
It's not really hacking, because you, the end user, are running the script yourself, doing only actions an end user can normally do. If you allow end users on your site to perform actions that affect your server in ways they shouldn't be able to, then you have a problem. For example, if I had a way to make that JavaScript execute automatically, instead of you having to run it yourself from your address bar, everyone who came to this page would automatically upvote this answer, which would be (obviously) undesired behavior.
Firebug and Greasemonkey can be used to replace any JavaScript: the nature of the browser as a client is such that the user can basically have it do anything they want. Your specific scenario is definitely possible.
Well, if your scripts are public and not protected by the server side, then an attacker can simply run them in a browser such as Mozilla.
You should always keep your protected content behind server-side scripting and grant access via the session (or some other server-side method).
Yes, a user can edit scripts; however, all scripts run on the user's machine, meaning that anything they alter will only affect their machine and not any of your other visitors.
However, if you have paid content which you deliver using a "members-only" script, then it's safest to use server-side technology to distribute that members-only content rather than relying on client scripts to secure it.
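One hedged sketch of what that can mean in practice (Express-style, with invented paths, assuming session middleware): gate the members-only script itself behind the session, so visitors never receive it and there is nothing on the client to copy.

    app.get("/js/members.js", (req, res) => {
        if (!req.session.member) return res.sendStatus(403); // visitors get nothing
        res.sendFile("/srv/app/private/members.js");          // members get the script
    });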
Most security problems occur when the client is allowed to interact with the server and modify data on the server.
Here's a good bit on information you can read about XSS: http://en.wikipedia.org/wiki/Cross-site_scripting
To put it very simply:
The web page is just an interface for clients to use your server. It can be altered in all possible ways and anyone can send any kind of data to your server.
First, you have to check that the user sending data to your server has the privileges to do so. This is usually done by checking against the server session.
Then you have to check, on the server's end, that you are taking only the data you want, nothing more and nothing less, and that the data is valid, by validating it on your server.
For example, if there is a mandatory field in a form that the user has to fill out, you have to check that its data is actually sent to the server, because the user may simply delete the field from the form and submit it without.
Another example: if you are dynamically adding data from the form to the database, the user may just add a new field, such as "admin", set it to 1, and submit the form. If you then have an "admin" field in the database, that user has just made himself an admin.
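A small sketch of defending against that trick: copy only an explicit whitelist of fields from the submitted form, so a smuggled "admin" field is simply dropped (plain JavaScript; the field names are examples).

    // Keep only the fields the form is supposed to contain; anything else
    // (e.g. a smuggled "admin" flag) never reaches the database.
    function pickAllowedFields(body, allowed) {
        const clean = {};
        for (const field of allowed) {
            if (Object.prototype.hasOwnProperty.call(body, field)) {
                clean[field] = body[field];
            }
        }
        return clean;
    }

    // e.g., inside a request handler:
    // const safe = pickAllowedFields(req.body, ["name", "email", "message"]);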
One of the most important things to remember is to avoid SQL injection.
There are many tools for this, made for web developers to test whether their sites are safe; Hackbar is one example.
