How can I detect a successful login in my firefox extension? - javascript

I am trying to create a Firefox extension to detect whether someone successfully logs into a site, but I am having a little difficulty settling on an algorithm to do this.
My thought right now is to use JavaScript to accomplish this: first check that the user is on a page with a login (for all intents and purposes, a password field), then, after a login attempt has occurred, check whether it succeeded by looking to see if a password field is still on the page.
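A minimal sketch of that heuristic, assuming it runs as a content script injected into every page (the sessionStorage flag name is just illustrative):

function hasPasswordField() {
  return document.querySelector('input[type="password"]') !== null;
}

// Flag a login attempt: a submitted form that contains a password field.
document.addEventListener('submit', function (event) {
  if (event.target.querySelector &&
      event.target.querySelector('input[type="password"]')) {
    sessionStorage.setItem('loginAttempted', '1');
  }
}, true);

// On the next page load, a prior login attempt plus no password field
// is treated as a (probable) successful login.
if (sessionStorage.getItem('loginAttempted') === '1' && !hasPasswordField()) {
  console.log('Heuristic: login appears to have succeeded');
  sessionStorage.removeItem('loginAttempted');
}

One caveat: sessionStorage is per-origin, so a login that redirects to a different domain would lose the flag.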
Another idea would be to compare the URL before and after the login and see if they are different, yet within the same domain. This, however, has drawbacks for sites like Facebook, where the login page and the landing page are the same.
Does anyone have any other ideas on how I might make this detection?
Thanks in advance!

You pretty much can't make a generalized detection algorithm that would work on each page. As you yourself mentioned, different pages have different schemes.
Even in the scheme where the login and landing page are different, how would you know whether the post-login page displays an error or notifies somebody of a successful login?
The first method you mention actually has some merit and might work for most sites, but there is a good chance you will run into problems with logins that use Facebook or Google Accounts authentication: there are multiple redirects, and a password input may never appear at all (if I am already logged in to Google Accounts, then just choosing Google as my ID provider logs me in to Stack Overflow).
If you can handle the above-mentioned case with a workaround (checking redirects for specific providers; there are not many of them, so you could cover most of the cases), then yes, your first solution combined with this could provide a workable method.
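A rough sketch of that provider check, where the host list is purely illustrative and would need to cover whichever identity providers you care about:

// Treat navigation through a known identity provider's domain as part of
// a login flow, even if the observed site never shows a password field.
var KNOWN_AUTH_HOSTS = [
  'accounts.google.com', // Google Accounts
  'www.facebook.com'     // Facebook Connect
];

function isAuthRedirect(url) {
  try {
    return KNOWN_AUTH_HOSTS.indexOf(new URL(url).hostname) !== -1;
  } catch (e) {
    return false; // not a parseable URL
  }
}

// e.g. inside the extension's navigation listener:
// if (isAuthRedirect(navigatedUrl)) { loginFlowInProgress = true; }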

Related

Problems with Disqus OAuth2 flow

Since posting on the Disqus forum seems to be a waste of time, maybe someone here can help.
I'm trying to use the OAuth2 flow to connect a Disqus user to my app's account system so I can monitor their activity (posts/comments/etc.). I'm using Meteor for my app. I'm calling the OAuth2 authorize endpoint from my server code and passing the resulting HTML back to the client for rendering. This all works fine, but I'm seeing two problems on the client side. First, the HTML returned from Disqus seems to be designed as a full page, and the username/password fields extend across the entire window. I was expecting a dialog/modal popup like the one Disqus provides when logging into a forum. I tried wrapping the HTML inside a Bootstrap 3 modal window, which mostly works, except the username and password fields extend off the right side of the dialog box.
Ignoring the ugly UI, the second problem is that when the user clicks the submit link, Disqus puts up an error page titled 'CSRF verification failed (403) - DISQUS'. I'm guessing this may be because the OAuth2 call was made from the server and the submit is coming from the client. If I copy the OAuth2 URL directly into the browser, everything works fine. But I don't want to expose my API key and the resulting code on the client side, since that seems like a security risk.
All I really want to do is verify that the user is trying to connect their own account to my app (and not some other user's). I'm not posting with their account, so I don't need an access token (I'm calling user/details, which just takes the API key). I've thought about creating a forum for my app and using the login endpoint to verify the username/password combo, but that dialog doesn't explain the scopes I'm asking for.
I've also considered building my own dialog box to prompt for the username/password, sending those back to the server, and having the server "fake" the submit back to Disqus. But that is not a maintainable solution, since Disqus might change the expected fields at any time. And it is ugly as sin.
Anyone have any suggestions? I didn't post any code since I don't believe it is a coding problem (and the code is a bit convoluted), but if anyone thinks it will help you help me, I'll be happy to post it. And yes, I'm aware that not posting the code violates Stack Overflow conventions, but I'm taking a chance that the powers that be will allow this post, since Disqus support is non-existent and I don't know where else to reach out.
The basic problem was that I was using 'request' with redirect-following enabled, so instead of getting the Disqus URL I was getting the Disqus authentication page itself. You need to render the authentication URL in a window, not its contents. That fixes the CSRF problem.
The next problem is that the URL returned by getAuthorizationUrl is bad: it is of the form 'nullhttps:...'. No idea where the 'null' comes from, but stripping it off fixes that problem.
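Roughly, the fix looks like this; the call shape around getAuthorizationUrl is an assumption on my part, and only the string cleanup and the client-side window.open are the point:

// Server side: get the auth URL, not the page behind it.
var authUrl = disqus.getAuthorizationUrl(); // comes back as 'nullhttps://...'
if (authUrl.indexOf('null') === 0) {
  authUrl = authUrl.slice('null'.length); // strip the stray prefix
}

// Client side: let the browser navigate to Disqus itself, which keeps
// the CSRF token valid, instead of rendering fetched HTML:
// window.open(authUrl, 'disqus-auth', 'width=600,height=500');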
To make things easier for anyone looking to do this, there is a shiny new version of the Disqus NPM package that includes OAuth authentication methods: https://www.npmjs.com/package/disqus.

How to stop users from manipulating the popup while still letting Googlebot crawl my page

I have a very confusing problem.
I have a page that only allows paid users to view it. If the user is not valid, I use a popup with a grey background to block the user from viewing the page. However, there is a potential flaw with this: a clever user can find a workaround and bypass the popup using inspect element. Another solution that comes to my mind is to redirect the user to another page instead of showing a popup, like:
window.location = "http://www.example.com";
However, there is a potential problem with this, or maybe I am wrong about it:
I think that this way Google's bots won't be able to crawl the page, since a redirection happens, whereas with the first approach Google will definitely be able to crawl it.
Now my question is: if I use the first approach, is there any way to stop the user from manipulating the popup, or any way I can distinguish whether the page is being browsed by a user or by Google?
Also, if I use the second approach, will the Google bot be able to crawl the page?
You can't implement a paid block, or any kind of truly secure blocking, on the frontend. I would suggest preventing access to that page on the backend.
There's no clean, 100%-reliable way to do this on the frontend; the user can always bypass it.
As for Google, it will be able to crawl the page, since the content is still accessible in the rendered HTML; the crawler does not care how the page is shown. It gets the content anyway, just as you would by fetching the HTML with a GET request outside a browser.
You could indeed just redirect, but again, do it on the backend, not the frontend.
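A minimal sketch of that backend check, assuming an Express app with session middleware already configured (the isPaidUser flag is illustrative):

const express = require('express');
const app = express();
// ...session middleware configured here...

app.get('/premium-page', (req, res) => {
  // Unauthenticated or unpaid users never receive the protected HTML.
  if (!req.session || !req.session.isPaidUser) {
    return res.redirect('/subscribe');
  }
  res.render('premium-page'); // full content, paid users only
});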
Your current solution does not make the page private: as you rightly point out, anyone can manipulate the page using the dev tools, and crawlers can read the whole source anyway. Using server-side scripts to block access, and/or varying the content based on an authorisation token, is the only way to secure it properly and ensure that only your legitimate paying users get privileged access.
You state a concern that Google (and other search engines, I assume) will be unable to crawl the page if you employ better security. But your logic is flawed: if a Google bot can still crawl the page, then by definition it must be readable without authorisation. Anyone could view it in the Google cache, and parts of its content could show up in Google searches. This means it isn't private, and once that's the case, what are your users paying for, exactly?
What you might realistically want to do is have a cut-down version of the page that is displayed when the user is not authorised, containing enough information for search engines to get an idea of the overall content, and for visitors to be tempted into paying for the rest. Then, if the user logs in, the server recognises that and serves the rest of the content as well when the page refreshes. That is roughly what paid-content news sites do, for instance.
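A sketch of that teaser approach, reusing the same hypothetical Express setup as in the earlier sketch (loadArticle and the field names are illustrative):

app.get('/article/:id', (req, res) => {
  const article = loadArticle(req.params.id); // hypothetical data access
  const isPaid = req.session && req.session.isPaidUser;
  res.render('article', {
    title: article.title,
    summary: article.summary,           // always present, so crawlable
    body: isPaid ? article.body : null  // full text only for paid users
  });
});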

CasperJS: Amazon infinite Captcha Login

I am using CasperJS to log into my Amazon account and retrieve some data.
But once in a while I get captchas on the login. CasperJS displays the captcha to me and I manually return the solution so it can submit the form.
The problem is that CasperJS immediately gets another captcha, this time more difficult. I solve this one too, but another captcha appears... and so on, indefinitely...
I don't do anything special, just some CasperJS fill and click.
CasperJS loads an external JS file containing the captcha solution into the page, and then submits.
I am sure that the right captcha is submitted.
How can Amazon be so sure I'm a robot that it traps me in an infinite loop?
Consider how it looks from their point of view. They can tell a robot is accessing your account based on mouse and keyboard interactions. A human will scan the page and move their mouse randomly while searching for the login buttons. Your script jumps directly to clicking the selector.
When a captcha appears, you fill it in. This does not prove you are a human. This simply proves that your robot can alert you to a captcha for a human to fill in. The rest of the interactions are all done by a robot, and Amazon is fully aware of this. You can answer as many captchas as you like, but the interactions to get this far are still going to be flagged as a robot.
You may want to go down a different route, like starting the CasperJS session with a cookie from a session where you're already logged in. Alternatively, does Amazon provide any sort of API to pull out the value you're interested in?
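A minimal sketch of the cookie route, assuming you have logged in manually once in a run whose cookies were saved to a file (PhantomJS, which CasperJS runs on, accepts a --cookies-file option):

// Run with: casperjs --cookies-file=amazon-cookies.txt script.js
var casper = require('casper').create();

casper.start('https://www.amazon.com/gp/css/order-history', function () {
  // With a still-valid saved session we land here directly;
  // otherwise Amazon would show the sign-in page instead.
  this.echo(this.getTitle());
});

casper.run();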
They're blocking your robot out of genuine love and concern, if that makes you feel any better!
Unfortunately this is not an exact science, so there is probably no general, durable solution. Amazon.com uses different techniques to check whether you are a robot, including browser fingerprinting, cookie challenges and user behavior profiling (mouse movements and so on).
I would first try randomizing part of the user agent, just to see if that works. I would also try a fuller browser engine like Chromium, using Selenium to let the script talk to it.
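Something like this, where casper.userAgent() is a standard CasperJS call and the UA strings are just examples to rotate through:

var casper = require('casper').create();

// Pick a different user agent on each run.
var userAgents = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/603.2.4 (KHTML, like Gecko) Version/10.1.1 Safari/603.2.4'
];
casper.userAgent(userAgents[Math.floor(Math.random() * userAgents.length)]);

casper.start('https://www.amazon.com/', function () {
  this.echo('Loaded with UA: ' + this.page.settings.userAgent);
});

casper.run();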
Can I ask how frequently you are trying to crawl your account? I think it shouldn't be a big deal if you are doing it once a day or so.

Block the current user's IP using JavaScript

I have a website where you're able to advertise things. The problem is that people are able to do it more than once. Is there a way to let people visit the website, but redirect them to another page saying "you have already advertised" when they come back? People are still able to use VPNs, but I have a way to stop that.
How can I use JavaScript or PHP to record the user's IP the first time they visit the website, so that if they leave the website or reload the page they are redirected to another page saying they have already advertised? Is this too much work?
Technically yes, you could use JS and PHP to grab a user's IP address and store it in a database, but proxies and dynamic IPs make that a very easy check to circumvent. You can also use PHP to create a persistent cookie to identify the user and their actions, to see whether a returning visitor has already posted an ad, but cookies can easily be deleted.
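A hedged Node/Express version of that idea (the answer above mentions PHP; the logic is the same, and the store and cookie names are illustrative):

const express = require('express');
const cookieParser = require('cookie-parser');
const app = express();
app.use(cookieParser());

const seenIps = new Set(); // in production, a database table

app.post('/advertise', (req, res) => {
  // Either signal alone is weak: IPs rotate, cookies get deleted.
  if (seenIps.has(req.ip) || req.cookies.hasAdvertised === '1') {
    return res.redirect('/already-advertised');
  }
  seenIps.add(req.ip);
  res.cookie('hasAdvertised', '1', { maxAge: 365 * 24 * 3600 * 1000 });
  res.redirect('/thanks');
});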
So it's not that what you're trying to do is too much work, it's that it's fairly easily circumvented and not very reliable. Your best bet is an authentication system that requires a valid login to post an ad, logging what the advertisers do, and creating logic which will disallow spammy behavior based on your logs.
You won't be able to stop abuse by very, very determined users, but you can make it harder and make them think twice about whether it's worth investing all that time and effort into spamming your site when there are bound to be much softer targets. That gives you the time to deal with the most egregious cases personally instead of trying to stop a torrent of spammy ads.
You cannot stop people doing that, 100% for sure.
If you block their IPs, they use a proxy.
If you use sessions, they change their browser or reset it to default.
If you block their hardware (the way Facebook blocks a hard disk serial), they use VPN servers.
If...
There is no way, bro.
Ask for payment instead of making it free.

Facebook Connect for one application with multiple domains?

I'm implementing a plug-in that's embeddable in different sites (à la Meebo, Wibiya), and I want to use Facebook Connect. The plug-in is supposed to be embeddable in sites with different domain names. Problem is, Facebook Connect allows only one domain per registered application.
The question is, how can I have multiple domains for a single Facebook application, assuming:
When users "Allow" the application on one site, they won't have to "Allow" it on other sites as well.
Preferably, after the initial log-in, users won't see a pop-up opening on every site they log in to (i.e. I'd rather not open a link to my domain and do the log-in process from there).
Is there any way of doing that?
If not, is my only option to manage all the log-ins from a single domain and pass the cookies back to the original domains?
And if I pass the cookies between domains, how can I be sure that Facebook won't block this kind of behavior in the future?
I'd appreciate any suggestions, though I'd prefer an official solution over hacks, if at all possible.
I'm assuming you are using facebook.php by Naitik Shah? Your widget would need to be on every page, of course, and include the async connect-js script.
I am currently developing a facebook login based application myself.
I would say the best solution is to log in through your own domain and pass the cookie. Your app/widget will be the only one they allow to share information with, and nothing should differ in operation from a single-domain solution. I envisage a PHP plugin which executes a login from an outside domain and passes the cookie through to the site via the widget. Return the cookie securely however you wish (not something dodgy like storing it in a div and retrieving it, or anything else a hacker could spoof). The site then uses the cookie for account and user-ID purposes, and the widget controls all login actions and session handling using the async script (but routed through a different domain).
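A rough sketch of the client side of that hand-off using postMessage, where the auth domain and payload shape are purely illustrative:

// --- widget code running on the embedding site ---
var frame = document.createElement('iframe');
frame.src = 'https://auth.example-widget.com/session'; // hypothetical central login domain
frame.style.display = 'none';
document.body.appendChild(frame);

window.addEventListener('message', function (event) {
  // Only trust messages from our own auth domain.
  if (event.origin !== 'https://auth.example-widget.com') return;
  var token = event.data.sessionToken;
  // ...use the token against your own API to identify the user...
});

// --- inside the page served at auth.example-widget.com/session ---
// After validating the Facebook cookie server-side, it runs:
//   parent.postMessage({ sessionToken: '...' }, embeddingOrigin);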
Sorry I can't be more help, but this is the only solution I can think of, and it seems you had already arrived at it anyway.
In terms of keeping session control across different domains, you only need the third-party cookie to be active. Once your page is activated for your domain, you will already have the cookie for that domain, provided you haven't logged out and it hasn't expired. That's a benefit of using an outside management domain.
This also seems more reliable than any hack for multiple domains, because I would expect Facebook and OAuth 2.0 to be fine with an approved party sharing info (cookies) with another party the approved party has approved. But it could be problematic if they think the user will have privacy issues, because you could potentially share the cookie on any site without the user's permission. So you have to be careful to notify users about all the sites they will be auto-logged into, and to treat them with respect.
Good luck with it, hope you let us know how it goes.
There is an easy and clean technique: Single Sign-On (SSO). You can search for more about it.
