Is there any way to skip the Google Captcha in Protractor? - javascript

I am trying to automate the registration screen of my Angular app, but I am not able to get to the Submit button because of the Google captcha. Is there any method to skip the Google captcha and proceed to click the Submit button in Protractor? I tried switching into the captcha's iframe, but it didn't work out. Please help.

CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) are designed to be a gate that lets humans through and keeps robots (programs) out.
For reCAPTCHA v2 the story is a little different. You're still engaging in the same two-step challenge/verify process, but you're sending different data. In this case you need to send the reCAPTCHA sitekey, which can be found on the containing div element, regardless of whether or not the iframe has loaded.
The response you get is a token that needs to be submitted alongside the form; it goes into a hidden text field with the ID g-recaptcha-response.
TL;DR: you can't really bypass CAPTCHAs (that is basically the whole idea of them).
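For reference, here is a minimal sketch of the two pieces described above, assuming a standard v2 embed; the solver handoff is hypothetical, since something (usually a human) still has to produce the token:

// Read the sitekey from the containing div (standard v2 embed markup).
const sitekey = document.querySelector('.g-recaptcha').dataset.sitekey;
console.log('sitekey to hand to a solving service:', sitekey);

// Later, once a (human) solver returns a token, it must end up in the
// hidden g-recaptcha-response field before the form is submitted.
function injectToken(token) {
  document.getElementById('g-recaptcha-response').value = token;
}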

Related

Zapier webhook url alternative

We are using Zapier's webhooks to collect answers we gather via HTML5-formatted mail messages. The mail is created with Zapier and includes some dynamic info and two buttons for making a choice. Each button action is linked to a different webhook. The webhook URL provided by Zapier has a static part and a dynamic part, which is a Zapier variable. When the receiver pushes a button, a blank browser window shows up with either a short text or, in "silent mode", a blank page.
Although we send an extra confirmation mail for the decision made, we would like to either avoid showing this blank page altogether or replace it with a customized, more attractive HTML page.
We tried different approaches, such as an additional JavaScript onclick action on the button, to open two pages with one click: one with the blank webhook page as the trigger, and another with a nice confirmation message page for a better user experience. Unfortunately, mail client limitations seem to prevent JavaScript execution.
Is there any workaround, like a third-party service that offers webhook containers with a customizable page behind the webhook URL, or any idea on how to link a button action to two URLs?
Thank you for reading this long question. Any help would be greatly appreciated.
David here, from the Zapier Platform team.
I can't speak to other options, but the best choice for you here is probably to have the email link go to a static page with two buttons. That way you'll be in the browser and can do whatever you need with JS: most likely buttons with onClick handlers that fire the webhook and then show a nice confirmation.
It also avoids false clicks, where a mail client tries to load external resources (which normally doesn't matter, but in this case can trigger a webhook).
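A minimal sketch of such a static page's script, assuming two Zapier catch-hook URLs (the URLs and element IDs below are placeholders):

// Each button fires its own webhook with fetch, then swaps in a
// friendly confirmation so the user never sees a blank page.
const HOOKS = {
  yes: 'https://hooks.zapier.com/hooks/catch/123/abc/', // placeholder URL
  no:  'https://hooks.zapier.com/hooks/catch/123/def/', // placeholder URL
};

async function choose(answer) {
  await fetch(HOOKS[answer], { method: 'POST', body: JSON.stringify({ answer }) });
  document.body.innerHTML = '<h1>Thanks, your choice has been recorded!</h1>';
}

document.getElementById('btn-yes').addEventListener('click', () => choose('yes'));
document.getElementById('btn-no').addEventListener('click', () => choose('no'));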

How Google's reCAPTCHA v3 works

Google has rolled out reCAPTCHA v3. It does away with all the user friction. I wish to use it to secure my site. However, I am unsure about how this is going to protect my site. What if a hacker spams the URLs on my site with an external tool without using the interface I provide? How is reCAPTCHA v3 going to stop that?
How is reCAPTCHA v3 going to stop [spam]?
There are various heuristics which can be used to detect automated systems, such as the number of requests coming from a certain IP, browser fingerprinting, Google account cookies, among many others. Google seems to use some of them. If uncertain, a challenge gets shown.
What if a hacker spams the URLs on my site with an external tool without using the interface I provide?
Google generates a token for the client once they pass the checks, and you have to validate that token on the server side. If someone doesn't pass the CAPTCHA (a robot), they don't have a token.
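A minimal sketch of that server-side validation (Node 18+ for the global fetch; the endpoint is Google's documented siteverify URL, while the 0.5 threshold is an arbitrary choice):

// POST the token to Google's siteverify endpoint with your secret key.
async function verifyToken(token, remoteIp) {
  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET, // your site's secret key
      response: token,
      remoteip: remoteIp, // optional
    }),
  });
  const data = await res.json();
  // For v3, data.score (0.0 to 1.0) is yours to act on.
  return data.success && data.score >= 0.5;
}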
In addition to the user behavior tracking on your site (as explained by Jonas Wilms), v3 (and v2) also make decisions based on your IP, ASN, browser, and whatever else about your system can be gleaned from your HTTP request.
The only difference is that v2 is a complete solution, i.e., if it thinks a user may be a bot, it will pose additional challenges until it is convinced the user is human. v3, on the other hand, is non-intrusive: it generates a score based on the parameters discussed above and passes it on to you. It is then your decision to take appropriate steps (pose challenges, require two-factor authentication, etc.) based on this score.
IMO, it is better to start with a V2 solution and implement V3 if you want more control or have a better way to challenge the user if they have a low score.
(Here is an interesting article on the differences)
In a few simple words, Google tracks all of your cursor and keyboard activity, from moving the mouse to select form fields to pressing Tab to change fields.
To verify reCAPTCHA is working, submit a form and then click refresh; it will ask for re-submission. Click continue. Because this closely resembles robot activity (submitting a form without any cursor or keyboard movement), reCAPTCHA will prevent the form submission, or anything else, from happening.

CasperJS: Amazon infinite Captcha Login

I am using Casperjs to Login in my Amazon Account and retrieve some data.
But once in a while I get captchas on the login. CasperJS displays the captcha to me and I manually return the solution so it can submit the form.
The problem is that CasperJS immediately gets another captcha, this time a more difficult one. I solve this one too, but another captcha appears... and so on, indefinitely...
I don't do anything special, just some casperjs fill and click.
CasperJS loads an external JS file with the captcha solution into the page, and then submits. I am sure that the right captcha solution is submitted.
How can Amazon be so sure to trap me in an infinite loop?
Consider how it looks from their point of view. They can tell a robot is accessing your account based on mouse and keyboard interactions. A human will scan the page and move their mouse randomly while searching for the login buttons. Your script jumps directly to clicking the selector.
When a captcha appears, you fill it in. This does not prove you are a human. This simply proves that your robot can alert you to a captcha for a human to fill in. The rest of the interactions are all done by a robot, and Amazon is fully aware of this. You can answer as many captchas as you like, but the interactions to get this far are still going to be flagged as a robot.
You may want to go down a different route, like having a cookie to start a CasperJS session with your account already logged in. Alternatively, does Amazon provide any sort of API to pull out the value you're interested in?
They're blocking your robot out of genuine love and concern, if that makes you feel any better!
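If you go down the cookie route suggested above, a minimal sketch would be to seed the PhantomJS cookie jar before CasperJS starts navigating (the cookie name and value below are placeholders; copy the real ones from a browser session where you logged in manually):

// phantom.addCookie is PhantomJS's API for pre-loading cookies.
var casper = require('casper').create();

phantom.addCookie({
  name: 'session-id',        // placeholder name
  value: 'YOUR-SESSION-ID',  // placeholder value
  domain: '.amazon.com'
});

casper.start('https://www.amazon.com/', function () {
  this.echo(this.getTitle()); // should load as an already-logged-in user
});

casper.run();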
Unfortunately this is not an exact science, so probably there is no such thing as a general, durable solution. Amazon.com uses different techniques to check if you are a robot, including browser fingerprinting, cookie challenges and user behavior profiling (mouse movements and so on).
I would try first to randomize some part of the user agent, only to see if that works. And I would also try a full headless browser like Chromium, using Selenium to allow the script to talk with it.
Can I ask how frequently you are trying to crawl your account? I think it shouldn't be a big deal if you are doing it once a day or so.

How to prevent javascript from changing input value

I have an array of hidden input boxes that carry especially sensitive data, and the form is submitted to a third party application on click of a button.
The values of these inputs are set server-side. The page that has these inputs is a confirmation page, and the user clicks the button to confirm the transaction and the data in the hidden input boxes is posted.
This is inherently very insecure, as anyone with half-decent knowledge of JavaScript could load devtools and use JavaScript to change the values of the hidden inputs before submitting the data. The page even conveniently has jQuery loaded! Ha! (I tested this myself.)
This is running on a private application with a limited user set and hasn't been a problem so far, but the same architecture is now required on a more public space, and the security implications of shipping this would be a little scary.
The solution would be to post the data server-side, but server-side posting does not work (at least not in a straightforward way) because of how the third party application is set up. The alternative would be to somehow prevent javascript (and of course by extension jQuery) from changing the values in the input boxes.
I was thinking of implementing (using setInterval) a loop that basically checked if the input values were the same as the original, and if not, changed it back, effectively preventing the values from being changed.
Would my proposed method be easily beatable? Perhaps there is a more elegant and simple way to stop javascript from editing those specific input values?
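For concreteness, the proposed watchdog would look something like this (a sketch only; as the answers below explain, it is trivially defeated):

// Snapshot the server-set values, then keep snapping them back.
const originals = new Map();
document.querySelectorAll('input[type=hidden]').forEach(function (el) {
  originals.set(el, el.value);
});

setInterval(function () {
  originals.forEach(function (value, el) {
    if (el.value !== value) el.value = value; // revert any tampering
  });
}, 100);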
** EDITS
For anyone coming here along this path:
After multiple considerations, and an inability to sign my data with keys from the third party application, I resorted to manually posting the data server-side from my application (a ruby on rails app).
It may take some fiddling to get the right payment page to display after the posting happens, and I haven't tested it yet, but in theory this will be the way to make sure everything is submitted server side and the user never gets a chance to tamper with it.
For Ruby on Rails apps, there are some good insights at this question.
This answer also shows how to use the hacky auto-submitting form that I mentioned in the comments, but that may be prone to the same vulnerabilities dotnetom described in his reply. (See comments)
Thanks again to everyone who contributed.
Your solution based on setInterval and other JavaScript functions will not work. A person with dev tools can easily disable it from the console. If there is no way to send these parameters from the server, the only option I see is to generate a signature, using a secret key, over all the parameters that need to be sent. The third-party application can validate this signature and check that the parameters are genuine.
But again, this is not possible if you have no control over the third-party application..
See an example from twitter: https://dev.twitter.com/oauth/overview/creating-signatures
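A minimal sketch of that idea, assuming the third party shares a secret with you (the env var and parameter names are illustrative):

// HMAC-sign the form parameters server-side; the third party recomputes
// the signature and rejects the POST if any value was tampered with.
const crypto = require('crypto');

const SHARED_SECRET = process.env.FORM_SIGNING_SECRET; // hypothetical config

function signParams(params) {
  // Canonicalize: sort keys so both sides build the same base string.
  const base = Object.keys(params)
    .sort()
    .map(k => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('&');
  return crypto.createHmac('sha256', SHARED_SECRET).update(base).digest('hex');
}

const params = { amount: '49.99', currency: 'EUR', orderId: '12345' };
console.log(signParams(params)); // travels with the form as an extra field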
If someone wanted to change the value of those input fields, they could just disable JS (and in this way get around your checking algorithm) and update the input values directly in the HTML.
This can quite easily be done with FireBug, for example. No JS needed.
If sensitive data is involved, there will probably be no way to get around server-side posting or at least server-side validation.
I was thinking of implementing (using setInterval) a loop that basically checked if the input values were the same as the original, and if not, changed it back, effectively preventing the values from being changed.
An attacker can easily overcome this by:
Overriding the method doing the periodic checking. Check this solution
Setting up a browser extension which can change the values after setInterval() has changed them
Disabling JS.
Basically, client-side validation exists only to avoid unnecessary network round trips to the server; it cannot be the final frontier protecting the integrity of your data. That can only be done server-side, in an environment the user cannot manipulate.
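To make the first point concrete, here is what defeating the periodic check from the devtools console could look like (relying on the fact that browsers hand out small sequential timer IDs):

// Clearing every interval kills any periodic "watchdog" the page set up.
const last = setInterval(function () {}, 1e9); // grab the current highest id
for (let i = 1; i <= last; i++) clearInterval(i);
// The hidden inputs can now be edited at leisure.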
I know this is an older post, but it piqued my interest because of related (but not the same) issues with javascript security.
Just thinking about the logic of a solution for the OP, and assuming I understood correctly...
The server sets the values, the user confirms, and the confirmation goes to a third party. The problem is that the values could be edited before being posted to the third party.
In which case, a possible workable solution (sketched below) would be:
Store the values on the originating server with a unique ID
Send the confirmation back to the originating server for validation
Originating server forwards to the third party if validation = true
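A rough sketch of that flow using Express (all endpoint names and values are illustrative; Node 18+ assumed for the global fetch):

const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({ extended: false }));

const pending = new Map(); // txnId -> server-side values

// Step 1: store the values server-side under a unique ID.
app.post('/checkout', (req, res) => {
  const txnId = crypto.randomUUID();
  pending.set(txnId, { amount: '49.99', currency: 'EUR' });
  // The confirmation page carries only the opaque ID, nothing editable.
  res.send('<form method="post" action="/confirm">' +
           '<input type="hidden" name="txnId" value="' + txnId + '">' +
           '<button>Confirm</button></form>');
});

// Steps 2 and 3: validate the ID, then forward the trusted values ourselves.
app.post('/confirm', async (req, res) => {
  const values = pending.get(req.body.txnId);
  if (!values) return res.status(400).send('Unknown or expired transaction');
  pending.delete(req.body.txnId);
  await fetch('https://third-party.example/pay', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(values),
  });
  res.send('Confirmed');
});

app.listen(3000);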
In the event the third party needs to send data back to the user, and it is not possible to let the server act as a go-between (which really it should), then you are a bit compromised.
You can still send data back to the originating server with an AJAX-style true/false response to the user.
Obviously, a malicious user could intercept the AJAX response using javascript edits but, (assuming the 3rd party app is looking for some kind of user ID), you would flag that ID as invalid and alert the 3rd party app before the AJAX response is delivered to the user.
OTOH, hidden input boxes aside, the bigger consideration should be manipulation of the client-side JavaScript itself.
One should have a validation wrapper for any sensitive functions or variables, to ensure those have not been modified.

How to block non-browser clients from submitting a request?

I want to block non-browser clients from accessing certain pages / successfully making a request.
The website content is served to authenticated users. What happens is that our user gives his credentials for our website to a third party - it can be another website or a mobile application - that performs requests on his behalf.
Say there is a form that the user fills out and sends a message. Can I protect this form so that the server processing the submission can tell whether the user has submitted it directly from the browser or not?
I don't want to use CAPTCHA for usability reasons. Can I do it with some javascript?
You can raise the bar using JavaScript, but anything a browser does, an automated system can do. At the very worst, they could automate a real browser, but there will almost certainly be an easier way to simulate the operation.
In any case they can record the requests that the browser sends using a proxy, and work out whatever tricks you have the javascript do.
In terms of what springs to mind to raise the bar (using JavaScript; a sketch of one trick follows the list):
Change the location that the submit goes to.
Change field names around at submit time.
Hide fields that look like they should be filled in.
Encrypt/obfuscate form contents at submit time.
Change GET to POST.
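As one example in the spirit of the list, here is a sketch of adding an obfuscated token at submit time (the form ID, field name, and token scheme are all illustrative; a determined scraper can still replay it after watching one submission):

// The token field only appears when JS runs at submit time, so clients
// that never execute JS submit without it and can be rejected server-side.
document.querySelector('#contact-form').addEventListener('submit', function () {
  const token = document.createElement('input');
  token.type = 'hidden';
  token.name = 'js_token';
  token.value = btoa(String(Date.now())); // trivially reproducible; just a speed bump
  this.appendChild(token);
});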
Another usability problem is that anybody who has javascript disabled won't be able to use the service at all. That might impact usability more than a CAPTCHA.
There is no reliable way to detect the HTTP agent - you will break the form for some browsers in any case - unless you can force users into a very limited set of browsers (and even that can be spoofed).
IMO, rather than trying to limit the software that can be used to access the form, you should make sure that there is a real human controlling that software. Unfortunately there is no better way than captchas for doing this, unless all customers have access to biometric scanners.
There is only one way to do this: analyzing the vendor (user-agent) string and looking for admitted browsers. But if someone fakes the vendor string, there's no way to keep those submissions out.
To know if a browser is Mozilla-based with JavaScript:
var isMoz = window.navigator.userAgent.match(/^Mozilla/) ? true : false;
With PHP you could try the native function get_browser().
