Forcing an HTTP request to fail in browser - javascript

Is it possible to make an http request that has been sent to a server by the browser fail without having to alter the javascript?
I have a POST request that my website is sending to the server and we are trying to test how our code reacts when the request fails (e.g. an HTTP 500 response). Unfortunately, the environment that I need to test it in has uglified and compressed javascript, so inserting a breakpoint or altering the javascript isn't an option. Is there a way for us to utilize any browser to simulate a failed request?
The request takes a long time to complete, so using the browser's console to run a javascript command is a possibility.
I have tried using window.stop(); however, this does not work since I need the failure code to execute.
I am aware of the option of setting up a proxy server, but would like to avoid this if possible.

In Chrome (just checked v63), you can actually block a specific URL (or even a whole domain) from the Network tab. You only need to right-click on the entry and select Block request URL (or Block request domain).

One possible solution is to modify the XMLHttpRequest objects that will be used by the browser. Running this code in a javascript console will cause all future AJAX calls on the page to be redirected to a different URL (which will probably give a 404 error):
XMLHttpRequest.prototype._old_open =
    XMLHttpRequest.prototype._old_open || XMLHttpRequest.prototype.open;

XMLHttpRequest.prototype.open = function(method, url, async, user, pass) {
    // Prefix the URL so the request goes to a non-existent endpoint
    return XMLHttpRequest.prototype._old_open.call(
        this, method, 'TEST-' + url, async, user, pass);
};
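The snippet keeps a reference to the original open() in _old_open, so the patch can be undone from the same console when testing is finished:

// Restore the original open() saved above (or simply reload the page)
XMLHttpRequest.prototype.open = XMLHttpRequest.prototype._old_open;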

Don't overlook the simplest solution: disconnect your computer from the Internet, and then trigger the AJAX call.

Chrome's dev tools have an option to "beautify" (i.e. re-indent) minified JavaScript (press the "{}" button at the bottom left). This can be combined with the "XHR breakpoint" option to break when the request is made. AFAIK XHR breakpoints don't support modifying the response, but once paused you should be able to force a failure via code; one possible way follows below.
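For example, a rough console sketch (this assumes the app decides success or failure by reading xhr.status; the real request still reaches the server, only what the page's code sees is changed):

// Make every later XHR on this page report HTTP 500 to its callers.
// Run in the console before triggering the POST; reload the page to undo.
var realSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function () {
    // Shadow the read-only status/statusText getters on this instance
    Object.defineProperty(this, 'status', { get: function () { return 500; } });
    Object.defineProperty(this, 'statusText', { get: function () { return 'Internal Server Error'; } });
    return realSend.apply(this, arguments);
};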

To block a specific URL and make an API call fail, follow these steps:
Go to the Network tab in your browser's dev tools.
Find the API call that needs to fail (as per your requirement).
Right-click on that API call and click 'Block Request URL'.
You can unblock it in the same manner, as the option will turn into 'Unblock'.

Just type a modified URL in the browser, e.g. replace the well-formed URL http://thedomain.com/welcome/ with http://thedomain.com/welcomeXX/; that will cause a 404 (Not Found) error.

Related

Browser locks during server side expensive process from async request

So I have a quite expensive and complex PHP process whose execution takes a long time; let's call it the function "expensive_process()".
I have an interface which, on the press of a button, makes an ajax request to a PHP script which in turn initiates "expensive_process()". Here's the javascript code:
$('#run_expensive_process_button').click( function(){
    var url = "initiate_expensive_process.php";
    $.ajax({
        url: url
    });
});
And initiate_expensive_process.php code:
<?php
session_start();
run_expensive_process();
?>
Simple and trivial. Now the issue with this is that while expensive_process() is running, the browser loses the ability to navigate the domain. If I refresh the browser window it hangs indefinitely for as long as the process lasts. If I redirect to a different url under the same domain, same thing. This happens in all browsers. However, if I relaunch the browser (close and open a new window, not a tab), navigation works normally, even though expensive_process() is still running.
I've inspected network traffic, and the HTTP request to initiate_expensive_process.php doesn't get a response while expensive_process() is running, but I'm assuming this shouldn't be locking the browser given the asynchronous nature of the request.
One more thing, which I believe is relevant. This situation is happening on a replica server. On my local machine, where I run WAMP and the same source code, this is not happening, i.e., while expensive_process() is running, I'm still able to navigate the hosting domain without having to relaunch the browser. This seems to be an indication of a server configuration problem of some sort, but I'm not sure I can rule out other possible reasons.
Anyone know what might be causing this or what can be done to figure out the source of the problem?
Thanks
Most likely the other PHP scripts also use session variables. Only one script process can access a session at a time; if a second script tries to access the session while the first script is still running, it will be blocked until the first script finishes.
The first script can unlock the session by calling session_write_close() when it's done using the session. See If call PHP page via ajax that takes a while to run/return (and it sets session variables), will a 2nd ajax call see those session changes? for more details about how you can construct the script.
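A minimal sketch of what initiate_expensive_process.php could look like after that change, assuming run_expensive_process() doesn't need to write to the session afterwards:

<?php
session_start();

// ... read or write whatever session data is actually needed here ...

// Release the session lock so other requests from the same browser
// aren't blocked while the long-running work continues.
session_write_close();

run_expensive_process();
?>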
I wonder whether it might be due to ajax, since the javascript is being executed client-side.
Maybe consider a stringified JSON call instead of ajax?

HTML form seems to be submitting *both* POST and GET?

This is not a duplicate of questions such as this, but rather the opposite: I have a form that I'm submitting via jQuery
$('<form>', {
    action : 'service',
    method : 'post',
    target : '_blank'
}).append(
    $('<input>', {
        type : 'hidden',
        name : 'payload',
        value : JSON.stringify(payload)
    })
).appendTo('body').submit().remove();
This is done so that I can open a different page with HTML.
Since I need to submit quite a lot of complex information, what I actually do is serialize them all into a big JSON string, then create a form with only one field ("payload") and submit that.
The receiving end has a filter that goes like this:
if the method is POST,
and there is only one submitted variable,
and the name of that one variable is "payload",
then JSON-decode its value and use it to create fake GET data.
So when the GET data grows too much I can switch methods without modifying the actual script, which notices no changes at all.
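For illustration only (the question doesn't say what the receiving end is written in), the filter described above boils down to something like this hypothetical Express-style middleware; all names and the stack are assumptions:

// Hypothetical sketch of the server-side "payload" filter.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.use((req, res, next) => {
    const keys = Object.keys(req.body || {});
    if (req.method === 'POST' && keys.length === 1 && keys[0] === 'payload') {
        // JSON-decode the single "payload" field and expose it as fake GET data
        Object.defineProperty(req, 'query', { value: JSON.parse(req.body.payload) });
    }
    next();
});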
It always worked until today.
What should happen
The server should receive a single POST submission, and open the appropriate response in a popup window.
What actually happens instead
The server does receive the correct POST submission...
...apparently ignores it...
...and immediately after that, the browser issues a GET with no parameters, and it is the result of that parameterless GET that gets (pardon the pun) displayed in the popup window.
Quite unsurprisingly, this is always a "You did not submit any parameters" error. Duh.
What I already did
verified that this method works, and has always worked for the last couple of years with different forms and different service endpoints
tried replacing the form with a hardcoded <FORM> in HTML, without any jQuery whatsoever. Same results. So, this is not a jQuery problem.
tried with different browsers (it would not have helped if it only worked on some browsers: I need to support most modern browsers. However, I checked. Luckily, this failure reproduces in all of them, even on iPhones).
tried sending very little data (just "{ test: 0 }").
tried halting the endpoint script as soon as it receives anything.
checked Stack Overflow. I found what seems to be the same problem, in various flavours, but it's of little comfort. This one has an interesting gotcha but no, it does not help.
checked firewalls, proxies, adblockers and plugins (I'm now using plain vanilla Firefox).
called the IT guys and asked pointed questions about recent SVN commits. There were none.
What I did not yet do
Check the HTTPS conversation at low level (I don't have sufficient access).
Compare the configuration, step by step, of a server where this works and the new server where it does not.
Quite clearly, put my thinking hat on. There must be something obvious that I'm missing and I'm setting myself up for a sizeable facepalm.
Use a tool like hurl.it or Postman to manually send a request to the server. The tools will nicely display the response from the server including all HTTP headers. I suspect the server responds with a redirect (Status code 30X) which leads to a GET request being issued after the POST completes.
Update: HTTP redirects
HTTP redirects do not necessarily use the same HTTP method or even the same data to issue a request to the redirect target. Especially for non-idempotent requests this could be a security issue (you don't generally want your form submission to be automatically re-submitted to another address). However, HTTP gives you both options:
[...] For this reason, HTTP/1.1 (RFC 2616) added the new status codes 303 and 307 [...], with 303 mandating the change of request type to GET, and 307 preserving the request type as originally sent. Despite the greater clarity provided by this disambiguation, the 302 code is still employed in web frameworks to preserve compatibility with browsers that do not implement the HTTP/1.1 specification.
[from Wikipedia: HTTP 302]
Also for 301s:
If the 301 status code is received in response to a request of any type other than GET or HEAD, the client must ask the user before redirecting.
[from Wikipedia: HTTP 301]
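If you can reach the endpoint from the browser console, you can also check for the redirect there without an external tool. A rough sketch, reusing the 'service' action and the tiny test payload from the question:

// redirect: 'manual' stops fetch from silently following a 30X response.
fetch('service', {
    method: 'POST',
    redirect: 'manual',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'payload=' + encodeURIComponent(JSON.stringify({ test: 0 }))
}).then(function (res) {
    // A same-origin redirect shows up as type "opaqueredirect" (status 0);
    // anything else reports its real status code.
    console.log(res.type, res.status);
});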

Retrieve GET requests with Javascript in Firefox console

Is there a way to retrieve details of GET requests of a web page using JavaScript? I don't mean parameters of the current page's URL but out-going GET requests.
Example:
If you open google's start page with firefox and open the developer tools, in the network tab you can see a number of GET requests, such as the one for the logo, which is something like https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png
I want to retrieve this URL on console tab using JavaScript. Is it possible to retrieve it via an object attached to the DOM (document) or BOM (window)?
The reason for my question is: I am in a test automation environment where developer-tools are not available. Only JavaScript is available and I need to check the URL of a GET request issued by the current page. I just mentioned developer-tools because it is the simplest way to reproduce the problem (and the easiest way to verify whether a solution works). But it is more about Firefox/HTTP than test automation as such.
I don't think it's possible from within devtools, but you may be able to register a global ajax event handler (if the page uses jQuery) or, with plain JS, replace the XMLHttpRequest object with a duck-punched version that logs the request URL, as described here.
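A minimal duck-punching sketch along those lines (run it before the page issues its requests; it only sees XMLHttpRequest traffic, not fetch() calls or plain image/script loads, and xhrLog is just an arbitrary name for this illustration):

// Keep a global log of outgoing XHR GET URLs that the test code can inspect later.
window.xhrLog = [];
var origOpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (method, url) {
    if (String(method).toUpperCase() === 'GET') {
        window.xhrLog.push(url);
    }
    return origOpen.apply(this, arguments);
};

For resources that are not XHRs (such as the Google logo image), performance.getEntriesByType('resource') also exposes the loaded URLs from plain JavaScript.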

Chrome: advanced usage of dev tools

I faced a few problems while using the Chrome dev tools. Just want to know whether it's possible and if yes - how. Suppose I have a really massive client side, with hundreds of responses per page.
How to find the endpoint which handles the response? I mean the first place in the js code where the response comes in.
How to find the response by its content? For instance, I want to know in which response I've got the 45902309509902 value from the table.
How to find the endpoint which handles the response?
On the Network tab, you can see where the request originated; it's the column labelled "Initiator".
That has a link that will show you the code originating the ajax call (I assume by "response" you're talking about an ajax response). From there, you should be able to find the callback that request is associated with. A lot of times, if you use a library like jQuery, you'll be shown the jQuery code doing the request rather than yours. You can still find what you need, though, by using the un-minified version of the library, setting a breakpoint on that code (perhaps even a conditional one on, say, the URL being requested), and then, when the breakpoint is hit, using the call stack to find out where in your code the call actually originates.
How to find the response by its content?
This will be slightly more difficult. Again in the Network tab, you can click each ajax request and see (and search through) the text of the response under the Response sub-tab.
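If clicking through each request by hand gets tedious, a console sketch like this (assuming the responses arrive via XMLHttpRequest as text/JSON) can log which request carried a given value, e.g. 45902309509902:

var TARGET = '45902309509902'; // the value you are hunting for
var origSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function () {
    this.addEventListener('load', function () {
        try {
            if (this.responseText.indexOf(TARGET) !== -1) {
                console.log('Found', TARGET, 'in response from:', this.responseURL);
            }
        } catch (e) { /* responseText is unavailable for non-text responseType */ }
    });
    return origSend.apply(this, arguments);
};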

How is this working?

I was browsing through a site called BSEINDIA.com (http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667) and I noticed that clicking Get Quote seems to fire an Ajax request and get the price of the selected equities. I tried to isolate this request and fire it separately, but it doesn't seem to work.
I copied over the code from the HTML of the same page (http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667).
Any pointers on why this is not working? Is there some sort of authentication going on? I am not even a member of this site.
Following is what I am trying to do:
<script type="text/javascript">
var oHTTP = getHTTPObject();
// Random token appended to the URL as a cache-buster
var seconds = Math.random().toString(16).substring(2);
if (oHTTP) {
    oHTTP.open("GET", "http://www.bseindia.com/DotNetStockReachs/DetailedStockReach.aspx?GUID=" + seconds + "&scripcd=532667", true);
    oHTTP.onreadystatechange = AJAXRes;
    oHTTP.send(null);
}

function AJAXRes() {
    if (oHTTP.readyState == 4) alert(oHTTP.responseText);
}

// Cross-browser XMLHttpRequest factory (tries the legacy ActiveX objects first)
function getHTTPObject() {
    var obj;
    try {
        obj = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
        try {
            obj = new ActiveXObject("Microsoft.XMLHTTP");
        } catch (e1) {
            obj = null;
        }
    }
    if (!obj && typeof XMLHttpRequest != 'undefined') {
        try {
            obj = new XMLHttpRequest();
        } catch (e) {
            obj = false;
        }
    }
    return obj;
}
</script>
Found out my answer here:
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.referer%28VS.71%29.aspx
Actually, it is fairly easy. When you send an HTTP request, a header called Referer gets sent with it. The Referer is basically the URL of the page which initiated the request.
BSEINDIA checks the Referer value to make sure that the request is coming from their site. If it is, it sends the data. If not, it sends its 404 page.
You can easily test that theory by disabling the Referer in your browser. In Firefox, you can do that by typing about:config and setting network.http.sendRefererHeader to 0.
If you still want to get the data, you will need to write a script (in PHP or another language) which will make the request with the proper Referer and output the results.
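The answer mentions PHP, but any server-side language works; a rough Node.js sketch using only the built-in http module (the Referer value is simply the stock page's URL from the question):

// fetch-quote.js -- request the data with a Referer the site will accept, then print it.
var http = require('http');

var guid = Math.random().toString(16).substring(2); // same cache-buster trick as the original page
var options = {
    host: 'www.bseindia.com',
    path: '/DotNetStockReachs/DetailedStockReach.aspx?GUID=' + guid + '&scripcd=532667',
    headers: {
        // Pretend the request originates from the stock page itself
        'Referer': 'http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667'
    }
};

http.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(res.statusCode, body); });
});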
There might be some form of IP restriction in place for accessing the files/data, to stop third-party scripts from pulling their data through their own scripts. That's what I'd do.
Possibly the HTTP Referer. Make sure you do not break any copyright restrictions.
