So I am trying to use the timeout property of XMLHttpRequest to "recover" my program when a request for data times out. Basically if it fails in retrieving the data, I want it to try again. At the moment my code looks like this (full URL removed to fit it all neatly):
function pullRequest() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (this.readyState === 4) {
            jsonDecode = this.responseText;
            json = JSON.parse(jsonDecode);
            ask = (json.result.Ask);
            bid = (json.result.Bid);
        }
    };
    xhr.open("GET", "<URL>", true);
    xhr.send();
}
I'm not totally following how to implement the timeout property, or whether it will even do what I want. I did add the following two lines after xhr.open, but it threw an error:
xhr.timeout = 5000;
xhr.ontimeout = pullRequest()
Basically in my head if it times out, run the pullRequest function again. I'm sure this is probably not a good idea but I'm not experienced enough to know why. For what it's worth a snippet of the error is as follows:
...\node_modules\xmlhttprequest\lib\XMLHttpRequest.js:165
settings = {
^
RangeError: Maximum call stack size exceeded at exports.XMLHttpRequest.open
Any suggestions in how to achieve my goal, or a point to some literature that would assist me would be greatly appreciated.
Thanks!
The problem is you're calling pullRequest:
xhr.ontimeout = pullRequest()
// ------------------------^^
Since pullRequest immediately calls itself, which calls itself again, and so on, it eventually runs out of stack when the environment's maximum recursion depth is reached.
You don't want to call it there; you just want to assign the function reference to ontimeout so that if the timeout occurs, your function gets called:
xhr.ontimeout = pullRequest
// No () ------------------^
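For reference, here is a minimal sketch of how the corrected lines could fit into the original function (assuming the same <URL> placeholder; an unbounded immediate retry on timeout may hammer the server, so in practice a retry cap or delay would be sensible):
function pullRequest() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (this.readyState === 4 && this.status === 200) {
            var json = JSON.parse(this.responseText);
            ask = (json.result.Ask);
            bid = (json.result.Bid);
        }
    };
    xhr.open("GET", "<URL>", true);
    // 5 seconds, as in the question
    xhr.timeout = 5000;
    // Pass the function reference -- no parentheses -- so it only runs if the timeout fires
    xhr.ontimeout = pullRequest;
    xhr.send();
}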
I'm writing some code where there will be one function (doAjax) to handle all the requests for different functionalities. This is working fine when used in the normal sense (clicking buttons, etc), but I'm trying to call a few things when the page is loaded to initialise the state of the application.
Here's the Ajax function:
function doAjax(type, action, data = null){
    return new Promise(function(res, rej){
        xhr = new XMLHttpRequest();
        xhr.open(type, '/inc/functions.php?action=' + action, true);
        xhr.timeout = 20000;
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.onload = function() {
            if (this.status === 200) {
                res(xhr);
            } else {
                console.log("some xhr error occurred");
                rej("some xhr error happened");
            }
        };
        xhr.send(data);
    });
}
And here are a couple of examples of functions that send requests to this:
/* get file structure of files */
function buildTree(){
    doAjax('GET', 'get_files').then(r => {
        var res = JSON.parse(r.response);
        console.log(res);
        /*
        the json is processed here and stuff
        is displayed on the front end
        */
    });
}
/* get user data from database */
function populateShare(){
    doAjax('GET', 'social_populate').then(r => {
        var res = JSON.parse(r.response);
        console.log(res);
        /*
        again, the json received from the database is
        processed and output to the front end
        */
    });
}
I should also mention that these functions are also bound to click listeners, so they are used later in the application as well as on load!
Now, the problem is when I try and execute both of these on page load as a kind of initialise function (to set up the application ready for use). If I run the below:
function init(){
    buildTree();
    populateShare();
}
init(); //called later
Both console.logs output the same result - the return from social_populate (called from populateShare()).
So my question ultimately is whether there are any ways to queue function calls in the init() function, waiting for each to finish before moving onto the next? There may be more functions that need to be called on init(), some also involving Ajax requests.
I have tried the following, found in another thread - but unfortunately returns the same result:
async function initialise(){
    try {
        const p1 = buildTree();
        const p2 = populateShare();
        await Promise.all([p1, p2]);
    } catch (e) {
        console.log(e);
    }
}
I know I could load the entire lot from the back end and return in one huge JSON, but I'm more curious to whether the above can be achieved! I feel like I've entered some kind of synchronous request death loop, or I'm missing something blatantly obvious here!
Also, no jQuery please!
Thanks!
You appear to have accidentally used a global variable when declaring xhr, so each call overwrites the previous request's xhr.
Try let xhr = new XMLHttpRequest();
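For illustration, a minimal sketch of that fix in context (nothing else in doAjax needs to change for this particular problem):
function doAjax(type, action, data = null){
    return new Promise(function(res, rej){
        // Declare xhr locally so each call gets its own request object
        // instead of every call sharing (and overwriting) one global.
        let xhr = new XMLHttpRequest();
        xhr.open(type, '/inc/functions.php?action=' + action, true);
        xhr.timeout = 20000;
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.onload = function() {
            if (this.status === 200) {
                res(xhr);
            } else {
                console.log("some xhr error occurred");
                rej("some xhr error happened");
            }
        };
        xhr.send(data);
    });
}
A side note, separate from this fix: for the await Promise.all version to genuinely wait, buildTree and populateShare would also need to return their doAjax(...) promises; as written they return undefined, so Promise.all resolves immediately.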
I'm currently writing a search function using JavaScript.
However, when I attempt to test my creation, I find that it stops about halfway through for no discernible reason.
Below is my code:
document.getElementById("test").innerHTML = "";
var Connect = new XMLHttpRequest();
Connect.open("GET", "xmlTest.xml", false);
document.getElementById("test").innerHTML = "1";
Connect.send(null);
document.getElementById("test").innerHTML = "2";
var docX = Connect.responseXML;
var linjer = docX.getElementsByTagName("linjer");
The first line is there to clear a potential error message from earlier in the code. Then I attempt to open up an XML file, as I need to read from it.
As you can see, I've entered two debug statements there; they will print 1 or 2 depending on how far I get in the code.
Using this, I've found that it stops exactly on the Connect.send(null); statement (1 gets printed, but 2 never does), but I can't figure out why. Google says it might be that Chrome can't access local files, but even after I found a way to allow Chrome to do this, it still did not work.
What am I doing wrong?
This looks like an issue with the synchronous request: send() blocks until a response arrives, and your code simply is not getting one.
Try using an async call instead:
Connect.open("GET", "xmlTest.xml", true);
Also make sure to set up proper callbacks, since you'll now be using async code here instead of synchronous code, like so:
// Global variable scope
var docX;
var linjer;

// Define your get function
var getDoc = function(url, cbFunc) {
    var Connect = new XMLHttpRequest();
    // Perform actions after the request is sent
    // You'll insert your callback here
    Connect.onreadystatechange = function() {
        // 4 means request finished and response is ready
        if (Connect.readyState == 4) {
            // Here is where you do the callback
            cbFunc(Connect.responseXML);
        }
    };
    // 'true' param means async, it is also the default
    Connect.open('GET', url, true);
    Connect.send();
};

// Define your callback function
var callbackFunction = function(responseXML) {
    // XML file can now be stored in the global variable
    window.docX = responseXML;
    window.linjer = window.docX.getElementsByTagName("linjer");
};

// And here is the call you make to do this
getDoc("xmlTest.xml", callbackFunction);
For better understanding of all of this, do some research on scope, closures, callbacks, and async.
In my javascript application I have big memory leak when making AJAX call to retrieve JSON object. Code is really simple:
function getNewMessage()
{
    new_message = []; // this is a global variable
    var input_for_ball = [];
    var sum;
    var i;
    var http = new XMLHttpRequest();
    http.open("GET", url + "/random_ball.json", false);
    http.onreadystatechange = function()
    {
        if (http.readyState === 4 && http.status === 200)
        {
            var responseTxt = http.responseText;
            input_for_ball = JSON.parse('[' + responseTxt + ']');
        }
    }
    http.send(null);
    new_message = input_for_ball;
}
This is called every millisecond and, as you can see, it is a synchronous call. This function costs me 1 MB every second.
When I use instead of AJAX just assigning to variable like:
input_for_ball = JSON.parse('[0,0,0,0,0,0,0,0,0,0]');
then everything is perfect, so the error must be in my implementation of the AJAX call. This happened when I used a jQuery AJAX call too.
UPDATE 12/03/2013
As Tom van der Woerdt mentioned below, this really was intended behavior. So, as Matt B. suggested, I rewrote some code to make asynchronous calls possible, and it helped a lot. Now my application's memory consumption is stable and small.
I don't think it's the AJAX call but the closure that is costing you memory. Your onreadystatechange function references the http object, so a reference to it is kept alive with the anonymous function.
I think your code matches the pattern in example 1 in this link http://www.ibm.com/developerworks/web/library/wa-memleak/
If you've not come across closures in javascript before, they're well worth reading up on - understanding them explains a lot of behaviour which doesn't seem to make sense at first glance.
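For reference, a minimal sketch of the asynchronous rewrite mentioned in the update above (it assumes the same url and /random_ball.json endpoint from the question; chaining the next poll off the previous response, rather than firing one every millisecond, is an assumption about the intended behaviour):
function getNewMessage()
{
    var http = new XMLHttpRequest();
    http.open("GET", url + "/random_ball.json", true); // true = asynchronous
    http.onreadystatechange = function()
    {
        if (http.readyState === 4 && http.status === 200)
        {
            new_message = JSON.parse('[' + http.responseText + ']');
            // Schedule the next poll only after this one has finished,
            // so requests no longer pile up.
            setTimeout(getNewMessage, 1);
        }
    };
    http.send(null);
}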
I'm trying to find a way to tell from JavaScript whether the browser is currently busy. I'm looking at making a Firefox extension to inject a Boolean value or something if the current page is loading something (either through ajax or just normal page loads), or doing the same with a Greasemonkey script, or through some JavaScript API (this would be the best solution, but from what I can see, nothing of the sort exists).
I was wondering what the best way to do this would be. I've been looking for Firefox Addon / Greasemonkey tutorials for making something like this and can't find anything. Does anyone have any tips or resources they could point me towards or better solutions for solving this?
Thanks
Edit: and by busy, I mostly just need to know if the browser is sending or receiving data from a server.
jQuery, a great JavaScript framework for DOM manipulation and ajax calls, provides two useful hooks for determining when ajax calls are in progress:
$.ajaxStart() and $.ajaxStop()
Both of these hooks take a handler function that will be called when an ajax call is about to start, and when all ajax calls have ceased, respectively. These functions can be bound to any element on the page. You could set a global boolean value in your $.ajaxStart() handler to true and set it back to false in your $.ajaxStop() handler.
You could then check that boolean flag and determine whether ajax calls are in progress.
Something along these lines:
$(document).ajaxStart(function() {
    window.ajaxBusy = true;
});
$(document).ajaxStop(function() {
    window.ajaxBusy = false;
});
As far as determining when the browser is loading the current page, you could check
document.readyState. It returns a string of "loading" while the document is loading and a string of "complete" once it has loaded. You can bind a handler to document.onreadystatechange and set a global boolean that will indicate whether the document is still loading or not.
Something like this:
document.onreadystatechange = function() {
    switch (document.readyState) {
        case "loading":
            window.documentLoading = true;
            break;
        case "complete":
            window.documentLoading = false;
            break;
        default:
            window.documentLoading = false;
    }
};
EDIT:
It appears that $.ajaxStart() and $.ajaxStop() do NOT work for ajax calls invoked without jQuery. All XMLHttpRequest objects have an event called readystatechange that you can attach a handler to, and you can use it to determine whether an individual call is done. You could push a reference to each outstanding call onto an array and, in a setInterval(), check that array's length: if it is greater than 0, there are outstanding ajax calls. It's a rough approach, and only one way of going about it; there are probably other ways to do this. But here's the general idea:
// declare an array to hold references to outstanding requests
window.orequests = [];

var req = new XMLHttpRequest();
// open the request here, then register it as outstanding before sending it
orequests.push(req);
// then attach a handler to `onreadystatechange`
req.onreadystatechange = function() {
    // 4 means the request has finished and the response is ready
    if (req.readyState === 4) {
        // the request is done, so drop it from the outstanding list
        var pos = orequests.indexOf(req);
        if (pos !== -1) {
            orequests.splice(pos, 1);
        }
    }
};
// ...and send it
req.send();
Do the above for each XMLHttpRequest you send, of course changing the request variable name for each one. Then run a setInterval() every x milliseconds and check the length property of orequests. If it is equal to zero, no requests are in flight; if it is greater than zero, requests are still happening. Once no requests are happening, you can either clear the interval with clearInterval() or keep it running.
Your setInterval might look something like this:
var ajaxInterval = setInterval(function() {
    if (orequests.length > 0) {
        // ajax calls are in progress
        window.xttpBusy = true;
    } else {
        // ajax calls have ceased
        window.xttpBusy = false;
        // you could call clearInterval(ajaxInterval) here but I don't know if that's your intention
    }
}, 3000); // run every 3 seconds. (You can decide how often you want to run it)
Here's what I think I'll end up doing. This solution is like the one Alex suggested with the jQuery events, except that it works with anything that uses XMLHttpRequest (including jQuery):
var runningAjaxCount = 0;

var oldSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function() {
    // keep a local reference so concurrent requests don't overwrite each other
    var oldOnReady = this.onreadystatechange;
    this.onreadystatechange = function() {
        if (oldOnReady) {
            oldOnReady.call(this);
        }
        if (this.readyState == XMLHttpRequest.DONE) {
            ajaxStopped();
        }
    };
    ajaxStarted();
    oldSend.apply(this, arguments);
};

function ajaxStarted() {
    runningAjaxCount++;
}

function ajaxStopped() {
    runningAjaxCount--;
}

function isCallingAjax() {
    return runningAjaxCount > 0;
}

function isBrowserBusy() {
    return document.readyState != "complete" && isCallingAjax();
}
The browser technically isn't ever "busy". Busyness is a very subjective term. Let's assume that the main thread is performing a simple while loop which blocks execution. This could be considered busy, but what if you have something like this:
function busy() { setTimeout(busy, 0); do_something(); }
busy();
The browser isn't being blocked (per se), so whether or not the page is "busy" is very unclear. Also, that doesn't even begin to touch on web workers or code running in the browser chrome.
You're going to be hard-pressed to do this, and even if you do, it likely won't work how you expect it to. Good luck, nonetheless.
I have an issue, mainly with IE.
I need to be able to handle n queries one after another, but if I simply call my function below in a for loop, IE does some strange things (like loading only some of the calls).
If I use an alert box it proves that the function gets all of the calls, and surprisingly IT WORKS!
My guess is that IE needs more time than other browsers, and the alert box gives it just that.
Here is my code:
var Ajax = function(all) {
    this.xhr = new XMLHTTPREQUEST(); // Function returns xhr object / activeX

    this.uri = function(queries) { // Takes an object and formats query string
        var qs = "", i = 0, len = size(queries);
        for (value in queries) {
            qs += value + "=" + queries[value];
            if (++i <= len) { qs += "&"; }
        }
        return qs;
    };

    xhr.onreadystatechange = function() { // called when content is ready
        if (this.readyState === 4) {
            if (this.status === 200) {
                all.success(this.responseText, all.params);
            }
            this.abort();
        }
    };

    this.post = function() { // POST
        xhr.open("POST", all.where, true);
        xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        xhr.send(uri(all.queries));
    };

    this.get = function() { // GET
        xhr.open("GET", all.where + "?" + uri(all.queries), true);
        xhr.send();
    };

    if (this instanceof Ajax) {
        return this.Ajax;
    } else {
        return new Ajax(all);
    }
};
This function works perfectly for a single request, but how can I get it to work when called so many times within a loop?
I think the problem might be related to the 2 concurrent connections limit that most web browsers implement.
It looks like the latency of your web service to respond is making your AJAX requests overlap, which in turn is exceeding the 2 concurrent connections limit.
You may want to check out these articles regarding this limitation:
The Dreaded 2 Connection Limit
The Two HTTP Connection Limit Issue
Circumventing browser connection limits for fun and profit
This limit is also suggested in the HTTP spec: section 8.1.4, last paragraph, which is probably the main reason why most browsers impose it.
To work around this problem, you may want to consider the option of relaunching your AJAX request ONLY after a successful response from the previous AJAX call. This will prevent the overlap from happening. Consider the following example:
function autoUpdate () {
    var ajaxConnection = new Ext.data.Connection();
    ajaxConnection.request({
        method: 'GET',
        url: '/web-service/',
        success: function (response) {
            // Add your logic here for a successful AJAX response.
            // ...
            // ...

            // Relaunch the autoUpdate() function in 100ms. (Could be less or more)
            setTimeout(autoUpdate, 100);
        }
    });
}
This example uses ExtJS, but you could very easily use just XMLHttpRequest.
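For completeness, a rough plain-XMLHttpRequest sketch of the same relaunch-on-success pattern (the /web-service/ URL is just the placeholder from the example above):
function autoUpdate() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/web-service/', true); // asynchronous
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // Add your logic here for a successful AJAX response.
            // ...

            // Only schedule the next request once this one has completed,
            // so the calls never overlap.
            setTimeout(autoUpdate, 100);
        }
    };
    xhr.send();
}

autoUpdate();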
Given that the limit for a single domain is 2 concurrent connections in most browsers, launching more than 2 concurrent requests confers no speed advantage. Launch 2 requests, and dequeue and launch another each time one completes.
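A rough, purely illustrative sketch of that dequeue-and-relaunch approach (the queue, enqueue, and launchNext names and the item shape are assumptions, not taken from the question's code):
// A tiny queue that keeps at most 2 requests in flight at a time.
var queue = [];      // items look like { url: "...", onDone: function(text) { ... } }
var inFlight = 0;
var MAX_CONCURRENT = 2;

function launchNext() {
    if (inFlight >= MAX_CONCURRENT || queue.length === 0) { return; }
    var item = queue.shift();
    inFlight++;

    var xhr = new XMLHttpRequest();
    xhr.open("GET", item.url, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4) {
            inFlight--;
            if (xhr.status === 200) { item.onDone(xhr.responseText); }
            // A slot is free again, so start the next queued request.
            launchNext();
        }
    };
    xhr.send();
}

function enqueue(url, onDone) {
    queue.push({ url: url, onDone: onDone });
    launchNext();
}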
I'd suggest throttling your requests so you only have a few (4?) outstanding at any given time. You're probably seeing the result of multiple requests being queued and timing out before your code can handle them all. Just a guess though. We have an ajax library that has built-in throttling and queues the requests so we only have 4 outstanding at any one time, and we don't see any problems. We routinely queue lots per page.
Your code looks like it's put together using the constructor pattern. Are you invoking it with the new operator like var foo = new Ajax(...) in your calling code? Or are you just calling it directly like var foo = Ajax(...) ?
If the latter, you're likely overwriting state on your later calls. It looks like it's designed to be called to create an object, on which the get/post methods are called. This could be your problem if you're "calling it within a loop" as you say.
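For clarity, a generic illustration of the two invocation styles the answer is contrasting (the argument object is a placeholder, not taken from the question):
// Constructor-style call: `this` inside Ajax is a brand-new object,
// so each request in the loop keeps its own state.
var a = new Ajax({ where: "/endpoint", queries: { q: 1 }, success: function (r) {} });
a.get();

// Plain call: in non-strict code `this` is the global object, so state
// set on `this` is shared between calls (the likely "overwriting" issue).
var b = Ajax({ where: "/endpoint", queries: { q: 1 }, success: function (r) {} });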