I'm getting a stack overflow error on some, but not all, IE7 machines.
This function downloads a bunch of URL-based resources and does nothing with them. It runs on my login page, and its purpose is to fetch static content while you're typing in your credentials, so that when you really need it, the browser can get it from its local cache.
// Takes an array of resource URLs and preloads them sequentially,
// but asynchronously, using an XHR object.
function preloadResources(resources) {
    // Kick it all off.
    if (resources.length > 0) {
        var xhr = getXHRObject(); // Prepare the XHR object which will be reused for each request.
        xhr.open('GET', resources.shift(), true);
        xhr.onreadystatechange = handleReadyStateChange;
        xhr.send(null);
    }

    // Handler for the XHR's onreadystatechange event. Loads the next resource, if any.
    function handleReadyStateChange() {
        if (xhr.readyState == 4) {
            if (resources.length > 0) {
                xhr.open('GET', resources.shift(), true);
                xhr.onreadystatechange = arguments.callee;
                xhr.send(null);
            }
        }
    }

    // A safe cross-browser way to get an XHR object.
    function getXHRObject() {
        // Clipped for clarity.
    }
} // End preloadResources().
It's called like this:
preloadResources([
    'http://example.com/big-image.png',
    'http://example.com/big-stylesheet.css',
    'http://example.com/big-script.js']);
It processes an array of URLs recursively. I thought it wasn't susceptible to stack overflow errors because each recursion is triggered by an asynchronous event -- the XHR's onreadystatechange event (notice that I pass true to xhr.open(), making the request asynchronous). I felt doing so would prevent the stack from growing.
I don't see how the stack is growing out of control. Where did I go wrong?
Doing the recursion with a timer prevented the stack overflow problem from appearing.
// Handler for the XHR's onreadystatechange event. Loads the next resource, if any.
function handleReadyStateChange() {
    if (xhr.readyState == 4 && resources.length > 0) {
        setTimeout(function() {
            xhr.open('GET', resources.shift(), true);
            xhr.onreadystatechange = handleReadyStateChange;
            xhr.send(null);
        }, 10);
    }
}
I guess chaining XHR requests to one another consumes the stack: it seems that when a response is served from the cache, IE7 can fire onreadystatechange synchronously from within send(), so each chained request nests another handler frame on the stack. Chaining them together with a timer lets the stack unwind between requests and prevents that -- at least in IE7. I haven't seen the problem on other browsers, so I can't say.
Can you console.log or alert the value of arguments.callee? I'm curious what would happen if it resolves to the arguments variable from the preloadResources() function instead of handleReadyStateChange(). It doesn't seem likely to me, but it jumps out at me just from eyeballing your code.
In answer to your question, however - I think one bad practice in the code above is reusing the XMLHttpRequest object, especially without ever letting its lifecycle complete or calling xhr.abort(). That's a no-no I tucked away a while back. It's discussed here and in various places around the web. Note that IE in particular doesn't work well with reusing the XHR. See http://ajaxian.com/archives/the-xmlhttprequest-reuse-dilemma .
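If it helps, here's a minimal sketch of the alternative (reusing the getXHRObject() helper from your question): create a fresh XHR for every request instead of recycling one object.

    // Sketch: one brand-new XHR per resource; nothing is reused across requests.
    function preloadResources(resources) {
        loadNext();

        function loadNext() {
            if (resources.length === 0) return;
            var xhr = getXHRObject(); // fresh object for every request
            xhr.onreadystatechange = function() {
                if (xhr.readyState == 4) {
                    xhr = null;  // release the finished object to the GC
                    loadNext();  // then move on to the next resource
                }
            };
            xhr.open('GET', resources.shift(), true);
            xhr.send(null);
        }
    }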
Hope this helps,
Scott
Related
I am new to JavaScript and would like to ask about AJAX: why do we put xhr.onload before xhr.send(), when even if I put xhr.onload after xhr.send(), everything works perfectly? The majority of tutorials teach you to put onload before send() without a proper explanation. So, should I use
let btn = document.querySelector('button').addEventListener('click', function() {
    let xhr = new XMLHttpRequest();
    xhr.onload = function() {
        if (this.status === 200) {
            let div = document.querySelector('div').innerHTML = xhr.responseText;
        }
    }
    xhr.open('GET', './mir.txt');
    xhr.send();
})
Or
let btn = document.querySelector('button').addEventListener('click', function() {
    let xhr = new XMLHttpRequest();
    xhr.open('GET', './mir.txt');
    xhr.send();
    xhr.onload = function() {
        if (this.status === 200) {
            let div = document.querySelector('div').innerHTML = xhr.responseText;
        }
    }
})
and WHY?
Use your first version.
Logically, in a situation where the browser has the response cached, the XHR could complete instantly, and if you then try to add "onload" after the response has already loaded, nothing will happen.
In reality, even when the response is cached, I don't think this can happen because of how the browser engine works, but from a coding point of view it looks like it could. So putting onload at the top of the pattern removes all suspicion that such behaviour could occur. Possibly in older browsers, when people did tend to build XHR requests manually, that kind of thing was an actual danger?
I do know that in the scenario where you load a request synchronously, it does matter, because the thread (as well as the whole window) is blocked during send() until the request completes.
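For what it's worth, a sketch of that synchronous case (using the same ./mir.txt resource as your question): the load event is dispatched inside send(), so a handler attached afterwards never fires, and you read the result directly instead.

    let xhr = new XMLHttpRequest();
    xhr.open('GET', './mir.txt', false); // third argument false = synchronous
    xhr.send();                          // blocks right here until the response arrives
    // Assigning xhr.onload at this point would be too late -- the event already fired.
    if (xhr.status === 200) {
        document.querySelector('div').innerHTML = xhr.responseText;
    }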
onload is most frequently used on the <body> element to run a script once a website has fully loaded (including script files, images, CSS files, etc.). So it's a good approach to load all the dependencies first, and after that make an API or Ajax call to update the DOM.
You don't have to use onload before send(); your first and second examples show that already.
onload is an event handler property of the XHR object (for its load event), so it executes automatically when that event fires during the XHR lifecycle. It is the function called when an XMLHttpRequest transaction completes successfully. So, using the onload property, you just define what needs to happen when the load event fires. You don't have to define it if you don't need it.
send() is a method of XHR, not an event, so you need to call it if you want the request made. See the reference links below for more about its behaviour in synchronous and asynchronous calls.
Ref:
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/send
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequestEventTarget/onload
I've been teaching myself PHP and XML (among other languages), and while going through the XMLHttpRequest tutorial on W3Schools, I noticed the open() and send() calls are located at the end of the function, not before or outside it. This is a bit puzzling, because how can one get a response from the server if the request has not yet been sent? It may be something simple that I have missed, and I apologize if that's the case, but can anyone help me with my dilemma? Thanks in advance.
function loadDoc() {
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            document.getElementById("demo").innerHTML = this.responseText;
        }
    };
    xhttp.open("GET", "ajax_info.txt", true);
    xhttp.send();
}
The code to read the data will be in a separate function that is assigned as an event handler.
That function won't run until the response is received.
Related: How do I return the response from an asynchronous call?
It probably doesn't matter on modern browsers, but fundamentally, it's so you set up the call before sending it. In particular, so that all the handlers have been attached before asking the XHR to do something. Compare:
// Example 1
var xhr = new XMLHttpRequest();
xhr.addEventListener("load", function() {
    // Got the data
});
xhr.open("GET", "http://example.com");
xhr.send();
with
// Example 2
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://example.com");
xhr.send();
xhr.addEventListener("load", function() {
    // Got the data
});
Browsers are not single-threaded, though they run JavaScript code in a single main thread (plus any web workers you create). So it's possible (though extraordinarily unlikely) that with Example 2, if the resource is in the browser's cache, a separate thread handling network calls could trigger the load event between the send and addEventListener calls on the JavaScript thread, see that there were no handlers registered, and not queue a task for the event loop to call the handler. Whereas with Example 1, if it triggers the load event immediately upon send, it sees an attached handler and queues a task to call it (which runs later, when the event loop comes around to processing that task).
Here's an example of that hypothetical scenario, showing the thread interaction:
Example 1 - Highly Theoretical Scenario

JavaScript Thread                             Network Thread
-----------------------------------------     --------------
var xhr = new XMLHttpRequest();
xhr.addEventListener("load", function() {
    // Got the data
});
xhr.open("GET", "http://example.com");
xhr.send();
(Within send: Start the send, handing
off to the network thread)
                                              1. Start a GET on `xhr`
                                              2. It's in cache, are there any load
                                                 handlers registered on `xhr`?
                                              3. Yes, queue a task to call the handler
(Done with current task)
(Pick up next task)
Call the handler
vs
Example 2 - Highly Theoretical Scenario

JavaScript Thread                             Network Thread
-----------------------------------------     --------------
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://example.com");
xhr.send();
(Within send: Start the send, handing
off to the network thread)
                                              1. Start a GET on `xhr`
                                              2. It's in cache, are there any load
                                                 handlers registered on `xhr`?
                                              3. No, don't queue a task to call
                                                 the handler
xhr.addEventListener("load", function() {
    // Got the data
});
(Done with current task)
(Pick up next task)
(No task, do nothing)
I very much doubt any current browser would actually do that (Example 2), and I'm not aware of XHR having had a problem like this in the past. But it's theoretically possible, and there was a very similar problem, circa 2008, with setting src on an img element before hooking the load event for it. Browsers fixed that problem, and I'd be surprised to find they were open to the Example 2 scenario above now, either, even if they may or may not have been at some point in the past.
In practice, I doubt it matters. But I'd still use Example 1 if I used XMLHttpRequest (I don't, I use fetch).
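For completeness, a sketch of the fetch version (same hypothetical URL as the examples above): because fetch returns a Promise, a handler attached at any later point still runs, so the ordering question from Examples 1 and 2 doesn't arise at all.

    fetch("http://example.com")
        .then(function(response) {
            if (!response.ok) {
                throw new Error("HTTP " + response.status);
            }
            return response.text();
        })
        .then(function(text) {
            // Got the data
        })
        .catch(function(error) {
            // Network failure, or the HTTP error thrown above
        });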
I understand the general advice against using synchronous Ajax calls, since synchronous calls block UI rendering.
The other reason generally given is memory-leak issues with synchronous AJAX.
From the MDN docs -
Note: You shouldn't use synchronous XMLHttpRequests because, due to the inherently asynchronous nature of networking, there are various ways memory and events can leak when using synchronous requests. The only exception is that synchronous requests work well inside Workers.
How could synchronous calls cause memory leaks?
I am looking for a practical example.
Any pointers to any literature on this topic would be great.
If XHR is implemented correctly per spec, then it will not leak:
An XMLHttpRequest object must not be garbage collected if its state is OPENED and the send() flag is set, its state is HEADERS_RECEIVED, or its state is LOADING, and one of the following is true:

It has one or more event listeners registered whose type is readystatechange, progress, abort, error, load, timeout, or loadend.

The upload complete flag is unset and the associated XMLHttpRequestUpload object has one or more event listeners registered whose type is progress, abort, error, load, timeout, or loadend.

If an XMLHttpRequest object is garbage collected while its connection is still open, the user agent must cancel any instance of the fetch algorithm opened by this object, discarding any tasks queued for them, and discarding any further data received from the network for them.
So after you call .send(), the XHR object (and anything it references) becomes immune to GC. However, any error or success puts the XHR into the DONE state, and it becomes subject to GC again. It wouldn't matter at all whether the XHR object is sync or async. In the case of a long sync request, again it doesn't matter, because you would just be stuck on the send statement until the server responds.
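A small sketch of what that protection means in practice (the URL is just illustrative): an in-flight XHR with a listener attached stays alive even when no variable references it any more.

    (function() {
        var xhr = new XMLHttpRequest();
        xhr.onload = function() {
            // Still fires: the quoted spec forbids collecting an in-flight
            // XHR that has listeners, even though `xhr` left scope long ago.
            console.log("status:", xhr.status);
        };
        xhr.open("GET", "/some/resource", true);
        xhr.send();
    })(); // no reference to `xhr` survives this line, yet it is not GC'd until DONE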
However, according to this slide, it was not implemented correctly, at least in Chrome/Chromium in 2012. Per spec, there would be no need to call .abort(), since the DONE state means that the XHR object should already be GC'd normally.
I cannot find even the slightest evidence to back up the MDN statement, and I have contacted the author through Twitter.
I think memory leaks happen mainly because the garbage collector can't do its job, i.e. you have a reference to something and the GC cannot delete it. I wrote a simple example:
var getDataSync = function(url) {
    console.log("getDataSync");
    var request = new XMLHttpRequest();
    request.open('GET', url, false); // `false` makes the request synchronous
    try {
        request.send(null);
        if (request.status === 200) {
            return request.responseText;
        } else {
            return "";
        }
    } catch(e) {
        console.log("!ERROR");
    }
}

var getDataAsync = function(url, callback) {
    console.log("getDataAsync");
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function (e) {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                callback(xhr.responseText);
            } else {
                callback("");
            }
        }
    };
    xhr.onerror = function (e) {
        callback("");
    };
    xhr.send(null);
}

var requestsMade = 0
var requests = 1;
var url = "http://missing-url";
for (var i = 0; i < requests; i++, requestsMade++) {
    getDataSync(url);
    // getDataAsync(url);
}
Apart from the fact that the synchronous function blocks a lot of things, there is another big difference: error handling. If you use getDataSync, remove the try-catch block, and refresh the page, you will see that an error is thrown. That's because the URL doesn't exist. The question now is how the garbage collector works when an error is thrown. Does it clear all the objects connected with the error, does it keep the error object, or something else? I'll be glad if someone who knows more about that writes it here.
If the synchronous call is interrupted (e.g. by a user event re-using the XMLHttpRequest object) before it completes, then the outstanding network query can be left hanging, unable to be garbage collected.
This is because, if the object that initiated the request does not exist when the request returns, the return cannot complete, but (if the browser is imperfect) remains in memory. You can easily cause this using setTimeout to delete the request object after the request has been made but before it returns.
I remember I had a big problem with this in IE, back around 2009, but I would hope that modern browsers are not susceptible to it. Certainly, modern libraries (i.e. JQuery) prevent the situations in which it might occur, allowing requests to be made without having to think about it.
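A sketch of the setTimeout scenario described above (the URL and delay are illustrative): the only reference to the request object is discarded mid-flight, which is exactly the situation an imperfect browser could fail to clean up.

    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr && xhr.readyState === 4) {
            console.log("completed"); // may never run once the object is torn down
        }
    };
    xhr.open("GET", "/slow/endpoint", true);
    xhr.send(null);

    setTimeout(function() {
        // Drop the handler and our only reference while the request is still
        // outstanding; a buggy browser could leave the network query hanging.
        xhr.onreadystatechange = null;
        xhr = null;
    }, 10);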
A sync XHR blocks thread execution, and it blocks every object in that thread's function execution stack from GC.
E.g.:
function (b) {
    var a = <big data>;
    <work with a and b>
    <sync XHR>
}
Variables a and b are blocked here (and so is the whole stack).
So, if GC starts working while the sync XHR has the stack blocked, all stack variables will be marked as having survived the GC and be moved from the early heap to the more persistent one. And a ton of objects that shouldn't survive even a single GC will live through many garbage collections, and even objects referenced from them will survive GC.
About the claims that the stack blocks GC and that such objects get marked as long-lived: see the section Conservative Garbage Collection in Clawing Our Way Back To Precision.
Also, "marked" objects are GC'd after the usual heap is GC'd, and usually only if there is still a need to free more memory (since collecting marked-and-swept objects takes more time).
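A runnable version of the pseudocode above (the URL and data size are illustrative): while the synchronous send() blocks, a and b are live stack references, so nothing they hold can be collected.

    function processSync(b) {
        var a = new Array(1000000).join("x"); // stand-in for <big data>
        // ... work with a and b ...
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/slow/endpoint", false); // false = synchronous
        xhr.send(null); // a, b and the whole stack stay pinned until this returns
        return a.length + String(b).length;
    }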
UPDATE:
Is it really a leak, and not just an inefficient use of the early heap?
There are several things to consider.
How long will these objects stay locked after the request has finished?
A sync XHR can block the stack for an unlimited amount of time; XHR has no timeout property (in all non-IE browsers), and network problems are not rare.
How many UI elements are locked? Blocking 20 MB of memory for just 1 second is roughly equivalent to a 200 KB leak lasting 2 minutes. Consider many background tabs.
Consider the case when a single sync XHR blocks a ton of resources and the browser goes to the swap file.
When another event tries to alter the DOM, it may be blocked by the sync XHR, so another thread is blocked (and its whole stack, too).
If the user repeats the actions that lead to the sync XHR, the whole browser window will be locked. Browsers use at most 2 threads to handle window events.
Even without blocking, this consumes a lot of OS and browser internal resources: threads, critical-section resources, UI resources, DOM... Imagine that you can open (due to memory problems) only 10 tabs with sites that use sync XHR, versus 100 tabs with sites that use async XHR. Is that not a memory leak?
Memory leaks using synchronous AJAX requests are often caused by:
using setInterval/setTimeout causing circular calls;
an XMLHttpRequest whose reference is removed, so the xhr becomes inaccessible.
A memory leak happens when the browser for some reason doesn't release memory from objects which are not needed any more.
This may happen because of browser bugs, browser-extension problems and, much more rarely, our own mistakes in the code architecture.
Here's an example of a memory leak being caused when running setInterval in a new context:
var
    Context = process.binding('evals').Context,
    Script = process.binding('evals').Script,
    total = 5000,
    result = null;

process.nextTick(function memory() {
    var mem = process.memoryUsage();
    console.log('rss:', Math.round(((mem.rss/1024)/1024)) + "MB");
    setTimeout(memory, 100);
});

console.log("STARTING");
process.nextTick(function run() {
    var context = new Context();
    context.setInterval = setInterval;
    Script.runInContext('setInterval(function() {}, 0);',
        context, 'test.js');
    total--;
    if (total) {
        process.nextTick(run);
    } else {
        console.log("COMPLETE");
    }
});
I'm using XMLHttpRequest on an embedded device that provides a non-standard extension to the API to allow manual cleanup of resources after the request has finished.
Can I assume that, in all cases (successful or otherwise, e.g. 404, DNS lookup failed, etc.), a call to the send() method will eventually result in my onreadystatechange handler being called with readyState == 4?
Or, to put it another way, assuming that this implementation's XHR behaves similarly to that of the standard browsers in all other ways, will the following code always result in the destroy() method being called?
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
        callback(xhr.responseText);
        if (xhr.destroy) { // must call this to prevent memory leak
            xhr.destroy();
        }
    }
};
xhr.open(method, url, true);
xhr.send(null);
No.
In some cases, for example when calling abort(), the state may terminate at UNSENT (3.6.5).
Even during "normal" operation, if an error occurs and an exception is thrown, then the state may terminate at something other than DONE.
Read the spec's section on states for more information.
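Given that, one defensive pattern (a sketch built on the question's own code and its non-standard destroy() extension) is to route every exit path, including synchronous exceptions from open()/send(), through a single cleanup function:

    var xhr = new XMLHttpRequest();

    function cleanup() {
        if (xhr && xhr.destroy) { // the non-standard extension from the question
            xhr.destroy();
        }
        xhr = null;
    }

    xhr.onreadystatechange = function () {
        if (xhr && xhr.readyState == 4) {
            callback(xhr.responseText);
            cleanup();
        }
    };

    try {
        xhr.open(method, url, true);
        xhr.send(null);
    } catch (e) {
        cleanup(); // open() or send() threw synchronously; readyState 4 never comes
    }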
I made some JavaScript code for my website. It works without problems on Opera and Chrome, but not on Firefox.
Here is the script:
function checkstate(who,row,cell) {
    var zwrot="";
    var mouseEvent='onmouseover="javascript:bubelon(this.id);" onmouseout="bubeloff();"';
    var cellid="";
    ajax=new XMLHttpRequest();
    ajax.onreadystatechange=function(aEvt) {
        if (ajax.readyState===4 && ajax.status===200) {
            alert("im here!");
        }
    };
    ajax.open('GET',"oth/work_prac_stan.php?usr="+who,false);
    ajax.send();
}

function sprawdzstan() {
    var lol="";
    var table = document.getElementById("usery");
    var re = /^<a\shref\=/g;
    for (var i = 1, row; row = table.rows[i]; i++) {
        if (row.cells[0].innerHTML.match(re)) {
            checkstate(row.cells[1].innerHTML,row,2);
        } else {
            checkstate(row.cells[0].innerHTML,row,1);
        }
    }
}
The problem is that Firefox is not running the function assigned to onreadystatechange. I checked in Firebug that the response from the PHP file is correct.
Where is the problem? It works on Chrome and Opera; Firefox just doesn't run it -- no error in the console, nothing.
Updated answer
According to Mozilla's docs, you don't use onreadystatechange with synchronous requests. Which kind of makes sense, since the request doesn't return until the ready state is 4 (completed), though I probably wouldn't have designed it that way.
Original answer
Not immediately seeing a smoking gun, but: Your ajax variable is not defined within the function, and so you're almost certainly overwriting it on every iteration of the loop in sprawdzstan. Whether that's a problem remains to be seen, since you're using a synchronous ajax call. In any case, add a var ajax; to checkstate to ensure that you're not falling prey to the Horror of Implicit Globals.
Off-topic: If you can possibly find a way to refactor your design to not use a synchronous ajax request, strongly recommend doing that. Synchronous requests lock up the UI of the browser (to a greater or lesser degree depending on the browser, but many — most? — completely lock up, including other unrelated tabs). It's almost always possible to refactor and use an asynchronous request instead.
Off-topic 2: You aren't using mouseEvent in your code, but if you were, you would want to get rid of those javascript: prefixes on the onmouseover and onmouseout attributes. Those attributes are not URLs, the prefix is not (there) a protocol specifier (it's a label, which you're not using).
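A sketch of that refactor, keeping the question's own names: making the request asynchronous and declaring ajax with var addresses both points above.

    function checkstate(who, row, cell) {
        var ajax = new XMLHttpRequest(); // `var` avoids the implicit global
        ajax.onreadystatechange = function(aEvt) {
            if (ajax.readyState === 4 && ajax.status === 200) {
                alert("im here!"); // update `row`/`cell` from ajax.responseText here
            }
        };
        ajax.open('GET', "oth/work_prac_stan.php?usr=" + who, true); // true = asynchronous
        ajax.send();
    }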
For those who still encounter this problem...
You can use the code below. What I did was remove the handler function
ajax.onreadystatechange=function(aEvt) {
and move the alert("im here!"); after ajax.send(). Because the request is synchronous (the third argument to open() is false), send() doesn't return until the response has arrived, so the alert runs once the request completes.
ajax=new XMLHttpRequest();
ajax.open('GET',"oth/work_prac_stan.php?usr="+who,false);
ajax.send();
alert("im here!");