IE, XDomainRequest does not always work - javascript

I am trying to make cross-domain requests in IE.
I used XDomainRequest, and implemented logging for all events (onerror, onload, onprogress and ontimeout) to monitor progress.
It works sometimes, but not always (on one computer with IE9, same site, same request, 1 out of 3 or 4 attempts works; on another computer with IE8, maybe 1 out of 2 works). I didn't get any useful information from the logging, because none of the events were triggered.
I am very confused. Is there any debugging tool for IE? Why does XDomainRequest sometimes just not work?
Thanks a lot
coronin

There are at least two significant bugs in the XDomainRequest object, one that affects IE8 and another that affects IE9.
Issue 1 - Garbage Collection
In Internet Explorer 8, the XDomainRequest object is incorrectly subject to garbage collection after send() has been called but not yet completed. The symptoms of this bug are the Developer Tools' network trace showing "Aborted" for the requests and none of the error, timeout, or success event handlers being called.
Typical AJAX code looks a bit like this:
function sendCrossDomainAjax(url, successCallback, errorCallback) {
    var xdr = new XDomainRequest();
    xdr.open("get", url);
    xdr.onload = function() { successCallback(); }
    xdr.onerror = function() { errorCallback(); }
    xdr.send();
}
In this example, the variable containing the XDomainRequest goes out of scope. If the user is unlucky, IE's JavaScript garbage collector will run before send() asynchronously completes, and the request will be aborted. Even though the XDomainRequest object is captured in the onload and onerror closures, IE sees that the entire object graph has no outside references to it and garbage collects it. IE should be "pinning" the object until the request completes.
You'll notice quite a few other discussions on the internet mentioning that placing a setTimeout around the xdr.send() call will somehow "solve" mysterious XDomainRequest failures. This is a kludge, and completely incorrect. All that's happening is that the XDomainRequest object is being "pinned" into the setTimeout closure and isn't subject to garbage collection as quickly. It doesn't solve the problem.
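For illustration only, the commonly posted kludge looks roughly like this; it merely narrows the window for the bug and should not be relied on:
function sendCrossDomainAjax(url, successCallback, errorCallback) {
    var xdr = new XDomainRequest();
    xdr.open("get", url);
    xdr.onload = function() { successCallback(); }
    xdr.onerror = function() { errorCallback(); }
    // The kludge: deferring send() keeps xdr alive inside the timer's
    // closure a little longer, which only makes the race less likely.
    setTimeout(function() {
        xdr.send();
    }, 0);
}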
To correctly work around this issue, ensure the XDomainRequest is stored in a global variable until the request completes. For example:
var pendingXDR = [];

function removeXDR(xdr) {
    // indexOf isn't always supported; you can also use jQuery.inArray()
    var index = pendingXDR.indexOf(xdr);
    if (index >= 0) {
        pendingXDR.splice(index, 1);
    }
}

function sendCrossDomainAjax(url, successCallback, errorCallback) {
    var xdr = new XDomainRequest();
    xdr.open("get", url);

    xdr.onload = function() {
        removeXDR(xdr);
        successCallback();
    };

    xdr.onerror = function() {
        removeXDR(xdr);
        errorCallback();
    };

    xdr.send();
    pendingXDR.push(xdr);
}
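Usage stays the same as the original helper; for example (the URL and callbacks here are placeholders):
sendCrossDomainAjax("http://example.com/api/data", function() {
    // success: the request completed and has already been removed from pendingXDR
}, function() {
    // failure: handle the error
});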
Issue 2 - Missing OnProgress EventHandler
This second issue is already known. Internet Explorer 9 introduced a regression in the XDomainRequest object where a missing (null) OnProgress event handler would cause the request to abort when it tries to report progress information.
For fast requests, IE9 never attempts to call the OnProgress event handler and the request succeeds. Certain conditions, such as IE delaying the request due to too many open connections, network latency, slow server responses, or large request or response payloads, will cause IE9 to start reporting progress information.
IE9 tries to call the event handler without first checking that it exists, and the XDomainRequest object crashes and destroys itself internally.
To solve this issue, always ensure an event handler is attached to OnProgress. Given the bug, it's not a bad idea to defensively add event handlers to all of the object's events.
var xdr = new XDomainRequest();
xdr.open("get", url);
xdr.onprogress = function() { };
// register the other event handlers here
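A more complete defensive version, sketched on the assumption that no-op handlers are acceptable where you have nothing to do, registers a handler for every event before calling send():
var xdr = new XDomainRequest();
xdr.open("get", url);
// Register a handler for every event so IE9 never finds a null handler.
xdr.onprogress = function() { };
xdr.ontimeout = function() { };
xdr.onerror = function() { /* handle the failure */ };
xdr.onload = function() { /* use xdr.responseText */ };
xdr.send();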
Other Issues
I've seen reports that XDomainRequest can fail if the event handlers are registered before .open() is called. Again, defensively, it's not a bad idea to register them between the .open() and .send() calls. I haven't personally verified whether it's an actual bug.
If you run into an "Access Denied" error, it's because XDomainRequest doesn't allow mismatched URI schemes between the target and the host page. In other words, don't try to call an HTTP resource from an HTTPS page.
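One way to sidestep the scheme mismatch, assuming the target host is reachable over the same scheme as the page (the hostname below is a placeholder):
// Build the target URL with the same scheme as the hosting page,
// so an HTTPS page calls an HTTPS endpoint and an HTTP page calls HTTP.
var url = window.location.protocol + "//api.example.com/data";
var xdr = new XDomainRequest();
xdr.open("get", url);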
Beware of most of the XDomainRequest libraries on the internet. I looked at most of the popular ones, such as the various jQuery AJAX transport plugins (including the ones linked in another answer here), and they generally don't account for the issues described above.
And, of course, XDomainRequest is subject to all of its normal limitations and constraints. These aren't bugs per se, and compared with the alternatives (iframe kludges, Flash crossdomain.xml transports) they're not that bad.
I've posted a new jQuery AJAX XDomainRequest transport under a public domain license here: https://github.com/ebickle/snippets/tree/master/javascript/xdomainrequest

Had the exact same question. Short solution:
Use this code: https://github.com/jaubourg/ajaxHooks/blob/master/src/ajax/xdr.js
UPDATE: the link is broken; find jaubourg's fix here: https://github.com/jaubourg/ajaxHooks/blob/master/src/xdr.js
Add xdr.onprogress = function() {}; in that xdr.js file.
Details can be found in the jQuery ticket discussion here:
http://bugs.jquery.com/ticket/8283
The last reply there includes the xdr.onprogress fix, which originated from this bug discussion, aptly titled
"IE9 RTM - XDomainRequest issued requests may abort if all event handlers not specified":
http://social.msdn.microsoft.com/Forums/en-US/iewebdevelopment/thread/30ef3add-767c-4436-b8a9-f1ca19b4812e

Related

XMLHttpRequest returning with status 200, but 'onreadystatechange' event not fired

We have been seeing an intermittent bug with the XMLHttpRequest object when using IE11. Our codebase uses a legacy architecture, so this browser is required.
After clicking a button, the browser launches an out-of-band process by creating a new ActiveX control which integrates with a camera to capture an image. This control appears to be working fine... it allows the operator to capture the image, and the Base64 content of the image is returned from the control back to the browser interface, so I think we can rule out a problem with this object.
Once the image is returned to the browser, the browser performs an asynchronous 'ping' to the web server to check whether the IIS session is still alive or has expired (because the out-of-band image capture process forbids control of the browser while it is open).
The ping to the server returns successfully (and running Fiddler I can see that the response has status 200), with the expected response data:
<sessionstate>ok</sessionstate>
There is a defined 'onreadystatechange' function which should be fired on this response, and the majority of the time it seems to fire correctly. However, on the rare occasion the problem does appear, it then continues to happen every time.
Here is a snippet of the code... we expect the 'callback()' function to be called on a successful response to Timeout.asp:
XMLPoster.prototype.checkSessionAliveAsync = function(callback) {
    var checkSessionAlive = new XMLHttpRequest();
    checkSessionAlive.open("POST", "Timeout.asp?Action=ping", true);
    checkSessionAlive.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    checkSessionAlive.onreadystatechange = function() {
        if (checkSessionAlive.readyState == 4) {
            if (checkSessionAlive.responseText.indexOf("expired") != -1 || checkSessionAlive.status !== 200) {
                eTop.window.main.location = "timeout.asp";
                return;
            }
            callback(checkSessionAlive.responseText);
        }
    }
    checkSessionAlive.send();
}
Has anyone seen anything like this before? I appreciate that using legacy software is not ideal, but we are currently limited to using it.

Suppressing HTMLImageElement onerror [duplicate]

I'm running a script that is dynamically loading images depending on a few criteria. The script does not know beforehand whether a specific image source actually exists and will thus need to check before displaying the image. I do this by replacing the onerror handler on the element with a function that attempts to gracefully handle the event.
At first glance this works rather well; however, even though I have replaced the event handler, the browser still logs 404 errors in the console, which I don't want. Even worse, IE displays the infamous JS error icon in the status bar, which I find rather awkward.
I've tried to summarise the problem in a JSFiddle.
var img = new Image();
img.onerror = function (ev) {
    alert("This does not exist!");
    return true;
}
img.onload = function (ev) {
    document.body.appendChild(this);
}
img.src = "foo.png"; // Missing image
Basically, I want to suppress all error reporting for this element such that the console doesn't get flooded with superfluous error output.
I know that I could solve this by prefetching and evaluating the HTTP headers with AJAX and server-side scripting, which, while technically a possible solution, is something I would prefer to avoid. However, while I only use standard JS in my example, jQuery code is also acceptable.
I have tried reading up on the event specification, but since web scripting is still the mess of confusing ECMAScript, HTML DOM, client, pixie dust and now HTML5 definitions that we all love to hate, it really didn't make me any wiser. The closest I got was Mozilla's documentation (which interestingly doesn't even state the correct function parameters), which suggested that letting the event handler return true would suppress errors, but that didn't really work.
I believe you cannot check whether an image link is broken or doesn't exist without getting a 404 error; the 404 is precisely the information that tells you the link is broken.
You mentioned that the other way is to check existence with AJAX...
function UrlExists(url) {
    var http = new XMLHttpRequest();
    http.open('HEAD', url, false);
    http.send();
    return http.status != 404;
}
UrlExists('img_url');
but you will still get a 404 in the console.
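If jQuery is acceptable, a rough asynchronous equivalent (with a hypothetical ifImageExists helper) might look like this; note the 404 will still show up in the console:
// Sketch: check for the image with a HEAD request and only create the
// element if the server says it exists.
function ifImageExists(url, callback) {
    $.ajax({
        type: "HEAD",
        url: url,
        success: function() { callback(true); },
        error: function() { callback(false); }
    });
}

ifImageExists("foo.png", function(exists) {
    if (exists) {
        $("<img>").attr("src", "foo.png").appendTo("body");
    }
});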

How synchronous AJAX call could cause memory leak?

I understand the general advice given against the use of synchronous AJAX calls, because synchronous calls block UI rendering.
The other reason generally given is memory leak issues with synchronous AJAX.
From the MDN docs -
Note: You shouldn't use synchronous XMLHttpRequests because, due to the inherently asynchronous nature of networking, there are various ways memory and events can leak when using synchronous requests. The only exception is that synchronous requests work well inside Workers.
How could synchronous calls cause memory leaks?
I am looking for a practical example.
Any pointers to any literature on this topic would be great.
If XHR is implemented correctly per spec, then it will not leak:
An XMLHttpRequest object must not be garbage collected if its state is OPENED and the send() flag is set, its state is HEADERS_RECEIVED, or its state is LOADING, and one of the following is true:
It has one or more event listeners registered whose type is readystatechange, progress, abort, error, load, timeout, or loadend.
The upload complete flag is unset and the associated XMLHttpRequestUpload object has one or more event listeners registered whose type is progress, abort, error, load, timeout, or loadend.
If an XMLHttpRequest object is garbage collected while its connection is still open, the user agent must cancel any instance of the fetch algorithm opened by this object, discarding any tasks queued for them, and discarding any further data received from the network for them.
So after you hit .send(), the XHR object (and anything it references) becomes immune to GC. However, any error or success will put the XHR into the DONE state and it becomes subject to GC again. It wouldn't matter at all whether the XHR object is sync or async. In the case of a long sync request, again it doesn't matter, because you would just be stuck on the send statement until the server responds.
However, according to this slide it was not implemented correctly at least in Chrome/Chromium in 2012. Per spec, there would be no need to call .abort() since the DONE state means that the XHR object should already be normally GCd.
I cannot find even the slightest evidence to back up the MDN statement, and I have contacted the author through Twitter.
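To illustrate the spec behaviour quoted above, here is a minimal sketch (the URL is a placeholder): nothing outside the function holds a reference to the XHR, but because it is in flight and has a load listener, a conforming engine must not collect it before it reaches DONE:
function fireAndHandle() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/some/endpoint", true);
    xhr.onload = function() {
        // Still called: the in-flight XHR with a registered listener
        // is pinned by the spec until it reaches the DONE state.
        console.log(xhr.status);
    };
    xhr.send();
    // xhr goes out of scope here, but the request is not aborted.
}
fireAndHandle();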
I think that memory leaks happen mainly because the garbage collector can't do its job; i.e. you have a reference to something and the GC cannot delete it. I wrote a simple example:
var getDataSync = function(url) {
    console.log("getDataSync");
    var request = new XMLHttpRequest();
    request.open('GET', url, false); // `false` makes the request synchronous
    try {
        request.send(null);
        if (request.status === 200) {
            return request.responseText;
        } else {
            return "";
        }
    } catch(e) {
        console.log("!ERROR");
    }
}

var getDataAsync = function(url, callback) {
    console.log("getDataAsync");
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function (e) {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                callback(xhr.responseText);
            } else {
                callback("");
            }
        }
    };
    xhr.onerror = function (e) {
        callback("");
    };
    xhr.send(null);
}

var requestsMade = 0
var requests = 1;
var url = "http://missing-url";
for (var i = 0; i < requests; i++, requestsMade++) {
    getDataSync(url);
    // getDataAsync(url);
}
Apart from the fact that the synchronous function blocks a lot of things, there is another big difference: error handling. If you use getDataSync, remove the try-catch block and refresh the page, you will see that an error is thrown. That's because the URL doesn't exist, but the question now is how the garbage collector works when an error is thrown. Does it clear all the objects connected with the error, does it keep the error object, or something like that? I'd be glad if someone who knows more about that would write it here.
If the synchronous call is interrupted (i.e. by a user event re-using the XMLHttpRequest object) before it completes, then the outstanding network query can be left hanging, unable to be garbage collected.
This is because, if the object that initiated the request does not exist when the request returns, the return cannot complete, but (if the browser is imperfect) remains in memory. You can easily cause this using setTimeout to delete the request object after the request has been made but before it returns.
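A rough sketch of that scenario, with hypothetical names, in which the only reference to the request is discarded before the response arrives:
var pendingRequest = new XMLHttpRequest();
pendingRequest.open("GET", "/slow/endpoint", true);
pendingRequest.onload = function() {
    // In an imperfect browser, the answer argues, the return can no
    // longer complete once the initiating object has been discarded.
};
pendingRequest.send();

setTimeout(function() {
    pendingRequest = null; // drop the request object mid-flight
}, 10);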
I remember I had a big problem with this in IE, back around 2009, but I would hope that modern browsers are not susceptible to it. Certainly, modern libraries (e.g. jQuery) prevent the situations in which it might occur, allowing requests to be made without having to think about it.
A sync XHR blocks thread execution, and it blocks every object on that thread's function execution stack from being GCed.
E.g.:
function (b) {
    var a = <big data>;
    // <work with> a and b
    // sync XHR
}
The variables a and b are blocked here (and the whole stack is too).
So if the GC starts running while the sync XHR has the stack blocked, all stack variables will be marked as having survived GC and will be moved from the early heap to the more persistent one. A ton of objects that should not survive even a single GC will live through many garbage collections, and even objects referenced from them will survive GC.
On the claim that the stack blocks GC and that objects get marked as long-lived, see the section on Conservative Garbage Collection in Clawing Our Way Back To Precision.
Also, "marked" objects are GCed after the usual heap is GCed, and usually only if there is still a need to free more memory (since collecting marked-and-swept objects takes more time).
UPDATE:
Is it really a leak, and not just an inefficient use of the early heap? There are several things to consider.
How long will these objects stay locked after the request is finished? Sync XHR can block the stack for an unlimited amount of time, since XHR has no timeout property (in all non-IE browsers) and network problems are not rare.
How many UI elements are locked? If it blocks 20 MB of memory for just 1 second, that is roughly equivalent to a 200 KB leak over 2 minutes. Consider many background tabs.
Consider the case where a single sync request blocks a ton of resources and the browser goes to the swap file.
When another event tries to alter the DOM it may be blocked by the sync XHR, so another thread is blocked too (along with its whole stack).
If the user repeats the actions that lead to the sync XHR, the whole browser window will be locked. Browsers use at most two threads to handle window events.
Even without blocking, this consumes lots of OS and browser internal resources: threads, critical section resources, UI resources, DOM... Imagine that you can open (due to memory pressure) only 10 tabs with sites that use sync XHR, versus 100 tabs with sites that use async XHR. Isn't that a memory leak?
Memory leaks with synchronous AJAX requests are often caused by:
using setInterval/setTimeout causing circular calls (sketched just below);
XMLHttpRequest - when the reference is removed, so the xhr object becomes inaccessible.
A memory leak happens when the browser for some reason doesn't release memory from objects which are not needed any more.
This may happen because of browser bugs, browser extension problems and, much more rarely, mistakes in our code architecture.
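As a rough, hypothetical illustration of the first cause, a timer callback that reschedules itself forever keeps everything captured by its closure alive:
function startPolling() {
    // Hypothetical: a large structure captured by the closure below.
    var cache = new Array(100000).join("x");

    (function poll() {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/status", false); // synchronous, as discussed in this thread
        xhr.send();
        // poll reschedules itself forever and closes over `cache`,
        // so `cache` can never be collected while the chain runs.
        setTimeout(poll, 1000);
    })();
}
startPolling();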
Here's an example of a memory leak being caused when running setInterval in a new context:
var Context = process.binding('evals').Context,
    Script = process.binding('evals').Script,
    total = 5000,
    result = null;

process.nextTick(function memory() {
    var mem = process.memoryUsage();
    console.log('rss:', Math.round(((mem.rss / 1024) / 1024)) + "MB");
    setTimeout(memory, 100);
});

console.log("STARTING");

process.nextTick(function run() {
    var context = new Context();
    context.setInterval = setInterval;

    Script.runInContext('setInterval(function() {}, 0);',
        context, 'test.js');

    total--;
    if (total) {
        process.nextTick(run);
    } else {
        console.log("COMPLETE");
    }
});

Firefox not starting onreadystatechange function

I made some JavaScript code for my website; it works without problems in Opera and Chrome, but not in Firefox.
Here is the script:
function checkstate(who, row, cell) {
    var zwrot = "";
    var mouseEvent = 'onmouseover="javascript:bubelon(this.id);" onmouseout="bubeloff();"';
    var cellid = "";
    ajax = new XMLHttpRequest();
    ajax.onreadystatechange = function(aEvt) {
        if (ajax.readyState === 4 && ajax.status === 200) {
            alert("im here!");
        }
    };
    ajax.open('GET', "oth/work_prac_stan.php?usr=" + who, false);
    ajax.send();
}

function sprawdzstan() {
    var lol = "";
    var table = document.getElementById("usery");
    var re = /^<a\shref\=/g;
    for (var i = 1, row; row = table.rows[i]; i++) {
        if (row.cells[0].innerHTML.match(re)) {
            checkstate(row.cells[1].innerHTML, row, 2);
        } else {
            checkstate(row.cells[0].innerHTML, row, 1);
        }
    }
}
The problem is that Firefox is not running the function assigned to onreadystatechange. I checked in Firebug that the response from the PHP file is correct.
Where is the problem? It works in Chrome and Opera; Firefox just doesn't, with no error in the console, nothing.
Updated answer
According to Mozilla's docs, you don't use onreadystatechange with synchronous requests. Which kind of makes sense, since the request doesn't return until the ready state is 4 (completed), though I probably wouldn't have designed it that way.
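For example, a minimal sketch of checkstate switched to an asynchronous request (same URL and alert as in the question), under which the handler does fire in Firefox:
function checkstate(who, row, cell) {
    var ajax = new XMLHttpRequest();
    ajax.onreadystatechange = function() {
        if (ajax.readyState === 4 && ajax.status === 200) {
            alert("im here!");
        }
    };
    // true = asynchronous, so onreadystatechange is actually used
    ajax.open('GET', "oth/work_prac_stan.php?usr=" + who, true);
    ajax.send();
}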
Original answer
Not immediately seeing a smoking gun, but: Your ajax variable is not defined within the function, and so you're almost certainly overwriting it on every iteration of the loop in sprawdzstan. Whether that's a problem remains to be seen, since you're using a synchronous ajax call. In any case, add a var ajax; to checkstate to ensure that you're not falling prey to the Horror of Implicit Globals.
Off-topic: If you can possibly find a way to refactor your design to not use a synchronous ajax request, strongly recommend doing that. Synchronous requests lock up the UI of the browser (to a greater or lesser degree depending on the browser, but many — most? — completely lock up, including other unrelated tabs). It's almost always possible to refactor and use an asynchronous request instead.
Off-topic 2: You aren't using mouseEvent in your code, but if you were, you would want to get rid of those javascript: prefixes on the onmouseover and onmouseout attributes. Those attributes are not URLs, the prefix is not (there) a protocol specifier (it's a label, which you're not using).
For those who still encounter this problem...
You can use the code below. What I did was remove the function
ajax.onreadystatechange = function(aEvt) {
and move the alert("im here!"); to after the ajax.send();
ajax=new XMLHttpRequest();
ajax.open('GET',"oth/work_prac_stan.php?usr="+who,false);
ajax.send();
alert("im here!");

javascript/dashcode: check for internet connection

I'm developing a widget that fetches data from the internet via AJAX, and I want to provide an error message if the widget cannot connect to the server. I'm doing the request with jQuery's ajax object, which provides an error callback function, but it's not called when there is no internet connection, only if the request is made but fails for other reasons.
Now how can I check whether the computer is connected to the internet?
UPDATE: Since you are creating a Dashboard widget, I ran a number of tests.
I found that the $.ajax call actually triggered an error when there was no internet connection. So I went about creating an XMLHttpRequest object manually, with great success. If you need JSON parsing, I suggest also including the json2.js parser.
Things I did to make this work:
In Widget Attributes in Dashcode I clicked "Allow Network Access" (If you aren't using Dashcode, check the docs for the proper plist setting to turn this on)
I used the following code:
var xhr = new XMLHttpRequest();
xhr.addEventListener('readystatechange', state_change, true);
xhr.open("GET", url, true);
xhr.send(null);

function state_change() {
    if (xhr.readyState == 4) {
        if (xhr.status == 200) {
            console.log('worked'); // Only works if running in Dashcode
            // use xhr.responseText or JSON.parse(xhr.responseText)
        } else if (xhr.status == 0) {
            console.log('no internet'); // Only works if running in Dashcode
        } else {
            // Some other error
        }
    }
}
/End Update
I answered this by editing my answer to your original question since you asked it in the comments. After commenting I saw you posted this question.
To summarize, add the timeout parameter to your $.ajax call and set it to a low number (like 5000 milliseconds). Your error function will be called after the request times out.
In your error function, the second argument is status; check whether it == "timeout". If it does, you couldn't reach the web service (or whatever you're connecting to), regardless of whether you have internet access or not. I'm assuming that's what you care about.
$.ajax({
    /* your other params here */
    error: function (req, status, error) {
        if (status == "timeout") alert("fail!");
    },
    timeout: 2000 // 2 seconds
});
See the sections on timeout and error here.
Just one line
if (window.navigator.onLine) {
    // You are connected to internet
} else {
    // You are not connected to internet
}
window.navigator.onLine returns true or false.
Tested on IE 8 and Mozilla Firefox 3.5.7. Please check on other older browsers.
You could just write it with standard, easy-to-understand JavaScript code. I have not developed any 'widgets', just internet iPhone apps, but it should still work. Here you go:
var online = window.navigator.onLine;
if (!online) {
    alert('You are not currently connected to the internet. Please try again later.');
}
-Connor
One idea...
Set a JavaScript timer. If the AJAX call is successful, clear the timer. If the timer fires, that is your indication that the request failed. For example:
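A minimal sketch of that idea using jQuery (the URL and the 5-second window are placeholders):
// Assume failure unless the request comes back before the timer fires.
var failTimer = setTimeout(function() {
    alert("Could not reach the server.");
}, 5000);

$.ajax({
    url: "http://example.com/data",
    success: function(data) {
        clearTimeout(failTimer); // got a response in time
        // use data here
    }
});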
As a side note...
It's tough to tell whether a computer is on the internet, because for most computers the internet starts at the switch >> router >> modem >> router >> etc... Where it is "broken" is usually several hops out, and the only way (that I know of) to know whether you are online is to "try".
