Avoid XMLHttpRequest chain calling leak - javascript

In my code (a monitoring application) I need to call the server periodically with an XMLHttpRequest object, in the form of chained calls. Each call takes exactly 15 seconds; that duration is controlled by the server, which delivers several partial results within the period (HTTP 100 Continue). Immediately after the current call finishes, its onreadystatechange event handler needs to create and launch the next request (with a new instance), so the communication with the server remains almost seamless.
The way it works, each call retains the caller's object context on the stack; since this is a page that must remain open for days, the stack keeps growing with no chance for the garbage collector to reclaim the data. (Stack trace omitted.)
I cannot use timers (setInterval or such) to launch the next request; it should be launched from inside the ending of the previous one. The data from the server must arrive as quickly as possible, and unfortunately browsers nowadays throttle timers when a page is not in focus. As I said, this is a monitoring application meant to be always on in the users' secondary monitors (rarely in focus). I also need to deal with HTTP timeouts and other kinds of errors that derail the 15-second sequence. There should always be one, and only one, channel open to the server.
My question is whether there is any way to avoid keeping the whole context on the stack when creating an XMLHttpRequest object. Even calling the click() method on a DOM object keeps the stack/context alive. Even promises seem to keep the context.
I'm also unable to use websockets, as the server does not support them.
UPDATE:
It's more complex, but in essence it's like:
var xhttpObjUrl;
var xhttpObj;

function onLoad() {
    loadXMLDoc(pollURL + "first=1", true);
}

function loadXMLDoc(url, longtimeout) {
    xhttpObjUrl = url;
    xhttpObj = new XMLHttpRequest();
    xhttpObj.open(method, url, true); // method, commlog, consolelog: defined elsewhere in the full code
    xhttpObj.onprogress = progress;
    xhttpObj.onloadend = progress;
    xhttpObj.ontimeout = progress;
    if (commlog) consolelog("loadXMLDoc(): url == " + url);
    xhttpObj.send("");
}

function progress() {
    if (!xhttpObj) return;
    var state = xhttpObj.readyState;
    var status;
    var statusText;
    if (state == 4 /* complete */ || state == 3 /* partial content */) {
        try {
            status = xhttpObj.status;
            statusText = xhttpObj.statusText;
            if (status == 200) parseServerData();
        } catch (err) {
            status = 500;
            statusText = err;
        }
        if (state == 4 || status != 200) {
            /* SERVER TERMINATES THE CONNECTION AFTER 15 SECONDS */
            /* ERROR HANDLING REMOVED */
            var obj = xhttpObj;
            xhttpObj = undefined;
            abortRequest(obj);
            obj = false;
            RequestEnd();
        }
    }
}

function RequestEnd(error) {
    var now = (new Date).getTime();
    var msdiff = now - lastreqstart;
    var code = function () {
        loadXMLDoc(pollURL + 'lastpoint=' + evtprev.toString() + '&lastevent=' + evtcurrent.toString());
        return false;
    };
    if (msdiff < 1000) addTimedCheck(1, code); /** IGNORE THIS **/
    else code();
}

I've solved my problem using a web worker. The worker runs each XMLHttpRequest to completion and sends the page a message with the collected data. Then, when the page finishes processing the data, it sends the worker a message to start a new request. Thus my page doesn't have any unwanted delays between requests, and there's no stack constantly building up. On error I terminate the worker and create a new one, just in case.
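A minimal sketch of that setup (the worker file name poller.js and the nextPollUrl() helper are placeholders I've added; parseServerData and pollURL come from the code above):

// poller.js (hypothetical worker file): one XHR per message from the page
self.onmessage = function (e) {
    var xhr = new XMLHttpRequest();
    xhr.onloadend = function () {
        // The request ends inside the worker, so no page-side caller
        // context is retained on any stack.
        self.postMessage({ status: xhr.status, body: xhr.responseText });
    };
    xhr.open("POST", e.data.url, true);
    xhr.send("");
};

// main page
var worker = new Worker("poller.js");
worker.onmessage = function (e) {
    parseServerData(e.data.body);                 // process the collected data
    worker.postMessage({ url: nextPollUrl() });   // then start the next request
};
worker.postMessage({ url: pollURL + "first=1" }); // kick off the chain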

Related

How to determine that an SSE connection was closed?

We have an SSE (Server-Sent Events) connection open in JavaScript which can, from time to time, get closed, either because of server restarts or other causes. In that case it would be good to reestablish the connection. How can we do it? Is there a way to find out on the client side that the connection was closed?
Here: https://developer.mozilla.org/en-US/docs/Web/API/EventSource I found only a way to close the connection, but no callback or a test method for determining whether the connection is still alive.
Thank you for your help.
If the connection is closed (in such a way that the browser can detect it), it will automatically reconnect. And it tends to do this quickly (the default is 3 seconds in Chrome, 5 seconds in Firefox). readyState will be CONNECTING (0) while it is doing this. It is only ever CLOSED (2) if there was some problem connecting in the first place (e.g. due to a CORS issue). Once CLOSED, it does not retry.
I prefer to add a keep-alive mechanism on top, as the browser cannot always detect dead sockets (not to mention a remote server process that is locked up, etc.). See ch.5 of Data Push Apps with HTML5 SSE for detailed code, but basically it involves having the server send a message every 15 seconds, then a JavaScript timer that runs for 20 seconds, but is reset each time a message is received. If the timer ever does expire, we close the connection and reconnect.
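A minimal sketch of that keep-alive layer, assuming the server really does send something at least every 15 seconds (eventUrl is a placeholder):

var keepAliveTimer = null;
var es = null;

function resetKeepAlive() {
    if (keepAliveTimer) clearTimeout(keepAliveTimer);
    keepAliveTimer = setTimeout(function () {
        es.close();   // nothing heard for 20 seconds: assume the socket is dead
        connect();    // and open a fresh connection
    }, 20 * 1000);
}

function connect() {
    es = new EventSource(eventUrl);
    es.onmessage = function (e) {
        resetKeepAlive();   // any message, including heartbeats, resets the timer
        // ...handle the data...
    };
    resetKeepAlive();       // arm the timer before the first message, too
}

connect();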
EventSource API Update
EventSource API now has three event handlers:
onerror
onmessage
onopen
These should be enough to handle everything you need on the client side.
Something like this:
const ssEvent = new EventSource(eventUrl);

ssEvent.onopen = function (evt) {
    // handle newly opened connection
};

ssEvent.onerror = function (evt) {
    // handle dropped or failed connection
};

ssEvent.onmessage = function (evt) {
    // handle new event from server
};
Ref: mozilla.org : EventSource : Event handlers
Browser support for EventSource API: onopen - caniuse.com
Check the readyState property:
var es = new EventSource(url);

// Check that the connection is not closed
es.readyState !== 2;
// or
es.readyState !== EventSource.CLOSED;
It is best not to try to determine whether the connection was closed; I do not think there is a reliable way to do it. Server-Sent Events work differently in all of the browsers, but they all close the connection under certain circumstances. Chrome, for example, closes the connection on 502 errors while a server is restarted. So it is best to use a keep-alive as others suggest, or to reconnect on every error. A keep-alive only reconnects at a specified interval, which must be kept long enough to avoid overwhelming the server. Reconnecting on every error has the lowest possible delay, but it is only viable if you take an approach that keeps server load to a minimum. Below, I demonstrate an approach that reconnects at a reasonable rate.
This code uses a debounce function along with reconnect-interval doubling. It works well, connecting at 1 second, then 2, 4, 8... up to a maximum of 64 seconds, at which point it keeps retrying at that rate.
function isFunction(functionToCheck) {
    return functionToCheck && {}.toString.call(functionToCheck) === '[object Function]';
}

function debounce(func, wait) {
    var timeout;
    var waitFunc;
    return function() {
        if (isFunction(wait)) {
            waitFunc = wait;
        } else {
            waitFunc = function() { return wait; };
        }
        var context = this, args = arguments;
        var later = function() {
            timeout = null;
            func.apply(context, args);
        };
        clearTimeout(timeout);
        timeout = setTimeout(later, waitFunc());
    };
}

// reconnectFrequencySeconds doubles every retry
var reconnectFrequencySeconds = 1;
var evtSource;

var reconnectFunc = debounce(function() {
    setupEventSource();
    // Double every attempt to avoid overwhelming the server
    reconnectFrequencySeconds *= 2;
    // Max out at ~1 minute as a compromise between user experience and server load
    if (reconnectFrequencySeconds >= 64) {
        reconnectFrequencySeconds = 64;
    }
}, function() { return reconnectFrequencySeconds * 1000; });

function setupEventSource() {
    evtSource = new EventSource(/* URL here */);
    evtSource.onmessage = function(e) {
        // Handle event here
    };
    evtSource.onopen = function(e) {
        // Reset reconnect frequency upon successful connection
        reconnectFrequencySeconds = 1;
    };
    evtSource.onerror = function(e) {
        evtSource.close();
        reconnectFunc();
    };
}
setupEventSource();

EventSource permanent auto reconnection

I am using JavaScript EventSource in my project front-end.
Sometimes, the connection between the browser and the server fails or the server crashes. In these cases, EventSource tries to reconnect after 3 seconds, as described in the documentation.
But it tries only once. If there is still no connection, the EventSource stops trying to reconnect and the user has to refresh the browser window in order to be connected again.
How can I prevent this behavior? I need the EventSource to try reconnecting forever, not only once.
The browser is Firefox.
I deal with this by implementing a keep-alive system; if the browser reconnects for me that is all well and good, but I assume sometimes it won't work, and also that different browsers might behave differently.
I spend a fair few pages on this in chapter five of my book (Blatant plug, find it at O'Reilly here: Data Push Applications Using HTML5 SSE), but if you want a very simple solution that does not require any back-end changes, set up a global timer that will trigger after, say, 30 seconds. If it triggers then it will kill the EventSource object and create another one. The last piece of the puzzle is in your event listener(s): each time you get data from the back-end, kill the timer and recreate it. I.e. as long as you get fresh data at least every 30 seconds, the timer will never trigger.
Here is some minimal code to show this:
var keepAliveTimer = null;

function gotActivity() {
    if (keepAliveTimer != null) clearTimeout(keepAliveTimer);
    keepAliveTimer = setTimeout(connect, 30 * 1000);
}

function connect() {
    gotActivity();
    var es = new EventSource("/somewhere/");
    es.addEventListener('message', function(e) {
        gotActivity();
    }, false);
}

...

connect();
Also note that I call gotActivity() just before connecting. Otherwise a connection that fails, or dies before it gets a chance to deliver any data, would go unnoticed.
By the way, if you are able to change the back-end too, it is worth sending out a blank message (a "heartbeat") after 25-30 seconds of quiet. Otherwise the front-end will have to assume the back-end has died. No need to do anything, of course, if your server is sending out regular messages that are never more than 25-30 seconds apart.
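On the back-end, that heartbeat can be a trivial timer. Here is a Node.js-style sketch (res is assumed to be an already-open SSE response with the text/event-stream headers sent):

var heartbeat = setInterval(function () {
    // Send a tiny real message rather than an SSE ":" comment, because
    // comments do not fire onmessage on the client and so would not
    // reset a keep-alive timer based on message events.
    res.write("data: keep-alive\n\n");
}, 25 * 1000);

res.on("close", function () {
    clearInterval(heartbeat); // stop when the client disconnects
});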
If your application relies on the Last-Event-ID header, realize your keep-alive system has to simulate this; that gets a bit more involved.
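A sketch of that simulation (lastId and the lastEventId query parameter are illustrative; note the browser only sends the real Last-Event-ID header on its own automatic reconnects, not when your code creates a fresh EventSource):

var lastId = null;

function connect() {
    var url = "/somewhere/";
    if (lastId !== null) {
        // Carry our position explicitly, e.g. as a query parameter the
        // back-end understands, since the header won't be sent for us.
        url += "?lastEventId=" + encodeURIComponent(lastId);
    }
    var es = new EventSource(url);
    es.addEventListener('message', function(e) {
        if (e.lastEventId) lastId = e.lastEventId; // remember where we are
        gotActivity();
    }, false);
}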
In my experience, browsers will usually reconnect if there's a network-level error but not if the server responds with an HTTP error (e.g. status 500).
Our team made a simple wrapper library to reconnect in all cases: reconnecting-eventsource. Maybe it's helpful.
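If I remember the package correctly, it is meant as a drop-in replacement for EventSource (a sketch; check the project's README for the exact import and options):

import ReconnectingEventSource from "reconnecting-eventsource";

// Same interface as a plain EventSource, but it reconnects in all failure cases.
const es = new ReconnectingEventSource("/events");
es.onmessage = function (e) {
    // handle events exactly as with a plain EventSource
};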
Below is an approach that reconnects at a reasonable rate, forever. It uses a debounce function along with reconnect-interval doubling and, in my testing, works well: it connects at 1 second, then 2, 4, 8... up to a maximum of 64 seconds, at which point it keeps retrying at that rate. The code is the same isFunction/debounce/setupEventSource listing shown in the previous answer above.

Error: The page has been destroyed and can no longer be used

I'm developing an add-on for the first time. It puts a little widget in the status bar that displays the number of unread Google Reader items. To accommodate this, the add-on process queries the Google Reader API every minute and passes the response to the widget. When I run cfx test I get this error:
Error: The page has been destroyed and can no longer be used.
I made sure to catch the widget's detach event and stop the refresh timer in response, but I'm still seeing the error. What am I doing wrong? Here's the relevant code:
// main.js - Main entry point
const tabs = require('tabs');
const widgets = require('widget');
const data = require('self').data;
const timers = require("timers");
const Request = require("request").Request;

function refreshUnreadCount() {
    // Put in Google Reader API request
    Request({
        url: "https://www.google.com/reader/api/0/unread-count?output=json",
        onComplete: function(response) {
            // Ignore response if we encountered a 404 (e.g. user isn't logged in)
            // or a different HTTP error.
            // TODO: Can I make this work when third-party cookies are disabled?
            if (response.status == 200) {
                monitorWidget.postMessage(response.json);
            } else {
                monitorWidget.postMessage(null);
            }
        }
    }).get();
}

var monitorWidget = widgets.Widget({
    // Mandatory widget ID string
    id: "greader-monitor",
    // A required string description of the widget used for
    // accessibility, title bars, and error reporting.
    label: "GReader Monitor",
    contentURL: data.url("widget.html"),
    contentScriptFile: [data.url("jquery-1.7.2.min.js"), data.url("widget.js")],
    onClick: function() {
        // Open Google Reader when the widget is clicked.
        tabs.open("https://www.google.com/reader/view/");
    },
    onAttach: function(worker) {
        // If the widget's inner width changes, reflect that in the GUI
        worker.port.on("widthReported", function(newWidth) {
            worker.width = newWidth;
        });
        var refreshTimer = timers.setInterval(refreshUnreadCount, 60000);
        // If the monitor widget is destroyed, make sure the timer gets cancelled.
        worker.on("detach", function() {
            timers.clearInterval(refreshTimer);
        });
        refreshUnreadCount();
    }
});
// widget.js - Status bar widget script
// Every so often, we'll receive the updated item feed. It's our job
// to parse it.
self.on("message", function(json) {
    if (json == null) {
        $("span#counter").attr("class", "");
        $("span#counter").text("N/A");
    } else {
        var newTotal = 0;
        for (var item in json.unreadcounts) {
            newTotal += json.unreadcounts[item].count;
        }
        // Since the cumulative reading list count is a separate part of the
        // unread count info, we have to divide the total by 2.
        newTotal /= 2;
        $("span#counter").text(newTotal);
        // Update style
        if (newTotal > 0)
            $("span#counter").attr("class", "newitems");
        else
            $("span#counter").attr("class", "");
    }
    // Report the current width of the widget
    self.port.emit("widthReported", $("div#widget").width());
});
Edit: I've uploaded the project in its entirety to this GitHub repository.
I think if you use the method monitorWidget.port.emit("widthReported", response.json); you can fire the event. It's the second way to communicate between the content script and the add-on script.
Reference for the port communication
Reference for the communication with postMessage
I guess that this message comes up when you call monitorWidget.postMessage() in refreshUnreadCount(). The obvious cause for it would be: while you make sure to call refreshUnreadCount() only when the worker is still active, this function will do an asynchronous request which might take a while. So by the time this request completes the worker might be destroyed already.
One solution would be to pass the worker as a parameter to refreshUnreadCount(). It could then add its own detach listener (remove it when the request is done) and ignore the response if the worker was detached while the request was performed.
function refreshUnreadCount(worker) {
    var detached = false;
    function onDetach() {
        detached = true;
    }
    worker.on("detach", onDetach);
    Request({
        ...
        onComplete: function(response) {
            worker.removeListener("detach", onDetach);
            if (detached)
                return; // Nothing to update without data
            ...
        }
    }).get();
}
Then again, using try..catch to detect this situation and suppress the error would probably be simpler - but not exactly a clean solution.
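That simpler variant would look something like this (a sketch built on the question's own request code; it just swallows the error when the widget is gone by the time the response arrives):

Request({
    url: "https://www.google.com/reader/api/0/unread-count?output=json",
    onComplete: function(response) {
        try {
            monitorWidget.postMessage(response.status == 200 ? response.json : null);
        } catch (e) {
            // The widget was destroyed while the request was in flight;
            // nothing is left to update, so drop this response.
        }
    }
}).get();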
I've just seen your message on irc, thanks for reporting your issues.
You are facing some internal bug in the SDK. I've opened a bug about that here.
You should definitely keep the first version of your code, where you send messages to the widget, i.e. widget.postMessage (instead of worker.postMessage). Then we will have to fix the bug I linked to in order to just make your code work!!
Then I suggest you move the setInterval to the top level, otherwise you will fire multiple intervals and requests, one per window: the attach event is fired for each new Firefox window.
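In other words, something like this (a sketch based on the question's main.js; the remaining Widget options are unchanged):

// Top level of main.js: one timer for the whole add-on, not one per window.
timers.setInterval(refreshUnreadCount, 60000);

var monitorWidget = widgets.Widget({
    // ... id, label, contentURL, contentScriptFile, onClick as before ...
    onAttach: function(worker) {
        worker.port.on("widthReported", function(newWidth) {
            worker.width = newWidth;
        });
        refreshUnreadCount();
    }
});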

How to implement a getter-function (using callbacks)

I have to request data for a JS-script from a MySQL database (based upon a user-id).
I did not find a simple solution for JavaScript and it was not possible to load the data using ajax, because the database is available under a different domain.
I implemented a workaround using PHP and curl.
Now the JS has to "wait" for the request to finish, but the script is of course running asynchronously and does not wait for the response.
I know that it's not really possible to wait in JS, but it must be possible to return a value like this.
I also tried using a return inside another callback, but that didn't work, of course, because the getter function keeps running regardless.
How can I implement a simple getter which "waits" and returns the response from the HTTP request?
Thanks for any other clues. I'm really lost at the moment.
This is an excerpt from the source code:
/**
 * Simple getter which requests external data
 */
function simple_getter() {
    // HTTP request via a PHP script, because ajax won't work cross-domain.
    // This request takes some time; the function finishes before the request is done.

    /* Example */
    var url = "http://example-url.com/get_data.php?uid=1234";
    var response_callback = handle_result_response;

    var value = send_request(url, response_callback);
    value = value.split('*')[0];

    if (value === '' || value == const_pref_none) {
        return false;
    }

    /* 1. Returns undefined, because value is not yet set.
       2. this as a callback makes no sense, because this function
          will run asynchronously anyway. */
    return value;
}
Additional information about the used functions:
/**
 * Callback for the send_request function.
 * Basically returns only the responseText (string).
 */
function handle_result_response(req) {
    // do something more, but basically:
    return req.responseText;
}

/**
 * Requests data from a database (different domain) via a PHP script
 */
function send_request(url, response_callback) {
    var req = createXMLHTTPObject();
    if (!req)
        return;

    var method = (postData) ? "POST" : "GET"; // postData is defined elsewhere
    req.open(method, url, true);
    req.setRequestHeader('User-Agent', 'XMLHTTP/1.0');

    // More not relevant source code
    // ...

    req.onreadystatechange = function () {
        // More not relevant source code
        // ...
        response_callback(req);
    };

    if (req.readyState == 4)
        return;

    req.send(postData);
}
Not really relevant code, but required for the HTTP-request:
var XMLHttpFactories = [
    function () { return new XMLHttpRequest(); },
    function () { return new ActiveXObject("Msxml2.XMLHTTP"); },
    function () { return new ActiveXObject("Msxml3.XMLHTTP"); },
    function () { return new ActiveXObject("Microsoft.XMLHTTP"); }
];

function createXMLHTTPObject() {
    var xmlhttp = false;
    for (var i = 0; i < XMLHttpFactories.length; i++) {
        try {
            xmlhttp = XMLHttpFactories[i]();
        } catch (e) {
            continue;
        }
        break;
    }
    return xmlhttp;
}
You really, really shouldn't try to synchronously wait for a network request to complete. The request may never complete, may hang and take a long time, and so on. Since JavaScript is single threaded, and in fact all major browser engines are single threaded, this will cause your entire page to hang while waiting for the request, and in some browsers, may cause the entire browser to hang.
What you should do is replace code like this:
var returned = some_request('http://example.com/query');
do_something_with(returned);
with code like this:
some_request('http://example.com/query', function (returned) {
    do_something_with(returned);
});
That way, you will never cause your page or the browser to hang waiting for the request, and can simply do the work once the response comes in.
I don't see what's wrong with your code in general.
When you make a request, provide a callback. When a response comes back, which you can easily detect, execute the callback and pass it the result.
This is the way client-side apps work. It is not procedural, but works by events:
You present the screen to the user and wait
The user makes an action
You call the server, set a callback and wait
The response comes, you execute the callback and wait for another step 2
Rather than trying to change that, you need to fit in with it, or it will be a painful experience.
JavaScript is not multithreaded. That means a single statement runs at a time. The real asynchronism comes from the time the server takes to respond and call the callback. You never know which call will come first and need to build your program with that in mind.
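Applied to the question's code, the getter takes a callback instead of returning a value. Here is a sketch (get_value is an illustrative name; send_request, the split('*') logic and const_pref_none come from the question):

function get_value(callback) {
    var url = "http://example-url.com/get_data.php?uid=1234";
    send_request(url, function (req) {
        var value = req.responseText.split('*')[0];
        if (value === '' || value == const_pref_none) {
            callback(false);  // nothing usable came back
        } else {
            callback(value);  // hand the result to whoever asked for it
        }
    });
}

// The caller continues its work inside the callback instead of after a return:
get_value(function (value) {
    if (value !== false) {
        // ...use value here...
    }
});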

Consecutive Ajax requests without jQuery/ JS library

I have an issue, mainly with IE.
I need to be able to handle n queries one after another. But if I simply call my function below in a for loop, IE does some strange things (like loading only so many of the calls).
If I use an alert box it proves that the function gets all of the calls, and surprisingly IT WORKS!
My guess is that IE needs more time than other browsers, and the alert box does just that.
Here is my code:
var Ajax = function(all) {
    this.xhr = new XMLHTTPREQUEST(); // Function returns xhr object/activeX

    this.uri = function(queries) { // Takes an object and formats query string
        var qs = "", i = 0, len = size(queries);
        for (var value in queries) {
            qs += value + "=" + queries[value];
            if (++i < len) { qs += "&"; }
        }
        return qs;
    };

    this.xhr.onreadystatechange = function() { // called when content is ready
        if (this.readyState === 4) {
            if (this.status === 200) {
                all.success(this.responseText, all.params);
            }
            this.abort();
        }
    };

    this.post = function() { // POST
        this.xhr.open("POST", all.where, true);
        this.xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        this.xhr.send(this.uri(all.queries));
    };

    this.get = function() { // GET
        this.xhr.open("GET", all.where + "?" + this.uri(all.queries), true);
        this.xhr.send();
    };

    if (this instanceof Ajax) {
        return this.Ajax;
    } else {
        return new Ajax(all);
    }
};
This function works perfectly for a single request, but how can I get it to work when called so many times within a loop?
I think the problem might be related to the 2 concurrent connections limit that most web browsers implement.
It looks like the latency of your web service to respond is making your AJAX requests overlap, which in turn is exceeding the 2 concurrent connections limit.
You may want to check out these articles regarding this limitation:
The Dreaded 2 Connection Limit
The Two HTTP Connection Limit Issue
Circumventing browser connection limits for fun and profit
This limit is also suggested in the HTTP spec: section 8.14 last paragraph, which is probably the main reason why most browsers impose it.
To work around this problem, you may want to consider the option of relaunching your AJAX request ONLY after a successful response from the previous AJAX call. This will prevent the overlap from happening. Consider the following example:
function autoUpdate() {
    var ajaxConnection = new Ext.data.Connection();
    ajaxConnection.request({
        method: 'GET',
        url: '/web-service/',
        success: function(response) {
            // Add your logic here for a successful AJAX response.
            // ...
            // ...

            // Relaunch the autoUpdate() function in 100ms. (Could be less or more)
            setTimeout(autoUpdate, 100);
        }
    });
}
This example uses ExtJS, but you could very easily use just XMLHttpRequest.
Given that the limit to a single domain is 2 concurrent connections in most browsers, it doesn't confer any speed advantage launching more than 2 concurrent requests. Launch 2 requests, and dequeue and launch another each time one completes.
I'd suggest throttling your requests so you only have a few (4?) outstanding at any given time. You're probably seeing the result of multiple requests being queued and timing out before your code can handle them all. Just a guess, though. We have an ajax library with built-in throttling that queues the requests so we only have 4 outstanding at any one time, and we don't see any problems. We routinely queue lots per page.
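A minimal queue of that kind, built on the question's Ajax wrapper, might look like this (MAX_IN_FLIGHT, enqueue, pump and launch are illustrative names; note the wrapper only calls success on HTTP 200, so a production version would also need a failure path that frees the slot):

var MAX_IN_FLIGHT = 2; // stay at or under the browser's per-host connection limit
var queue = [];
var inFlight = 0;

function enqueue(all) {
    queue.push(all);
    pump();
}

function pump() {
    while (inFlight < MAX_IN_FLIGHT && queue.length > 0) {
        launch(queue.shift());
    }
}

function launch(all) {
    var userSuccess = all.success;
    all.success = function(text, params) { // wrap to learn about completion
        inFlight--;
        if (userSuccess) userSuccess(text, params);
        pump();                            // a slot is free: start the next one
    };
    inFlight++;
    new Ajax(all).get();
}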
Your code looks like it's put together using the constructor pattern. Are you invoking it with the new operator like var foo = new Ajax(...) in your calling code? Or are you just calling it directly like var foo = Ajax(...) ?
If the latter, you're likely overwriting state on your later calls. It looks like it's designed to be called to create an object, on which the get/post methods are called. This could be your problem if you're "calling it within a loop" as you say.
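For reference, creating and using an instance with new would look like this (the where/queries/params/success fields mirror the wrapper above; the URL and values are illustrative):

var req = new Ajax({
    where: "/web-service/",        // endpoint the wrapper will call
    queries: { id: 1, page: 2 },   // turned into "id=1&page=2" by uri()
    params: null,                  // passed through to the success callback
    success: function(text, params) {
        // handle this request's response here
    }
});
req.get();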
