I have a problem with IE and SignalR. I'm using it to perform a sync action between two databases, and the action completes successfully on Google Chrome / Firefox / Safari in all scenarios.
Using IE, the sync runs successfully the first time only; on the second attempt a pending request gets stuck and the page freezes forever.
I found a solution online which suggests changing the transport mode, but the page still freezes.
if (isIE()) {
    $.connection.hub.start({ transport: ['serverSentEvents', 'foreverFrame'] }).done(function () {
        progressNotifier.server.DoMyLongAction();
    });
} else {
    $.connection.hub.start({ transport: ['serverSentEvents', 'longPolling'] }).done(function () {
        progressNotifier.server.DoMyLongAction();
    });
}
I'm using:
SignalR v2.1.0.0
.NET Framework v4.5
jQuery v1.8
Is this a known issue, or am I doing something wrong?
Edit
My application uses a jQuery progress bar, and I update it using this code:
server side:
Clients.Caller.sendMessage(msg, 5, "Accounts");
client side:
progressNotifier.client.sendMessage = function (message, value, Entity) {
    // update the jQuery UI progress bar with the value pushed from the hub
    pbar1.progressbar("value", value);
};
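For context, pbar1 is the jQuery UI progressbar created elsewhere on the page, roughly like this (the selector and options are illustrative):
// illustrative initialization of the progress bar used above
var pbar1 = $("#progressbar").progressbar({ value: 0, max: 100 });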
It works on Firefox, so I thought it was a SignalR issue. Now I'm confused: if SignalR is working as expected, then what causes this problem?
You can try using EventSource (SSE).
I am using this:
https://github.com/remy/polyfills/blob/master/EventSource.js
but modified, for SignalR:
http://a7.org/scripts/jquery/eventsource_edited.js
I have been working with it for a year; SignalR just checks for window.EventSource, and it works.
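As a rough sketch of the idea (assuming the edited polyfill above is loaded before the SignalR client script, and reusing the hub call from the question):
// once window.EventSource exists (natively or via the polyfill),
// SignalR's transport check will let IE use serverSentEvents
if (window.EventSource) {
    $.connection.hub.start({ transport: 'serverSentEvents' }).done(function () {
        progressNotifier.server.DoMyLongAction();
    });
}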
The solution you found online is not likely to help your issue.
I doubt your isIE() function is correctly identifying IE. If it were, SignalR should only be attempting to establish a "foreverFrame" connection, since IE does not even support "serverSentEvents". I would not expect IE to make any "/signalr/poll" requests, because those requests are only made by the "longPolling" transport.
Also, having a "pending" poll request in the IE F12 tool's network tab is entirely expected. This is how long polling is designed to work. Basically, as soon as a message is received the client makes a new ajax request (a long poll) to retrieve new messages. If no new messages are immediately available, the server will wait (for up to 110 seconds by default in the case of SignalR, not forever) for a new message to be sent to the client before responding to the pending long poll request with the new message.
Can you clarify exactly what issue you are having other than seeing a pending poll request showing up under the network tab? It would also help if you enabled tracing in the JS client, provided the console output, and showed all the "/signalr/..." requests in the network tab.
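Enabling client-side tracing is a one-liner before the connection is started; a minimal sketch with the 2.x jQuery client:
// write SignalR's transport negotiation and connection events to the browser console
$.connection.hub.logging = true;
$.connection.hub.start().done(function () {
    console.log('connected via ' + $.connection.hub.transport.name);
});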
Related
I have an HTML5 application that needs to send a disconnect ajax request when the user changes/refreshes the page. I am currently using this code:
window.addEventListener("beforeunload", function(event) {
    $.ajax({
        url: api_disconnect,
        data: { identifier: token },
        method: "GET"
    });
});
I don't need to process the response, or even ensure that the browser receives a response. My question is, can I rely on the server receiving the request?
And if not, how can I accomplish this? Currently I have the app send an "I'm alive!" request every 15 seconds (which already feels like too much). I want the server to know the second the user disconnects.
To clarify, I know that if the browser/computer crashes there's nothing I can do about that. That's what the heartbeat is for. I just mean in a normal use case, when the user closes/changes/refreshes the page.
You cannot 100% rely on the ajax call getting through. You can test many browsers and operating systems and determine which ones will usually get the ajax call sent before the page is torn down, but it is not guaranteed to do so by any specification.
The heartbeat like you are using is the most common work-around. It will also cover you for a loss of network connection, a power-down, computer sleep mode, or a browser crash, which the beforeunload handler will not.
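A minimal version of such a heartbeat (the endpoint variable, payload, and interval are placeholders):
// hypothetical heartbeat: tell the server every 15 seconds that this client is still alive
setInterval(function() {
    $.get(api_heartbeat, { identifier: token });
}, 15000);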
Another work-around I've seen discussed is to use a socket.io connection to the server. Since the socket.io connection has both a small, very efficient heartbeat and the server will see the socket get closed when the page is closed, you kind of get the best of both worlds since you will see an abnormal shut-down via the heartbeat and you will see a normal shut-down immediately via the webSocket connection getting closed.
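A sketch of the server side of that socket.io approach (the port and logging are illustrative):
// Node.js + socket.io: 'disconnect' fires as soon as the page is closed or the
// connection drops, so a normal shut-down is seen immediately and an abnormal
// one is caught by socket.io's own heartbeat
var io = require('socket.io')(3000);
io.on('connection', function(socket) {
    socket.on('disconnect', function(reason) {
        console.log('client disconnected:', reason);
    });
});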
Hi, I'm a new user of Atmosphere, and I set up a simple test that worked fine. We used long-polling, and the behavior was that my client would send the server a GET that would stay open until:
data was returned by the server
a minute elapsed
in both cases, the client would immediately send another GET for the server to hold open. Most of the time no data was sent, so every minute the GET would be "refreshed." I assumed this was the default behavior because maybe certain browsers or networks would shut off a GET that exceeded a certain time limit, so this was a way to avoid that.
Question:
Is this refresh controlled by the client or the server? I poked around and couldn't figure out whether the client was closing the connection on its own and sending a new request, or whether it was the server.
The reason I ask is that the server got deployed, and now that refresh is no longer occurring. My client GET now stays open for the full 5-minute (default) timeout, then throws the timeout event, then reconnects for another 5 minutes.
Server team claims "nothing changed," ha-ha. So did I do something or what? Please let me know! Thank you!
request object:
var request = {
    url: 'xyz',
    transport: 'long-polling',
    reconnectInterval: 5000,
    maxReconnectOnClose: 20,
    enableXDR: true
};
Edit: the atmosphere server was changed from 2.1.3 (working) to 2.0.7 (not working) when the deploy occurred. When changed back, the 1 minute refresh behavior re-appeared. The problem is that 2.1.3 is not compatible with the server they are using, thus the down-grade.
Question: what is this feature called? Is it the heartbeat or something else? Can someone tell me what change would cause this? I've looked through the release notes and nothing jumped out at me.
When you open a site with Chrome, it shows a message in the status bar saying "Waiting for MyHostName", plus an Ajax loader circle in the caption of the tab. Now I have the following JavaScript function:
function listen_backend_client_requests() {
    // URL of the nginx HTTP push channel; the HTTP connection stays open
    // for a long time, until actual data starts to arrive
    $.get('/listen?cid=backend_client_requests', {}, function(r) {
        alert('check');
        if (r == 'report_request') {
            report_request();
        }
        // immediately issue the next long poll
        listen_backend_client_requests();
    }, 'json');
}
The "$.get(...)" operation is "long polling"(via nginx http push module). It doesn't receive data instantly but waits until the data is published to a channel. And during all this time (may take up to 15 minutes) Chrome shows 'waiting for My host name' in the lower left part of the window and also shows Ajax Loader circle. I dont want them to be shown not in Chrome but neither in any other browser and I have no idea how to do that...
P.S.
By the way, I know that Google Docs uses the same scheme, but somehow their site causes the browser not to show the message. Any suggestions?
Have you tried setting window.status? Although I'm not sure I would recommend it, it probably would do what you want. Just be sure to reset the status when appropriate.
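For what it's worth, that would look something like the line below; note that many modern browsers ignore script changes to window.status, so treat this as a long shot:
// try to overwrite the status bar text while the long poll is pending;
// remember to restore it once the request completes
window.status = 'Ready';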
I've found a solution to my problem, in contrast to the following posts:
How do I implement basic "Long Polling"?
Sending messages to server with Comet long-polling
Browsers entering "busy" state on Ajax request
My problem was that I was starting the long-poll ajax request before the page had actually finished loading, and that kept the browser stuck in the "waiting for" state...
Just start your long-polling process after the page has completely loaded...
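Something along these lines, reusing the function from the question:
// defer the first long poll until the window 'load' event so the page can finish
// loading and the "Waiting for..." indicator and tab spinner settle down
$(window).on('load', function() {
    listen_backend_client_requests();
});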
I have an application which uses an open jQuery Ajax connection to do long-polling/comet handling of updates.
Sometimes the browser and the server lose this connection (server crashes, network problems, etc, etc).
I would like the client to detect that the update has crashed and inform the user to refresh the page.
It originally seemed that I had two options:
handle the 'error' condition in the jQuery ajax call
handle the 'complete' condition in the jQuery ajax call
On testing, however, it seems that neither of these conditions is triggered when the server aborts the query.
How can I get my client to understand that the server has gone away?
Isn't it possible to add a setInterval() function that runs every few seconds or minutes? That way you can trigger a script that checks whether the server is still up and, if not, reset the comet connection. (I don't know what exactly you use for the long polling, so I don't know if it's possible to reset that connection without a page reload. If not, you can still display a message to the user.)
So something like this:
var check_server = setInterval(function() {
    // run server-check script...
    // if (offline) { reset the connection or notify the user }
}, 60000);
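A slightly more concrete sketch, assuming a hypothetical /ping endpoint on your server that responds quickly while it is up:
// poll a cheap, hypothetical /ping endpoint once a minute; if it stops
// answering, assume the server is gone and tell the user to refresh
var check_server = setInterval(function() {
    $.ajax({
        url: '/ping',
        timeout: 5000,
        error: function() {
            clearInterval(check_server);
            alert('Connection to the server was lost. Please refresh the page.');
        }
    });
}, 60000);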
I have a web application and use ajax to call back to my web server to fetch data.
Sometimes (at rather unpredictable moments, but it can be reproduced) IE hangs completely for 5 minutes (the window says Not Responding) and then comes back, and the XMLHttpRequest object responds with error 12002.
The way I can reproduce it is as follows:
Open window (B) from the main window (A) using a button.
Window A calls synchronous ajax (PROC1) when the button is clicked to open window B. PROC1 runs fine.
The new window (B) has ajax code (PROC2) and calls the server asynchronously. Runs fine.
The user closes window B after PROC2 completed but before data is returned.
In the main window (A) the user clicks the button again. PROC1 runs again, but now the send() call blocks for 5 minutes.
Please help. I've been looking for 3 days.
Please note:
* I can't test it in Firefox (the app is not Firefox compatible)
* I have to use synchronous calls (that's the way the app is constructed, and it would take too much developer effort to rewrite it)
Why does this happen and how to I fix this?
You're right Jaap, this is related to Internet Explorer's connection limit of 2. For some reason, IE doesn't release connections to AJAX requests performed in closed windows.
I have a very similar situation, only slightly simpler:
User clicks in Window A to open Window B
Window B performs an Ajax call that takes a while
Before the Ajax call returns, user closes Window B. The connection to this call "leaks".
Repeat 1 more time until both available connections are "leaked"
Browser becomes unresponsive
One technique you can try (mentioned in the article you found) that does seem to work is to abort the XmlHttp request in the unload event of the page.
So something like:
var xhr = null; // keep a reference to the in-flight XMLHttpRequest

function unloadPage() {
    // abort any pending request so IE releases the connection
    if (xhr !== null) {
        xhr.abort();
    }
}

window.onunload = unloadPage;
Another option is to use synchronous AJAX calls, which will block until the call returns, essentially locking the browser. This may or may not be acceptable given your particular situation.
// the 3rd param is whether the call is asynchronous
xhr.open( 'get', 'url', false );
Finally, as mentioned elsewhere, you can adjust the maximum number of connections IE uses in the registry. Expecting visitors to your site to do this isn't realistic, however, and it won't actually solve the problem, just delay it from happening. As a side note, IE8 is going to allow 6 concurrent connections.
Thanks for answering Martijn.
It didn't solve my issues. I think what I'm seeing is best described on this website:
http://bytes.com/groups/javascript/643080-ajax-crashes-ie-close-window
In my situation I have an unstable connection or a slow web server, and when the connection is too slow while the browser and the web server still hold a connection open, the browser freezes.
By default, Internet Explorer only allows two concurrent connections to the same website for download purposes. If you try to fire up more than this, IE stalls until one of the previous requests finishes, at which point the next request will complete. I believe (although I could be wrong) this was put in place to prevent overloading websites with many concurrent downloads at a time. There is a registry hack to circumvent this limit.
I found these instructions kicking around the internet which alleviated my problems - I can't promise it will work for your situation, but the multi-connection limit you're facing appears related:
1. Click on the Start button and select Run.
2. On the Run line, type Regedt32.exe and hit Enter. This will launch the Registry Editor.
3. Locate the following key in the registry: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
4. Click on the Internet Settings key.
5. Go to the Edit menu and point to New.
6. Click DWORD Value.
7. Type MaxConnectionsPer1_0Server as the name of this DWORD Value.
8. Double-click the MaxConnectionsPer1_0Server value you just created and enter the following information: Value data: 10. Base: Decimal.
9. When finished, press OK.
10. Repeat steps 4 through 9, this time naming the value MaxConnectionsPerServer and assigning it the same values as in step 8.
11. When finished, press OK.
12. Close the Registry Editor.
Of course, I would use these in conjunction with the abort() call previously mentioned. In tandem, they should fix the issue.
IE5 and IE6, indeed, do hang when attempting to receive data from a PHP script. The reason is that these browsers cannot decide when all of the data has been received and the connection can be closed, so they wait until the connection expires (hence the 5 or 10 minute hang). A way to solve this is to tell the browser how much data it will receive. In PHP you can do that using output buffering, for example as follows:
ob_start();                                   // buffer the output so its length can be measured
echo $html_content;
header( 'Connection: close' );
header( 'Content-Length: '.ob_get_length() ); // tell the browser exactly how much data to expect
ob_end_flush();                               // send the buffered content
flush();                                      // push it out to the client
This is the solution when one is just loading a normal web page. When one is doing an AJAX GET via the Microsoft.XMLHTTP object, it is enough to send the "Connection: close" header with the GET request, like this:
r.request.open( "GET", url, true );
r.request.setRequestHeader( "Connection", "close" );
r.request.send();
Winsock error 12002 means the following, according to MSDN:
ERROR_INTERNET_TIMEOUT
12002
The request has timed out.
Winsock is the underlying socket layer for XMLHTTP in IE, so any error that's not in the HTTP error range (300, 400, 500, etc.) is almost always a Winsock error.
What wasn't clear from your question is whether the same resource is being queried the second time round. You could force a new, uncached resource by appending something like
'?uid=' + Math.random()
to the URL, which might solve the issue.
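For example (the resource URL is a placeholder):
// append a throwaway query-string value so IE cannot reuse a cached or stuck response
var xhr = new XMLHttpRequest();
xhr.open('GET', '/myresource?uid=' + Math.random(), true);
xhr.send();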
Another solution might be to attach a function to the "onbeforeunload" event on the window object to call abort() on any active XMLHTTP request just before window B is closed.
Hope these two pointers solve your bug.
All these posts about disabling the PDF reader and similar tweaks will not resolve your problem. The sure shot is to run Windows Update and keep up to date; this issue gets resolved by itself.
Experience speaks ;)
HydTechie