Server-Sent Event connections are not closed despite calling `close()` - javascript

I want to programmatically close the Server-Sent Events connection once a user logs out. However, when the user logs back in, the browser no longer executes any HTTP requests because it has reached its limit of SSE connections.
I'm using EventSource for listening to events.
This is how I close my connections:
var eventSource;

function onChange(accountId: string, callback) {
  var url = "...";
  eventSource = new EventSource(url);
  if (eventSource) {
    eventSource.addEventListener("put", callback);
  }
}

function close() {
  this.eventSource.close();
}
When I observe the network connections in the browser, I can see that the connection still exists. The Timing tab shows "Caution: request is not finished yet!", and subsequent event streams are stalled due to the limited number of connections.
I'm not sure whether EventSource is designed to behave like this, but I could not find anything about this issue, since most people don't run into this scenario.
Every time I reload the page in my browser (Chrome), all existing connections are closed, but I don't want to have to reload the page to work around this issue.

Make sure that this.eventSource is referring to what you think it is (see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this). If close() is called from an event handler, this won't be the global scope.
After calling xxx.close(), set xxx to null. (I doubt that is the problem, but it might help garbage collection, and it also helps find bugs.)
As a third idea, avoid giving globals the same name as built-in objects. I normally use es for my EventSource objects.
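Putting those three suggestions together, a minimal sketch could look like this (the "..." URL and the "put" event name are carried over from the question; everything else is illustrative):
var es = null; // module-scoped reference, not named after the EventSource built-in
function onChange(accountId, callback) {
  var url = "..."; // same placeholder URL as in the question
  es = new EventSource(url);
  es.addEventListener("put", callback);
}
function closeStream() {
  if (es) {
    es.close(); // ask the browser to drop the connection
    es = null;  // release the reference so it can be garbage-collected
  }
}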

Related

oidc-client CheckSessionIFrame fires properly one time, then fails every interval thereafter

This may not actually be an issue with Identity Server or the oidc-client, but I am having trouble pinning down the problem. I am running this through System.js in an Aurelia application, so it's possible the issue originates from one of these external libraries.
In CheckSessionIFrame.start(session_state), we have the following code:
this._timer = window.setInterval(() => {
  this._frame.contentWindow.postMessage(this._client_id + " " + this._session_state, this._frame_origin);
}, this._interval);
The first time the interval fires, there appear to be no problems: the iframe's contentWindow exists (as expected) and the postMessage method is called without issue. Two seconds later, when the interval fires again, this._frame.contentWindow is undefined, so my best guess is that the iframe is dying somehow. Again, this may not be an issue with oidc-client, but I'm looking for any helpful guidance on what could cause this iframe to die (perhaps internally it is dying on a script?), such as a missing config value required by oidc-client.
For oidc-client to work with silent renew, you need to have your aurelia-app on an element that is not the body, so you can place elements within the body yet outside of your aurelia-app.
This allows you to put the IFrame outside of the aurelia-app, which prevents the Aurelia bootstrapper from eating it and lets oidc-client function independently of Aurelia.
EDIT
Based on your comment, and a little memory refreshing on my part, I rephrase/clarify:
The session checker and the silent renew functions work independently of each other. You can silent renew before the session checker has started with a manual call. You can also start the session checker without doing any silent renew. They are just convenient to use together, but that's their only relationship.
I'm assuming you use the hybrid flow and have the standard session checker implementation with an RP and OP iframe, where the OP iframe is in a check_session.html page and the RP iframe is somewhere in your aurelia app. In one of my projects I have the RP iframe in the index.html, outside of the aurelia-app element so it works independently of aurelia. But I guess it doesn't necessarily have to be there.
The session checker starts when you set the src property of the RP iframe to the location of your check_session.html with the session_state, check_session_iframe and client_id after the hash.
The check_session.html page will respond to that by starting the periodic polling and post a message back to the window of your aurelia app if the state has changed.
From your aurelia app, you listen for that message and call signinSilent() if it indicates a changed state. From the silent_renew.html page, you respond to that with signinSilentCallback().
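Purely for illustration, the wiring described above could look roughly like this (the element id, variable names, the message payload and the userManager singleton are assumptions about your own app, not part of oidc-client):
// Start the session checker by pointing the RP iframe at check_session.html
var rpFrame = document.getElementById("rpCheckSessionFrame"); // hypothetical id
rpFrame.src = "check_session.html#" +
  "session_state=" + encodeURIComponent(sessionState) +
  "&check_session_iframe=" + encodeURIComponent(checkSessionIframeUrl) +
  "&client_id=" + encodeURIComponent(clientId);
// In the aurelia app, react to the message posted back by check_session.html
window.addEventListener("message", function (e) {
  if (e.data === "changed") { // whatever payload your check_session.html posts
    userManager.signinSilent(); // pick up the new session silently
  }
});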
All that being in place, it really doesn't matter when you start the session checker. Tuck it away in a feature somewhere and load that feature last.
There are only two things you need to worry about during the startup of your application:
Check whether window.location.hash starts with #code, and call signinRedirectCallback() if it does.
If it does not, just call signinSilent() right away (that leaves you with the fewest things to check).
After either of those has completed, call getUser() and check whether it is null or whether its expired property is true. If either is the case, call signinRedirect(). If not, your user is authenticated and you can let the aurelia app do its thing, start the session checker, and so on.
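A rough sketch of that startup sequence, assuming userManager is your configured oidc-client UserManager singleton (the function name and error handling are illustrative, not a drop-in implementation):
async function bootstrapAuth(userManager) {
  if (window.location.hash.startsWith("#code")) {
    // Returning from the authority: complete the redirect sign-in
    await userManager.signinRedirectCallback();
  } else {
    try {
      // No redirect in progress: try a silent sign-in right away
      await userManager.signinSilent();
    } catch (e) {
      // Silent sign-in fails if there is no session yet; the check below handles that
    }
  }
  const user = await userManager.getUser();
  if (!user || user.expired) {
    // No valid session: start an interactive sign-in
    await userManager.signinRedirect();
    return;
  }
  // The user is authenticated: start the aurelia app and the session checker here
}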
I would definitely not put the initial authentication checks on your index.html within the aurelia-app. Because if aurelia happens to finish loading before the oidc checks are done, the process will fail. You also probably want to store the user object (and UserManager) in some cache/service/other type of singleton class so you can easily interact with oidc from your aurelia application.

How do I implement a Node.js server-side event listener for Firebase?

I'm trying to listen for data changes in my Firebase using Firebase's package for Node. I'm using the on() method, which is supposed to listen for changes non-stop (as opposed to the once() method, which only listens for the first occurrence of a specific event). My listener.js file on the server looks exactly like this:
var Firebase = require('firebase');
var Ref = new Firebase('https://mydatabase.firebaseio.com/users/');

Ref.on('child_changed', function(childsnapshot, prevchildname) {
  // push a reply under the child that just changed
  Ref.child(childsnapshot.key()).push("I hear you!");
});
But it only works for the first occurrence and throws a fatal memory error after the second occurrence:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
I'm very new to server-side programming and don't know what to do. I must be missing something important. Should I set up special server settings with Node first? Or maybe make a daemon that runs a script with the once() method every second or so?
I'm pretty sure you're creating an endless loop here:
You push a value to https://mydatabase.firebaseio.com/users/
the on('child_changed') event fires in your script
your script pushes a new child under the value
so we go back to step 2 and repeat
It will happen quite rapidly too, since Firebase clients fire local events straight away.
It looks like you're trying to create a chat bot, which means you more likely want to create sibling messages:
var Firebase = require('firebase');
var ref = new Firebase('https://mydatabase.firebaseio.com/users/');

ref.on('child_changed', function(childsnapshot, prevchildname) {
  // push the reply as a sibling message instead of under the changed child
  ref.push("I hear you!");
});
Note that it is pretty inefficient to use StackOverflow to debug code. Since you seem to be on Windows, I recommend installing Visual Studio and its node tools. They have a great debugger that allows you to step through the code. Setting a breakpoint in your callback (so in the line with ref.push), will quickly show you what is going wrong.

WebRTC: simultaneous renegotiation issue

Use case: three peers are in a video chat, each with the other two in the same room; the server sends a message and all three change mode to audio.
For now only Chrome supports renegotiation, so for Firefox I just close the connection and create a new peer connection. But after I check that both sides are Chrome and change mode:
If I am changing the mode of only one peer at a time, it works smoothly.
But when the message comes from the server, both peers try to renegotiate simultaneously and it doesn't work; I get something like wrong state: STATE_SENTINITIATE.
To handle that problem, I did a workaround: whenever a peer connection has to renegotiate, it checks whether it is the caller.
If yes, it goes ahead with the renegotiation.
Else (if it is the answerer), it changes the offered stream and signals the caller to renegotiate.
The above workaround works for a few renegotiations, but in some cases it throws an error when setting the local description on the answerer's side, claiming the state is wrong (either STATE_INPROGRESS or STATE_SENTACCEPT).
how do I resolve this issue?
Since renegotiation is a state-machine, having both sides initiate renegotiation at the same time can collide, and you end up with invalid-state errors. This is called glare.
Your workaround is one way to deal with glare, essentially using signaling to make sure renegotiation is always initiated from the same end (typically the offerer's side).
You say you're still seeing occasional invalid-state errors even with this workaround. Since renegotiation is a round-trip between the peers, there is a window of time during which, if you are also responding to signaling requests for new renegotiations, you could still get invalid-state errors by trying to renegotiate again too soon.
You can check the pc.signalingState attribute to know what state your peerConnection is in at any time. I would look at that when you receive incoming messages, to see if this is the problem. If it is, I would hold off on renegotiating until your connection is in the "stable" state again. You can use pc.onsignalingstatechange to react to state changes.
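A small sketch of that idea (renegotiate() stands in for whatever code creates and sends your new offer):
function renegotiateWhenStable(pc, renegotiate) {
  if (pc.signalingState === "stable") {
    renegotiate(); // safe to create and send a new offer right away
    return;
  }
  // Otherwise wait until the in-flight offer/answer exchange finishes
  pc.onsignalingstatechange = function () {
    if (pc.signalingState === "stable") {
      pc.onsignalingstatechange = null;
      renegotiate();
    }
  };
}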
Another solution I've heard of (but not tried) to solve glare would be to let peers renegotiate independently, and when they do glare, let the offerer always win. e.g. the answerer would cancel any attempts it was making on receiving an incoming offer (by somehow reverting itself back to its previous stable state), whereas the offerer would ignore any incoming offers during its own attempt.
By the way, renegotiation is supported in Firefox as well now (38+) so you could try it there as well to see if you get the same problems.

Firebase persistence - onDisconnect with multiple browser windows

We're writing an app that monitors online presence. There are multiple scenarios where we need the user to have more than one browser window open. We're running into a problem where the user, after opening and running the firebase js code in a secondary browser window, will close that secondary window. This sets their presence to offline in the primary window because the onDisconnect event fires in the secondary window.
Is there a workaround for this scenario? Is this where the special /.info/connected location could be used?
The .info/connected presence data only tells a given client if they are linked up to the Firebase server, so it won't help you in this case. You could try one of these:
Reset the variable if it goes offline
If you have multiple clients monitoring the same variable (multiple windows falls into this category), it's natural to expect them to conflict about presence values. Have each one monitor the variable for changes and correct it as necessary.
var ref = firebaseRef.child(MY_ID).child('status');
ref.onDisconnect().remove();
ref.on('value', function(ss) {
  if (ss.val() !== 'online') {
    // another window went offline, so mark me still online
    ss.ref().set('online');
  }
});
This is fast and easy to implement, but may be annoying since my status might flicker to offline and then back online for other clients. That could, of course, be solved by putting a short delay on any change event before it triggers a UI update.
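For example, the short-delay idea could look something like this (the two-second window and updateUi() are placeholders for your own UI code):
var offlineTimer = null;
ref.on('value', function(ss) {
  if (ss.val() !== 'online') {
    // wait briefly before showing "offline"; another window may flip it right back
    offlineTimer = setTimeout(function() { updateUi('offline'); }, 2000);
  } else {
    clearTimeout(offlineTimer);
    updateUi('online');
  }
});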
Give each connection its own path
Since the purpose of the onDisconnect is to tell you if a particular client goes offline, then we should naturally give each client its own connection:
var ref = firebaseRef.child(MY_ID).child('status').push(1);
ref.onDisconnect().remove();
With this use case, each client that connects adds a new child under status. If the path is not null, then there is at least one client connected.
It's actually fairly simple to check for online presence by just looking for a truthy value at status:
firebaseRef.child(USER_ID).child('status').on('value', function(ss) {
  var isOnline = ss.val() !== null;
});
A downside of this is that you really only have a truthy value for status (you can't set it to something like "idle" vs. "online"), although this too could be worked around with a little ingenuity.
Transactions don't work in this case
My first thought was to try a transaction, so you could increment/decrement a counter when each window is opened/closed, but there doesn't seem to be a transaction method off the onDisconnect event. So that's how I came up with the multi-path presence value.
Another simple solution (using angularfire2, but the approach with pure Firebase is similar):
const objRef = this.af.database.list('/users/' + user_id);
const elRef = objRef.push(1);
elRef.onDisconnect().remove();
Each time a new tab is opened, a new element is added to the list. The onDisconnect handler is registered on that new element, so only that element is removed.
The user is offline when this list is empty.
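In plain Firebase, the "offline when the list is empty" check could be as simple as this sketch (the path mirrors the snippet above):
firebase.database().ref('/users/' + user_id).on('value', function(snap) {
  // false once every tab's onDisconnect handler has fired
  var isOnline = snap.exists();
});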

IE hangs for 5 minutes when calling a synchronous XMLHttpRequest

I have a web application and use ajax to call back to my webserver to fetch data.
Sometimes (at rather unpredictable moments, but it can be reproduced) IE hangs completely for 5 minutes (the window says Not Responding) and then comes back, and the XMLHttpRequest object responds with error 12002.
The way I can reproduce it is as follows.
Open window B from main window A using a button.
Window A calls a synchronous AJAX request (PROC1) when the button is clicked to open window B. PROC1 runs fine.
The new window B has AJAX code (PROC2) and calls the server asynchronously. Runs fine.
The user closes window B after PROC2 has run but before the data is returned.
In main window A the user clicks the button again. PROC1 runs again, but now the send() call blocks for 5 minutes.
Please help. I've been looking for 3 days.
Please note:
* I can't test it in Firefox (the app is not Firefox-compatible)
* I have to use synchronous calls (that's the way the app is constructed and it would take too much developer effort to rewrite it)
Why does this happen and how do I fix it?
You're right Jaap, this is related to Internet Explorer's connection limit of 2. For some reason, IE doesn't release connections to AJAX requests performed in closed windows.
I have a very similar situation, only slightly simpler:
User clicks in Window A to open Window B
Window B performs an Ajax call that takes a while
Before the Ajax call returns, user closes Window B. The connection to this call "leaks".
Repeat 1 more time until both available connections are "leaked"
Browser becomes unresponsive
One technique you can try (mentioned in the article you found) that does seem to work is to abort the XmlHttp request in the unload event of the page.
So something like:
var xhr = null;

function unloadPage() {
  if (xhr !== null) {
    xhr.abort();
  }
}

// abort any in-flight request when the page is unloaded
window.onunload = unloadPage;
Another option is to use synchronous AJAX calls, which will block until the call returns, essentially locking the browser. This may or may not be acceptable given your particular situation.
// the 3rd param is whether the call is asynchronous
xhr.open( 'get', 'url', false );
Finally, as mentioned elsewhere, you can adjust the maximum number of connections IE uses in the registry. Expecting visitors to your site to do this however isn't realistic, and it won't actually solve the problem -- just delay it from happening. As a side-note, IE8 is going to allow 6 concurrent connections.
Thanks for answering Martijn.
It didn't solve my issues. I think what I'm seeing is best described on this website:
http://bytes.com/groups/javascript/643080-ajax-crashes-ie-close-window
In my situation I have an unstable connection or a slow webserver; when the connection is too slow while the browser and the webserver still hold a connection open, the browser freezes.
By default, Internet Explorer only allows two concurrent connections to the same website for download purposes. If you try to fire up more than this, IE stalls until one of the previous requests finishes, at which point the next request will complete. I believe (although I could be wrong) this was put in place to prevent overloading websites with many concurrent downloads at a time. There is a registry hack to circumvent this lock.
I found these instructions kicking around the internet which alleviated my problems - I can't promise it will work for your situation, but the multi-connection limit you're facing appears related:
1. Click on the Start button and select Run.
2. On the Run line, type Regedt32.exe and hit Enter. This will launch the Registry Editor.
3. Locate the following key in the registry:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
4. Click on the Internet Settings key.
5. Go to the Edit menu and point to New.
6. Click DWORD Value.
7. Type MaxConnectionsPer1_0Server for the name of this DWORD value.
8. Double-click on the MaxConnectionsPer1_0Server value you just created and enter the following: Value data: 10, Base: Decimal.
9. When finished, press OK.
10. Repeat steps 4 through 9, this time naming the value MaxConnectionsPerServer and assigning it the same values as in step 8.
11. When finished, press OK.
12. Close the Registry Editor.
Of course, I would use these in conjunction with the abort() call previously mentioned. In tandem, they should fix the issue.
IE5 and IE6 do indeed hang when attempting to receive data from a PHP script. The reason is that these browsers cannot decide when all of the data has been received and the connection can be closed, so they wait until the connection expires (hence the 5- or 10-minute hang). A way to solve this is to tell the browser how much data it will receive. In PHP you can do that using output buffering, for example as follows:
ob_start();                                   // buffer the output so its length can be measured
echo $html_content;
header( 'Connection: close' );
header( 'Content-Length: '.ob_get_length() ); // tell the browser exactly how much data to expect
flush();
ob_end_flush();
This is a solution when one is just loading a normal web page. When using AJAX GET via the Microsoft.XMLHTTP object, it is enough to send the "Connection: close" header with the GET request, like:
r.request.open( "GET", url, true );
r.request.setRequestHeader( "Connection", "close" );
r.request.send();
Winsock Error 12002 means the following, according to MSDN:
ERROR_INTERNET_TIMEOUT
12002
The request has timed out.
Winsock is the underlying socket transfer object for XMLHTTP in IE, so any error that's not in the HTTP error range (300, 400, 500, etc.) is almost always a Winsock error.
What wasn't clear from your question is whether the same resource is being queried the second time round. You could force a fresh, uncached resource by appending:
'?uid=' + Math.random()
to the URL, which might solve the issue.
Another solution might be to attach a function to the "onbeforeunload" event on the window object to call abort() on any active XMLHTTP request just before window B is closed.
Hope these two pointers solve your bug.
All those posts about disabling the PDF reader and that sort of thing will not resolve your problem.
The sure shot is to run Windows Update and keep the machine up to date; this issue then resolves itself.
Experience speaks ;)
HydTechie
