This may not actually be an issue with Identity Server or the oidc-client, but I am having trouble pinning down the problem. I am running this through System.js in an Aurelia application, so it's possible the issue originates from one of these external libraries.
In CheckSessionIFrame.start(session_state), we have the following code:
this._timer = window.setInterval(() => {
    this._frame.contentWindow.postMessage(this._client_id + " " + this._session_state, this._frame_origin);
}, this._interval);
The first time the interval fires, there appear to be no problems: the iframe's contentWindow exists (as expected) and the postMessage method is called without issue. Two seconds later, when the interval fires again, this._frame.contentWindow is undefined - so my best guess is that the iframe is dying somehow. Again, this may not be an issue with oidc-client, but I'm looking for any guidance on what could cause this iframe to die (perhaps it's failing internally on a script?), such as a missing required oidc-client config value.
For oidc-client to work with silent renew, you need to have your aurelia-app on an element that is not the body, so you can place elements within the body yet outside of your aurelia-app.
This allows you to put the IFrame outside of the aurelia-app, which prevents the Aurelia bootstrapper from eating it and lets oidc-client function independently of Aurelia.
EDIT
Based on your comment, and after refreshing my memory a little, let me rephrase and clarify:
The session checker and the silent renew functions work independently of each other. You can silent renew before the session checker has started with a manual call. You can also start the session checker without doing any silent renew. They are just convenient to use together, but that's their only relationship.
I'm assuming you use the hybrid flow and have the standard session checker implementation with an RP and OP iframe, where the OP iframe is in a check_session.html page and the RP iframe is somewhere in your aurelia app. In one of my projects I have the RP iframe in the index.html, outside of the aurelia-app element so it works independently of aurelia. But I guess it doesn't necessarily have to be there.
The session checker starts when you set the src property of the RP iframe to the location of your check_session.html with the session_state, check_session_iframe and client_id after the hash.
The check_session.html page will respond to that by starting the periodic polling and post a message back to the window of your aurelia app if the state has changed.
From your aurelia app, you listen for that message and call signinSilent() if it indicates a changed state. And from the silent_renew.html page, you respond to that with signinSilentCallback().
All that being in place, it really doesn't matter when you start the session checker. Tuck it away in a feature somewhere and load that feature last.
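Putting that flow together, the RP-side wiring might look roughly like the sketch below. This is only an illustration: the element id, the hash format and the 'changed' message value are assumptions that depend on how your own check_session.html is written, and userManager stands for your configured oidc-client UserManager instance.

// start the session checker by pointing the RP iframe at check_session.html
// (rpFrame, sessionState, checkSessionIframeUrl and clientId are illustrative names)
var rpFrame = document.getElementById('rp-frame');
rpFrame.src = '/check_session.html#' + sessionState + '&' +
    encodeURIComponent(checkSessionIframeUrl) + '&' + clientId;

// listen for the message that check_session.html posts back when the state changes
window.addEventListener('message', function (e) {
    if (e.origin === window.location.origin && e.data === 'changed') {
        userManager.signinSilent()            // silent renew via the hidden iframe
            .catch(function () {
                userManager.signinRedirect(); // fall back to an interactive sign-in
            });
    }
});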
The only two things you need to worry about during the startup of your application are:
Check whether window.location.hash starts with #code and call signinRedirectCallback() if it does
If it does not, just call signinSilent() right away (that leaves you with the least amount of things to check)
And then, after either of those has been done, do getUser() and check whether it's null or whether its expired property === true. If either is the case, do the signinRedirect(). If not, your user is authenticated and you can let the aurelia app do its thing, start the session checker, etc. (a sketch of this startup flow follows below).
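As a rough sketch of that startup flow, assuming a configured oidc-client UserManager instance named userManager (this illustrates the steps above and is not a drop-in implementation):

async function checkAuthentication() {
    if (window.location.hash.indexOf('#code') === 0) {
        // returning from a redirect sign-in: process the response
        await userManager.signinRedirectCallback();
    } else {
        // otherwise try a silent sign-in right away
        try {
            await userManager.signinSilent();
        } catch (e) {
            // silent sign-in can fail if there is no session yet; handled below
        }
    }

    const user = await userManager.getUser();
    if (!user || user.expired) {
        // not authenticated: go through the interactive sign-in
        await userManager.signinRedirect();
        return;
    }
    // authenticated: let the aurelia app start, session checker included
}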
I would definitely not put the initial authentication checks in your index.html within the aurelia-app, because if Aurelia happens to finish loading before the oidc checks are done, the process will fail. You also probably want to store the user object (and the UserManager) in some cache/service/other type of singleton class so you can easily interact with oidc from your aurelia application.
For some unknown reason, my browser opens test pages on my remote server very slowly. So I am wondering whether I can reconnect to the browser after quitting the script but without executing webdriver.quit(), which would leave the browser open. It is probably some kind of hook or webdriver handle.
I have looked up the selenium API doc but didn't find any function.
I'm using Chrome 62, x64, Windows 7, Selenium 3.8.0.
I'd appreciate any help, whether or not the question can be solved.
No, you can't reconnect to the previous Web Browsing Session after you quit the script. Even if you are able to extract the Session ID, Cookies and other session attributes from the previous Browsing Context, you still won't be able to pass those attributes back to the WebDriver as a hook.
A cleaner way would be to call webdriver.quit() and then spawn a new Browsing Context.
Deep Dive
There have been a lot of discussions and attempts around reconnecting WebDriver to an existing running Browsing Context. In the discussion Allow webdriver to attach to a running browser, Simon Stewart (the creator of WebDriver) clearly mentioned:
Reconnecting to an existing Browsing Context is a browser-specific feature, hence it can't be implemented in a generic way.
With internet-explorer, it's possible to iterate over the open windows in the OS and find the right IE process to attach to.
firefox and google-chrome need to be started in a specific mode and configuration, which effectively means that just attaching to a running instance isn't technically possible.
tl; dr
webdriver.firefox.useExisting not implemented
Yes, that's actually quite easy to do.
A selenium <-> webdriver session is represented by a connection url and a session_id; you just reconnect to an existing one.
Disclaimer - the approach uses selenium internal properties ("private", in a way), which may change in new releases; you'd better not use it in production code, and it's better not to use it against a remote SE (your own hub, or a provider like BrowserStack/Sauce Labs), because of a caveat/resource-drainage issue explained at the end.
When a webdriver instance is initiated, you need to get the aforementioned properties; sample:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
# now Google is opened, the browser is fully functional; print the two properties
# command_executor._url (it's "private", not for a direct usage), and session_id
print(f'driver.command_executor._url: {driver.command_executor._url}')
print(f'driver.session_id: {driver.session_id}')
With those two properties now known, another instance can connect; the "trick" is to initiate a Remote driver, and provide the _url above - thus it will connect to that running selenium process:
driver2 = webdriver.Remote(command_executor=the_known_url)
# when the started selenium is a local one, the url is in the form 'http://127.0.0.1:62526'
When that is run, you'll see a new browser window being opened.
That's because upon initiating the driver, the selenium library automatically starts a new session for it - and now you have 1 webdriver process with 2 sessions (browser instances).
If you navigate to a url, you'll see it is executed on that new browser instance, not the one left over from the previous start - which is not the desired behavior.
At this point, two things need to be done - a) close the current SE session ("the new one"), and b) switch this instance to the previous session:
if driver2.session_id != the_known_session_id:   # this is pretty much guaranteed to be the case
    driver2.close()   # this closes the session's window - it is currently the only one, thus the session itself will be auto-killed, yet:
    driver2.quit()    # for remote connections (like ours), this deletes the session, but does not stop the SE server

# take the session that's already running
driver2.session_id = the_known_session_id

# do something with the now hijacked session:
driver2.get('https://www.bing.com/')
And, that is it - you're now connected to the previous/already existing session, with all its properties (cookies, LocalStorage, etc).
By the way, you do not have to provide desired_capabilities when initiating the new remote driver - those are stored and inherited from the existing session you took over.
Caveat - having a SE process running can lead to some resource drainage in the system.
Whenever one is started and then not closed - like in the first piece of the code - it will stay there until you manually kill it. By this I mean - in Windows for example - you'll see a "chromedriver.exe" process, that you have to terminate manually once you are done with it. It cannot be closed by a driver that has connected to it as to a remote selenium process.
The reason - whenever you initiate a local browser instance, and then call its quit() method, it has 2 parts in it - the first one is to delete the session from the Selenium instance (what's done in the second code piece up there), and the other is to stop the local service (the chrome/geckodriver) - which generally works ok.
The thing is, for Remote sessions the second piece is missing - your local machine cannot control a remote process, that's the work of that remote's hub. So that 2nd part is literally a pass python statement - a no-op.
If you start too many selenium services on a remote hub and don't have control over it, that will lead to resource drainage on that server. Cloud providers like BrowserStack take measures against this - they close sessions that have had no activity in the last 60 seconds, etc. - yet this is still something you don't want to do.
And as for local SE services - just don't forget to occasionally clean up the OS from orphaned selenium drivers you forgot about :)
OK, after mixing various solutions shared on here and tweaking, I have this working now as below. The script will use a previously-left-open Chrome window if present - the remote connection is perfectly able to kill the browser if needed and the code functions just fine.
I would love a way to automate getting the session_id and url of the previous active session without having to write them out to a file during the previous session for pick-up...
This is my first post on here so apologies for breaking any norms
from selenium import webdriver
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

#Set manually - read/write from a file for automation
session_id = "e0137cd71ab49b111f0151c756625d31"
executor_url = "http://localhost:50491"

def attach_to_session(executor_url, session_id):
    original_execute = WebDriver.execute

    def new_command_execute(self, command, params=None):
        if command == "newSession":
            # Mock the response
            return {'success': 0, 'value': None, 'sessionId': session_id}
        else:
            return original_execute(self, command, params)

    # Patch the function before creating the driver object
    WebDriver.execute = new_command_execute
    driver = webdriver.Remote(command_executor=executor_url, desired_capabilities={})
    driver.session_id = session_id
    # Replace the patched function with original function
    WebDriver.execute = original_execute
    return driver

remote_session = 0

#Try to connect to the last opened session - if failing open new window
try:
    driver = attach_to_session(executor_url, session_id)
    driver.current_url  # raises if the stored session is no longer alive
    print(" Driver has an active window we have connected to it and running here now : ")
    print(" Chrome session ID ", session_id)
    print(" executor_url", executor_url)
except:
    print("No Driver window open - make a new one")
    # myoptions is assumed to be a ChromeOptions instance defined elsewhere in the script
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=myoptions)
    session_id = driver.session_id
    executor_url = driver.command_executor._url
Without getting into why you think that leaving an open browser window will solve the slowness problem: you don't really need a handle to do that. Just keep running the tests without closing the session or, in other words, without calling driver.quit(), as you have mentioned yourself. The question here, though, is whether you use a framework that comes with its own runner, like Cucumber?
In any case, you must have some "setup" and "cleanup" code. So what you need to do is to ensure during the "cleanup" phase that the browser is back to its initial state. That means:
Blank page is displayed
Cookies are erased for the session
Situation:
On mywebsite.com/game, I registered a service-worker with
navigator.serviceWorker.register('/service-worker.js', {scope: "/"});
On my server, '/service-worker.js' has a maxAge of 1d.
Problem:
service-worker.js has a major bug. It always displays an empty page and can't fetch anything. service-worker.js must be changed.
The problem is whenever a user goes to mywebsite.com/game, it displays the empty page and does nothing more. I am unable to make the client fetch the new service-worker.js.
How can I make the client fetch the new service-worker.js?
What you're describing—a check for updates to /service-worker.js—happens by default, automatically, under the circumstances laid out in this article:
An update is triggered if any of the following happens:
A navigation to an in-scope page.
Functional events such as push and sync, unless there's been an update check within the previous 24 hours.
Calling .register() only if the service worker URL has changed. However, you should avoid changing the worker URL.
All modern web browsers will ignore any Cache-Control headers you set on /service-worker.js by default and go directly against the web server to obtain the latest copy.
This Stack Overflow answer has some best practices for what the revised service-worker.js file should contain if you want it to behave like a "kill switch."
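For reference, such a "kill switch" worker commonly looks something like the sketch below (an illustration of the pattern, not the exact code from that answer): it takes over immediately, clears the caches, unregisters itself and reloads the open pages.

self.addEventListener('install', () => self.skipWaiting());

self.addEventListener('activate', (event) => {
    event.waitUntil((async () => {
        // drop everything the broken worker may have cached
        const keys = await caches.keys();
        await Promise.all(keys.map((key) => caches.delete(key)));
        // remove this registration and reload every open page it controls
        await self.registration.unregister();
        const clients = await self.clients.matchAll({ type: 'window' });
        clients.forEach((client) => client.navigate(client.url));
    })());
});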
Just add ?v=1 to your script like this.
navigator.serviceWorker.register('/service-worker.js?v=1', {scope: "/"});
And increment the version number whenever you change the service worker script.
I am trying to automate a couple of pages using Selenium WebDriver and Node.js. I was able to log in, but after login I want to use the same session initiated by the web driver so that I can run automated tests on the session-protected page. This is my attempt:
const { Builder, By } = require('selenium-webdriver');

async function login() {
    let d = await new Builder()
        .forBrowser('chrome')
        .build();
    await d.get('https://demo.textdomain.com/');
    await d.findElement(By.id('username')).sendKeys('admin ');
    await d.findElement(By.id('password')).sendKeys('admin');
    await d.findElement(By.css('button[type="submit"]')).click();
    d.getPageSource().then(function(content) {
        if (content.indexOf('Welcome text') !== -1) {
            console.log('Test passed');
            console.log('landing page');
            d.get('https://demo.textdomain.com/landingpage') // this still goes to the login page as I cannot use the previous session
        } else {
            console.log('Test failed');
            return false;
        }
        //driver.quit();
    });
}
login();
Am I accidentally discarding the browser after login?
From a similar question on SQA StackExchange, you can store and restore the current session's cookies:
Using Javascript:
// Storing cookies:
driver.manage().getCookies().then(function (cookies) {
    allCookies = cookies;
});

// Restoring cookies (getCookies() resolves with an array of cookie objects,
// and addCookie() expects one cookie object at a time):
allCookies.forEach(function (cookie) {
    driver.manage().addCookie(cookie);
});
You might just be dealing with timing issues. Selenium moves very fast - way faster than you can interact as a user - so it often acts in what seems like unpredictable ways. To work around this, you should make good use of Selenium's built-in driver.wait. For example:
const button = driver.wait(
    until.elementLocated(By.id('my-button')),
    20000
);
button.click();
The above waits until the button with id my-button is present in the DOM, and then clicks it. It will wait for a maximum of 20000 milliseconds, but will finish as soon as the button becomes available.
So in your case, if there is something that becomes available after the user is successfully logged in, you could wait on that element before going to the new page in your code.
As an aside, I'm also not so sure why you are using getPageSource()? That seems like a very heavy-handed way to get what you are looking for. Isn't that content inside an element you could get the contents of?
I wrote an article about How to write reliable browser tests using Selenium and Node.js, which explains the code example above in more detail, along with other techniques you can use to wait reliably for a variety of conditions in the browser.
I believe your problem is not properly waiting for the login to complete.
Selenium doesn't wait for asynchronous actions to be done; it moves on to the next line. So when you ask for the page source, there is a good chance the login action hasn't completed on the server and the result is not what you expect it to be.
You have to explicitly tell Selenium to wait, so add some code between the login and the check for whether the user is logged in - to test this assumption, a 10-second timeout will do.
If this works for you, you won't want to just waste time, so instead wait for specific elements on the page that change as a result of the login - for example, wait for the presence (or visibility, if it is already in the DOM) of the user photo in the header.
Also, I'm not sure how getPageSource() behaves - it may use the existing page, or it may ask for a fresh copy.
I would advise you to use other ways to test if the user is logged in, by inspecting the DOM.
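For example, inside the async login() function from the question, you could wait for an element that only appears after a successful login before navigating on (a sketch; 'user-photo' is an illustrative id, and until comes from selenium-webdriver):

const { until, By } = require('selenium-webdriver');

// wait up to 10 seconds for an element that only exists once the user is logged in
await d.wait(until.elementLocated(By.id('user-photo')), 10000);
// by now the session cookie is set, so this navigation stays on the logged-in session
await d.get('https://demo.textdomain.com/landingpage');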
I suggest reusing the session cookie after the first login in other WebDriver instances.
First store the cookie:
var cookieValue = firstWebDriver.Manage().Cookies.GetCookieNamed(name: "cookie_name").Value;
Then you can pass it to any other WebDriver instance, set it, and drive the web app as if it were the same user in a different browser instance:
anotherWebDriver.Manage().Cookies.AddCookie(new Cookie(name:"cookie_name", value:cookieValue));
If you want to use the same browser instance, you have to synchronize them, because WebDriver invocations are in general not thread-safe and would probably lead to exceptions (e.g. a stale element because an element was changed, or not-found because one web driver navigated to a different page).
In that case I suggest just using the window handle for the next instance, without caring about the session. Let the first driver open the session and the last one close it (count the referenced handles), and make sure only one driver uses a given handle at a time. You can also create new browser windows; this keeps the session and gives you a new handle:
var handle = firstWebDriver.CurrentWindowHandle;
otherWebDriver.SwitchTo().Window(handle);
I wrote the code in C#, but it should be easily adaptable to JavaScript.
I want to programmatically close the Server-Sent Events connection once a user logs out. However, when the user logs back in, the browser does not execute any further HTTP requests because it has reached its limit of SSE connections.
I'm using EventSource for listening to events.
This is how I close my connections:
var eventSource;

function onChange(accountId: string, callback) {
    var url = "...";
    eventSource = new EventSource(url);
    if (eventSource) {
        eventSource.addEventListener("put", callback);
    }
}

function close() {
    this.eventSource.close()
}
When I was observing the network connections in the browser, I realized the connection still exists. The output under Timing is: "Caution: request is not finished yet!", and the following event streams are stalled due to the limited number of connections.
I'm not sure if EventSource is designed to behave like this, but I could not find anything regarding this issue, since many people don't have the same scenario.
Every time I reload the page in my browser (Chrome), all existing connections are closed, but I don't want to reload the page to work around this issue.
Make sure that this.eventSource is referring to what you think it is. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this If called from an event handler it won't be the global scope.
After calling xxx.close(), set xxx to null. (I doubt that is the problem, but it might help the garbage collection, and also helps find bugs.)
As a 3rd idea of the problem, avoid giving globals the same name as built-in objects. I normally use es for my event source objects.
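For illustration, a minimal sketch along those lines, using a module-level variable instead of this and nulling it after close (variable names are illustrative):

let eventSource = null;

function onChange(accountId, callback) {
    const url = "...";
    eventSource = new EventSource(url);
    eventSource.addEventListener("put", callback);
}

function close() {
    if (eventSource) {
        eventSource.close();   // terminates the connection
        eventSource = null;    // drop the reference so a fresh one can be created on the next login
    }
}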
We're writing an app that monitors online presence. There are multiple scenarios where we need the user to have more than one browser window open. We're running into a problem where the user, after opening and running the firebase js code in a secondary browser window, will close that secondary window. This sets their presence to offline in the primary window because the onDisconnect event fires in the secondary window.
Is there a workaround for this scenario? Is this where the special /.info/connected location could be used?
The .info/connected presence data only tells a given client if they are linked up to the Firebase server, so it won't help you in this case. You could try one of these:
Reset the variable if it goes offline
If you have multiple clients monitoring the same variable (multiple windows falls into this category), it's natural to expect them to conflict about presence values. Have each one monitor the variable for changes and correct it as necessary.
var ref = firebaseRef.child(MY_ID).child('status');
ref.onDisconnect().remove();
ref.on('value', function(ss) {
    if( ss.val() !== 'online' ) {
        // another window went offline, so mark me still online
        ss.ref().set('online');
    }
});
This is fast and easy to implement, but may be annoying since my status might flicker to offline and then back online for other clients. That could, of course, be solved by putting a short delay on any change event before it triggers a UI update.
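For example, a short delay before reflecting an 'offline' value in the UI might look something like this (a sketch; showOnline/showOffline are illustrative placeholders for your own UI code, and firebaseRef/USER_ID are the same names used in the snippets above):

function showOnline()  { /* update the UI to show the user as online (illustrative) */ }
function showOffline() { /* update the UI to show the user as offline (illustrative) */ }

var statusRef = firebaseRef.child(USER_ID).child('status');
var offlineTimer = null;

statusRef.on('value', function(ss) {
    clearTimeout(offlineTimer);
    if (ss.val() === 'online') {
        showOnline();
    } else {
        // wait a moment in case another window flips the status back to 'online'
        offlineTimer = setTimeout(showOffline, 2000);
    }
});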
Give each connection its own path
Since the purpose of the onDisconnect is to tell you if a particular client goes offline, then we should naturally give each client its own connection:
var ref = firebaseRef.child(MY_ID).child('status').push(1);
ref.onDisconnect().remove();
With this use case, each client that connects adds a new child under status. If the path is not null, then there is at least one client connected.
It's actually fairly simple to check for online presence by just looking for a truthy value at status:
firebaseRef.child(USER_ID).child('status').on('value', function(ss) {
    var isOnline = ss.val() !== null;
});
A downside of this is that you really only have a truthy value for status (you can't set it to something like "idle" vs "online"), although this too could be worked around with a little ingenuity.
Transactions don't work in this case
My first thought was to try a transaction, so you could increment/decrement a counter when each window is opened/closed, but there doesn't seem to be a transaction method off the onDisconnect event. So that's how I came up with the multi-path presence value.
Another simple solution (using angularfire2, but pure Firebase is similar):
const objRef = this.af.database.list('/users/' + user_id);
const elRef = objRef.push(1);
elRef.onDisconnect().remove();
Each time a new tab is opened, a new element is added to the list. The onDisconnect reference is on that new element, so only that element is removed.
The user will be offline when this array is empty.