What I'm doing
I've been experimenting with Selenium and writing a simple program to make my Selenium testing life easier. Part of this is exercising WebElements and figuring out which actions (clicking, submitting, etc.) make them reload the page, remain static, or become stale without reloading the page. In this question I'm particularly interested in the third case, since I've already implemented the first two.
The problem I'm having
The problem I have is finding a WebElement that goes stale without causing a page reload. I can't think of a good way to search for one, I don't (yet) have the HTML and JavaScript skills to build one myself, and I can't verify that my code works unless I actually test it.
What I've done/tried
The first thing I thought to look for was a popup, but those aren't actually part of the webpage and they're also quite unreliable. I want something that's going to behave consistently, because otherwise the test won't work. I think dynamic WebElements, those that change their locators when acted upon, will suit my needs, but I have no good way of finding them. Any Google results for "self-deleting WebElement example" or "WebElement goes stale doesn't cause page reload example" or similar only give me questions on Stack Overflow like this one, rather than what I want: concrete examples. The code I'm running simply waits for a StaleElementReferenceException and for an onload event in JavaScript. If the StaleElementReferenceException occurs but the onload event does not, then I know I've found a self-deleting / dynamic WebElement (at least that's what I think is the proper way to detect this). Here is the code I'm running:
try {
//wait until the element goes stale
wait.until(ExpectedConditions.stalenessOf(webElement));
//init the async javascript callback script
String script = "var callback = arguments[arguments.length - 1];" +
"var classToCall = 'SeleniumTest.isPageReloaded';" +
"window.addEventListener('onload'," + "callback(classToCall));";
//execute the script and wait till it returns (unless timeout exceeded)
JavascriptExecutor js = (JavascriptExecutor) driver;
//execute the script and return the java classname to call
//if/when the callback function returns normally
String classToCall = (String) js.executeAsyncScript(script);
clazz = Class.forName(classToCall);
callbackMethod = clazz.getMethod("JavascriptWorking");
callbackMethod.invoke(null,null);
//page reloaded
threadcase = 1;
}
//waiting until webElement becomes stale exceeded timeoutSeconds
catch (TimeoutException e) {
//page was static
threadcase = 2;
}
//waiting until webElement Reloaded the page exceeded timeoutSeconds
catch (ScriptTimeoutException e) {
//the webElement became stale BUT didn't cause a page reload.
threadcase = 3;
}
As you can see above, there is an int variable named threadcase in this code. The three 'cases', starting from 1 (0 was the starting value, representing a program-flow error), correspond to the three (non-error) possible results of this test:
the page reloads
the page remains static, webelement doesn't change
the page remains static, webelement changes
And I need a good example with which to test the third case.
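For whatever it's worth, such a page doesn't have to be found in the wild; a minimal self-removing element can be built locally. Below is a rough sketch of what I mean (the file name, element id, and button markup are all invented for illustration): the button removes itself from the DOM when clicked, so the located element goes stale while no onload event ever fires.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SelfDeletingElementExample {
    public static void main(String[] args) throws Exception {
        //a one-button page whose button removes itself from the DOM when clicked;
        //nothing navigates, so no load event ever fires
        String html = "<html><body>"
                + "<button id='self-destruct' onclick='this.remove()'>click me</button>"
                + "</body></html>";
        Path page = Files.createTempFile("self-deleting", ".html");
        Files.write(page, html.getBytes(StandardCharsets.UTF_8));

        WebDriver driver = new ChromeDriver();
        try {
            driver.get(page.toUri().toString());
            WebElement button = driver.findElement(By.id("self-destruct"));
            button.click();

            //the old reference goes stale as soon as the node is removed,
            //even though the page itself never reloaded (threadcase 3 above)
            new WebDriverWait(driver, 10) //Selenium 3 style constructor
                    .until(ExpectedConditions.stalenessOf(button));
            System.out.println("Element went stale without a page reload.");
        } finally {
            driver.quit();
        }
    }
}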
Solutions I've considered
I've done some basic research into removing WebElements with JavaScript, but (A) I don't even know whether I can act on the page like that from Selenium, and (B) I'd rather have a test case that uses the webpage as-is, since introducing my own edits makes the validity of my test case depend on even more of my code (which is bad!). So what I need is a good way of finding a WebElement that matches my criteria, without having to scour the internet with the F12 window open hoping to find that one button that does what I need.
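On point A, Selenium can act on the live page like that through JavascriptExecutor, so the edit doesn't have to be made by hand in dev tools. A rough sketch, assuming driver and webElement are the same objects used in the code above:
//remove the previously located element from the DOM without navigating anywhere;
//no load event fires, but the existing reference becomes stale
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].parentNode.removeChild(arguments[0]);", webElement);
//any further use of the old reference should now throw StaleElementReferenceException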
Edit 1
I just tried doing this test more manually; it was suggested in an answer that I delete a WebElement by hand at the right time and then test my program that way. What I tested was the Google homepage. I used the Google Apps button, because clicking it doesn't cause the whole page to reload. So my thinking was: I'll click it, halt program execution, manually delete it, run the rest of my code, and since no onload events will occur, my program will pass the test. To my surprise, that's not what happened.
The exact code I ran is below. I had my debug stop on the first line:
1 Method callbackMethod = null;
2 try {
3 //wait until the element goes stale
4 wait.until(ExpectedConditions.stalenessOf(webElement));
5 //init the async javascript callback script
6 String script = "var callback = arguments[arguments.length - 1];" +
7 "var classToCall = 'SeleniumTest.isPageReloaded';" +
8 "window.addEventListener('onload', callback(classToCall));";
9 //execute the script and wait till it returns (unless timeout
10 //exceeded)
11 JavascriptExecutor js = (JavascriptExecutor) driver;
12 //execute the script and return the java classname to call if/when
13 //the callback function returns normally
14 String classToCall = (String) js.executeAsyncScript(script);
15 clazz = Class.forName(classToCall);
16 callbackMethod = clazz.getMethod("JavascriptWorking");
17 callbackMethod.invoke(null,null);
18 //page reloaded
19 threadcase = 1;
20 }
21 //waiting until webElement becomes stale exceeded timeoutSeconds
22 catch (TimeoutException e) {
23 //page was static
24 threadcase = 2;
25 }
26 //waiting until webElement Reloaded the page exceeded
27 //timeoutSeconds
28 catch (ScriptTimeoutException e) {
29 //the webElement became stale BUT didn't cause a page reload.
30 threadcase = 3;
31 //trying to get the class from javascript callback failed.
32 }
What's supposed to happen is that a stale WebElement causes the program to stop waiting on line 4; the program then progresses, initializes the JavaScript callback in lines 6-11, and on line 14 the call to executeAsyncScript is SUPPOSED to wait until an 'onload' event, which should only occur if the page reloads. Right now it's not doing that, or I'm blind. I must be confusing the program flow, because I'm 99% certain that no page reload happens when I manipulate the DOM to delete the WebElement I'm clicking on.
This is the URL I'm trying:
https://www.google.com/webhp?gws_rd=ssl
The simple Google homepage; the button I'm deleting is the Google Apps button (the black 9-dot grid in the top right).
some info on that element:
class="gb_8 gb_9c gb_R gb_g"
id="gbwa"
It's the general container element for the button itself and the dropdown it creates. I'm deleting this when my program hits the STOP on line 1, then I step through my program in the debugger. Note: you may have to click "inspect element" on the button more than once to focus in on it. I'm going to try deleting lower-level elements rather than the whole container and see if that changes anything, but this behavior still baffles me. The goal here is to get the program flow to threadcase 3, because that's the one we are testing for. There should be no page reload, BUT the WebElement should become stale after I manually delete it. I don't have any clue why the JavaScript callback is running when I can't see a page reload. Let me know if you need more info on what exactly I'm deleting on the Google homepage and I'll try sending a picture (with optional freehand circles, of course).
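(Side note: instead of deleting the container by hand in dev tools, the same edit could be scripted from the test at the breakpoint; a sketch using the id quoted above, which I haven't verified against Google's markup:)
//delete the Google apps container from the live DOM via script instead of dev tools;
//the page stays put (no reload), but any element located inside the container
//should become stale afterwards
((JavascriptExecutor) driver).executeScript(
        "var el = document.getElementById('gbwa');"
        + "if (el) { el.parentNode.removeChild(el); }");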
I would think that you could debug through a test, place a breakpoint at a suitable point, and then use the browser's dev tools to manually update the HTML.
Obviously, if you want this to be a repeatable process it is not an option, but if you are just investigating, a manual intervention could be suitable.
Related
I have a console program in C# with Selenium controlling a Chrome browser instance, and I want to get all links from a page.
But after the page has loaded in Selenium, the PageSource from Selenium is different from the HTML of the website I navigated to. The content of the page is loaded asynchronously by JavaScript, and the HTML is changed.
Even if I load the HTML of the website like the following, the HTML is still different from the one inside the Selenium-controlled browser window:
var html = ((IJavaScriptExecutor)driver).ExecuteScript("return document.getElementsByTagName('html')[0].outerHTML").ToString();
But why is the PageSource or the HTML returned by my JS still the same as it was when Selenium loaded the page?
EDIT:
As @BinaryBob has pointed out, I have now implemented a wait function that waits for a desired element to change a specific attribute value. The code looks like this:
private static void AttributeIsNotEmpty(IWebDriver driver, By locator, string attribute, int secondsToWait = 60)
{
new WebDriverWait(driver, new TimeSpan(0, 0, secondsToWait)).Until(d => IsAttributeEmpty(d, locator, attribute));
}
private static bool IsAttributeEmpty(IWebDriver driver, By locator, string attribute)
{
Console.WriteLine("Output: " + driver.FindElement(locator).GetAttribute(attribute));
return !string.IsNullOrEmpty(driver.FindElement(locator).GetAttribute(attribute));
}
And the function call looks like this:
AttributeIsNotEmpty(driver, By.XPath("/html/body/div[2]/c-wiz/div[4]/div[1]/div/div/div/div/div[1]/div[1]/div[1]/a[1]"), "href");
But the condition is never met and the timeout is thrown, even though inside the Chrome browser (which is controlled by Selenium) the condition is met and the element has a filled href attribute.
I'm taking a stab at this. Are you calling wait.Until(ExpectedConditions...) somewhere in your code? If not, that might be the issue. Just because a FindElement method has returned does not mean the page has finished rendering.
For a quick example, this code comes from the Selenium docs site. Take note of the creation of a WebDriverWait object (line 1) and the use of it in the firstResult assignment (line 4):
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
driver.Navigate().GoToUrl("https://www.google.com/ncr");
driver.FindElement(By.Name("q")).SendKeys("cheese" + Keys.Enter);
IWebElement firstResult = wait.Until(ExpectedConditions.ElementExists(By.CssSelector("h3>div")));
Console.WriteLine(firstResult.GetAttribute("textContent"));
If this is indeed the problem, you may need to read up on the various ways to use ExpectedConditions. I'd start here: Selenium Documentation: WebDriver Waits
I have over a hundred test scripts in Selenium C# within Visual Studio 2017. These scripts are distributed across numerous class files, named after the different functionalities/web pages of the site. BTW, this is telemetry software, so it uses SignalR functionality too... just giving you some background here. Now my problem is that whenever I run a script, let's say a test method in A.class which represents one web page, before that page loads it will go to another page (ALWAYS the same page, from B.class). It doesn't matter if I run test scripts (test methods) from C.class or D.class; they will always get interrupted by the landing page from within B.class. This is weird behavior and, funnily enough, it has only started happening since I began putting in lines of JavaScript. So please, can someone suggest a solution?
To be honest, I haven't tried anything apart from doing research on the net, and the closest thing I got was an article; please see the link:
https://blog.codeship.com/get-selenium-to-wait-for-page-load/
So I am guessing I need some code to wait for the page to load, so that the test doesn't get disturbed by the interrupting landing page?
Please see one of my scripts below, and critique it all you want:
[TestMethod]
[TestCategory("OrgMgt")]
public void EditOrgDetails()
{
System.Threading.Thread.Sleep(3000);
IJavaScriptExecutor executor = (IJavaScriptExecutor)driver;
executor.ExecuteScript("arguments[0].click()", userSetRepo.userIcon);
System.Threading.Thread.Sleep(3000);
objCommon.SendKeysAndClickTab(_regRep.organizationOption, "M-Powered Automation", driver);
ChooseOrgMgt();
System.Threading.Thread.Sleep(5000);
driver.FindElement(By.XPath("//span[@class='Select-arrow-zone']")).Click();
System.Threading.Thread.Sleep(3000);
IWebElement dd = driver.FindElement(By.Id("vt-org-details-default-zoom"));
dd.Click();
_regRep.ClearAndSetText(dd, "17");
_regRep.ClearAndSetText(_orgRep.supportPhoneNo, "6666666666");
_regRep.ClearAndSetText(_regRep.Lattitude, "567.876768");
_regRep.ClearAndSetText(_regRep.Longitude, "3434.56767");
_regRep.DistanceInput.SendKeys("Kilometers");
_regRep.DistanceInput.SendKeys(Keys.Tab);
_regRep.VehicleType.SendKeys("Car");
_regRep.VehicleType.SendKeys(Keys.Tab);
_regRep.TimeZone.SendKeys("Coordinated Universal Time-11");
_regRep.TimeZone.SendKeys(Keys.Tab);
_regRep.SearchAddress.SendKeys("room");
_regRep.SearchAddressOption.Click();
objCommon.ScrollInToView(_regRep.btnUpdateOrganization);
System.Threading.Thread.Sleep(5000);
Actions builder = new Actions(driver);
builder.MoveToElement(_regRep.btnUpdateOrganization).Click().Build().Perform();
System.Threading.Thread.Sleep(2000);
bool isDisplayed = false;
try
{
isDisplayed = _regRep.OrganizationUpdatedMsg.Displayed;
}
catch
{
isDisplayed = false;
}
if (!isDisplayed)
{
Console.WriteLine("Success Message is NOT displayed. Please check if record is added");
Assert.IsTrue(isDisplayed);
}
else
{
Console.WriteLine("Success Message is displayed.Record is successfuly added");
}
}
I need it to not be interrupted and go straight to the page I want.
I'm using the PHP + Ajax version of this template: http://192.241.236.31/themes/preview/smartadmin/1.5/ajaxversion/, with Codeception WebDriver (Selenium) to perform my tests. Most of them are working fine, but I have some random, yes, random!, failing tests, so I always get back to them and try to harden them, hoping they will not randomly fail sometime in the future. One of the usual failure causes is wrong clicks on the interface. I tell Codeception to click #element-id, and when it fails I see in the output PNG that a different page was actually showing, although the activated link shows it tried to click the correct link. Sometimes I just have to load a different page before clicking that particular link and wait 10 seconds for it to render, and it works. Silly, huh? Yeah, but sometimes this doesn't work either.
This is how I used to test things:
$I->click('#element-id');
And I have to use the element id, because it's a multi-language app; all (well, mostly all) strings come from a strings file and they might change at some point and break tests.
After too many random test failures, I decided to make the frontend itself responsible for clicking things during tests. It's a long and silly circumventing shot, but it should work, so I created a functional helper:
public function clickElement($element)
{
$I = $this->getDriver();
$I->executeJS("clickElement('{$element}');");
}
And two Javascript functions:
function clickElement(element)
{
element = jQuery(element);
if(typeof element !== 'undefined')
{
fakeClick(element[0]);
}
return true;
}
function fakeClick(object)
{
if (object.click)
{
object.click();
}
else if(document.createEvent)
{
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent("click", true, true, window,
0, 0, 0, 0, 0, false, false, false, false, 0, null);
var allowDefault = object.dispatchEvent(evt);
}
}
I'm using jQuery in the first one because it's already available in the template and it's easier to select non-id'ed things.
This is working fine, and you can test it yourself by
1) Opening the page:
http://192.241.236.31/themes/preview/smartadmin/1.5/ajaxversion/
2) Loading the script in your browser console by executing
loadScript('https://www.dropbox.com/s/kuh22fjjsa8zq0i/app.js?dl=1');
3) And clicking things:
Calendar
clickElement($x('//*[@id="left-panel"]/nav/ul/li[7]/a/span'));
Widgets
clickElement($x('//*[@id="left-panel"]/nav/ul/li[8]/a/span'));
Because the template uses the hash character to control the Ajax part of the page rather than reload everything, I had to (programmatically) add the referer URL to all my href links so that, in my app, I could redirect back to the referer, as the hash part of the accessed URL is not sent to the server. This is a link example:
<a href="/users?referer-href-url=/dashboard" title="Users" id="users-navigation">
<span class="menu-item-parent">Users</span>
</a>
Which basically means the user was looking at the dashboard when she clicked the users link. If something goes wrong, the app will redirect the user back to the dashboard with an error message.
The problem is that, somehow, during tests, the current url, when tested using:
$I->seeCurrentUrlEquals('/#/clients?referer-href-url=/clients');
Becomes
/#/clients
Instead of
/#/clients?referer-href-url=/clients
This only happens sometimes; other times it works fine. If I browse the site manually it works 100% of the time, and I always see the correct URL in the address bar. And if I manually execute clickElement() it also works fine. The problem only happens when my suite is running.
Here's an example of it passing (green):
And the exact same test randomly failing (red):
This is the code related to the failing test:
$I->clickElement('#clients-navigation');
$I->waitForText('Jane Doe', 10);
$I->seeCurrentUrlEquals('/#/clients?referer-href-url=/clients');
I usually wait for something after a click, so the page can have time to render.
You can also see that there are a lot of "I click element" steps using those PHP and JavaScript functions without a problem.
I'm using:
Ubuntu 14.04
PHP 5.5.9
Codeception 2.0.7
Selenium 2.44.0
Firefox 33.0
So, what could be the problem here?
Is this the correct way to click an element in Javascript?
In the process I also experience tests which do not fail when run singly but fail when run in a batch, but this might be a different question, right?
I am trying to start 3 applications from a browser by use of custom protocol names associated with these applications. This might look similar to other threads started on Stack Overflow, but I believe they do not help in resolving this issue, so please don't close this thread just yet; it needs a different approach than those suggested in other threads.
example:
ts3server://a.b.c?property1=value1&property2=value2
...
...
to start these applications I would do
location.href = 'ts3server://a.b.c?property1=value1&property2=value2';
location.href = ...
location.href = ...
This works in FF but not in Chrome. I figured that Chrome might be optimizing the number of writes, so that effectively only the last change takes effect. So I did this:
function a ()
{
var apps = ['ts3server://...', 'anotherapp://...', '...'];
b(apps);
}
function b (apps)
{
if (apps.length == 0) return;
location.href = apps[0]; alert(apps[0]);
setTimeout(function (rest) {return function () {b(rest);};} (apps.slice(1)), 1);
}
But it didn't solve my problem. Actually, only the first location.href assignment is taken into account, and even though the other calls happen long enough after the first one (thanks to changing the timeout delay to, let's say, 10000), the applications do not get started (though the alerts are displayed).
If I try accessing each of the URIs separately the apps get started (first I call location.href = uri1 by clicking on one button, then I call location.href = uri2 by clicking again on another button).
Replacing:
location.href = ...
with:
var form = document.createElement('form');
form.action = ...
document.body.appendChild(form);
form.submit();
does not help either, nor does:
var frame = document.createElement('iframe');
frame.src = ...
document.body.appendChild(frame);
Is it possible to do what I am trying to do? How would it be done?
EDIT:
A reworded summary:
I want to start MULTIPLE applications after one click on a link or a button-like element. I want to achieve that by starting applications associated with custom protocols. I hold a list of links (each link using one protocol) and try to do location.href = link for every item of the list. When done with a plain for loop this gets optimized down to a single assignment (the last value), so I made the function recursive with a delay, which eliminates the optimization and really forces 3 distinct location.href assignments (the list gets sliced before each call, so all the links are taken into account). This all works just fine in Mozilla Firefox, but in Chrome, after the first assignment the rest of the assignments lose their effect (they are probably performed but don't trigger the associated application launch).
Are you having trouble looping through the elements? If so, try the for..in statement here.
Or are you having trouble navigating? If so, try window.location.assign(new_location);
[edit]
You can also use window.location = "...";
[edit]
OK, so I did some work, and here is what I got. In the example I open a random ace of spades link, which uses a custom protocol. Click here and then click on the "click me". The comments show where the JSFiddle debugger found errors.
I am trying to fix the performance problem with Dive Into Python 3 on IE8. Visit this page in IE8 and, after a few moments, you will see the following popup:
(Screenshot of the popup: http://dl.getdropbox.com/u/87045/permalinks/dip3-ie8-perf.png)
I traced the culprit down to this line in j/dip3.js:
... find("tr:nth-child(" + (i+1) + ") td:nth-child(2)");
If I disable it (and return from the function immediately), the "Stop executing this script?" dialog does not appear as the page now loads fairly fast.
I'm no JavaScript/jQuery expert, so I'm asking you, fellow developers, why this query is making IE slow. Is there a fix for it?
Edit: you can download the entire webpage (980K) for local viewing/editing.
This seems to need a bit of rewriting.
nth-child is a slow operation. You should implement the current functionality by generating classes or ids that are common to the TDs in the table and the elements from the refs collection (dip3.js, line 183), and then:
refs.each(function(i) {
var a = $(this);
var li = a.parents("pre").next("table").find("td."+a.attr('class'));
li.add(a).hover(function() { a.css(hip); li.css(hip); },
function() { a.css(unhip); li.css(unhip); });
});
This popup message is misleading - it doesn't actually mean that IE is running slowly, but that the number of executed script statements has exceeded a certain threshold. Even if the script executes very quickly, you'll still see this message if you go over the limit. The only way to get rid of it is to reduce the number of statements executed or edit the registry.
http://support.microsoft.com/kb/175500
I find Microsoft's implementation of this very annoying. It makes assumptions about the speed of your computer.