WebDriver executing JavaScript strange behaviour

I'm using WebDriver through the JBehave-Web distribution (3.3.4) to test an application and I'm facing something quite strange:
I'm trying to interact with a modalPanel from RichFaces, which gave me a lot of problems because it throws ElementNotVisibleException. I solved it by using JavaScript.
This is the code in my page object, which extends org.jbehave.web.selenium.WebDriverPage:
protected void changeModalPanelInputText(String elementId, String textToEnter){
makeNonLazy();
JavascriptExecutor je = (JavascriptExecutor) webDriver();
String script ="document.getElementById('" + elementId + "').value = '" + textToEnter + "';";
je.executeScript(script);
}
The strange behaviour is that if I execute the test normally, it does nothing, but if I put a breakpoint in the last line (in Eclipse), select the line and execute from Eclipse (Ctrl + U), I can see the changes in the browser.
I checked the JavascriptExecutor and the WebDriver classes to see if there was any kind of buffering, but I couldn't find anything. Any ideas?
EDIT
I found out that putting the thread to sleep for one second makes it work, so it looks like some kind of race condition, but I can't figure out why...
This is the way it "works", but I'm not very happy about it:
protected void changeModalPanelInputText(String elementId, String textToEnter){
String script ="document.getElementById('" + elementId + "').value = '" + textToEnter + "';";
executeJavascript(script);
}
private void executeJavascript(String script){
makeNonLazy();
JavascriptExecutor je = (JavascriptExecutor) webDriver();
try {
Thread.sleep(1500);
} catch (InterruptedException e) {
e.printStackTrace();
}
je.executeScript(script);
}
Putting the wait in any other position doesn't work either...

First idea:
Ensure that the target element is initialized and enumerable. See if this returns null:
Object objValue = je.executeScript(
"return document.getElementById('"+elementId+"');");
Since you're using makeNonLazy(), probably just add the target as a WebElement member of your Page Object (assuming Page Factory type of initialization in JBehave).
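If you go the Page Factory route hinted at above, the target would simply become a field on the page object, roughly like this (a sketch only; @FindBy comes from org.openqa.selenium.support, and the field name and id are hypothetical, not taken from the original post):
// Sketch: Page Factory-style field for the modal panel input.
// The id "modalPanelInput" is a placeholder.
@FindBy(id = "modalPanelInput")
private WebElement modalPanelInput;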
Second idea:
Explicitly wait for the element to be available before mutating:
/**
* re-usable utility class
*/
public static class ElementAvailable implements Predicate<WebDriver> {
private static String IS_NOT_UNDEFINED =
"return (typeof document.getElementById('%s') != 'undefined');";
private final String elementId;
private ElementAvailable(String elementId) {
this.elementId = elementId;
}
@Override
public boolean apply(WebDriver driver) {
Object objValue = ((JavascriptExecutor)driver).executeScript(
String.format(IS_NOT_UNDEFINED, elementId));
return (objValue instanceof Boolean && ((Boolean)objValue));
}
}
...
protected void changeModalPanelInputText(String elementId, String textToEnter){
makeNonLazy();
// wait at most 3 seconds before throwing an unchecked Exception
long timeout = 3;
(new WebDriverWait(webDriver(), timeout))
.until(new ElementAvailable(elementId));
// element definitely available now
String script = String.format(
"document.getElementById('%s').value = '%s';",
elementId,
textToEnter);
((JavascriptExecutor) webDriver()).executeScript(script);
}
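As a side note, if the Selenium release in use ships org.openqa.selenium.support.ui.ExpectedConditions, the custom Predicate can be swapped for a built-in condition. A minimal sketch, assuming the same makeNonLazy()/webDriver() helpers from the page object above:
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

protected void changeModalPanelInputText(String elementId, String textToEnter) {
    makeNonLazy();
    // Wait (up to 3 seconds) until the element is present in the DOM.
    new WebDriverWait(webDriver(), 3)
            .until(ExpectedConditions.presenceOfElementLocated(By.id(elementId)));
    ((JavascriptExecutor) webDriver()).executeScript(
            String.format("document.getElementById('%s').value = '%s';",
                    elementId, textToEnter));
}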

Related

Wait for script to be done before switching to a frame in selenium

I'm trying to switch to an iframe, but I must wait for a script to finish before switching.
As I asked here: Selenium switch to the wrong iframe when closing another one (and answered myself), I can force the switch to the parent frame and use the same idea to switch to a frame whether or not a script is involved.
The problem is that forcing the switch isn't a good way to do it. It's similar to using Thread.sleep(500) because I know it takes approximately 500 ms to execute the script.
So I'm trying to wait for jQuery and JavaScript to be done by using:
// Wait for JS
public static ExpectedCondition<Boolean> waitForJSToLoad() {
return new ExpectedCondition<Boolean>() {
@Override
public Boolean apply(final WebDriver driver) {
return ((JavascriptExecutor) driver).executeScript("return document.readyState").toString().equals("complete");
}
};
}
// Wait for jQuery
public static ExpectedCondition<Boolean> waitForAjaxResponse() {
return new ExpectedCondition<Boolean>() {
@Override
public Boolean apply(final WebDriver driver) {
try {
final JavascriptExecutor js = (JavascriptExecutor) driver;
return js.executeScript(
"return((window.jQuery == null )||(window.jQuery != null && jQuery.active === 0))").equals(true);
} catch (final WebDriverException e) {
LOG.info("Error while waiting for jQuery");
return false;
}
}
};
}
But all the return statements are true, so I get a TimeoutException: JS and jQuery are reported as already done when I switch to the frame, even though that isn't actually the case.
The script looks like this:
function _optimizeSize() {
var $popin = $('#iframePopin'),
// Some var init
$dialog.transition({top: optimizeTop});
$popin.transition({height: contentHeight + actionBarHeight + popinPaddingHeight});
$iFrame.transition({height: contentHeight});
}
I looked at some similar cases, but I couldn't find any answer to my problem.
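For reference, this is roughly how the two conditions above would be chained before the switch (a minimal sketch; the 10-second timeout and the frame locator "iframePopin" are placeholders, the latter borrowed from the script above and not necessarily the real frame id). Note that document.readyState and jQuery.active only cover page load and pending jQuery AJAX calls, not transitions that are still running:
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(waitForJSToLoad());
wait.until(waitForAjaxResponse());
// Switch only after both checks have passed.
driver.switchTo().frame("iframePopin");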

Call Java method from JavaScript and return value to JavaScript

There are a few SO questions and other articles about calling a Java class method from JavaScript, but all of them deal with Java methods with return type void.
Here is what I am trying to achieve:
There are two strings to be displayed in a WebView, say Yes and No. They need to be localized, and hence I want to get the string values from a Java method rather than using a separate JS file for each locale.
Here's the code sample:
Java class
onCreate(){
//Some code
contentWebView.addJavascriptInterface(new CalculatorJavaScriptCallInterface(), "calculatorjavascriptcallinterface");
//Some code
}
String localizedString = "";
private class CalculatorJavaScriptCallInterface {
CalculatorJavaScriptCallInterface() {
}
@JavascriptInterface
public String getLocalizedString(final int stringId) {
localizedString = getResources().getString(stringId);
Toast.makeText(getActivity(), "localizedString :: " + localizedString, Toast.LENGTH_SHORT).show();
return localizedString;
}
}
Javascript file
function Checkboxpicker(element, options) {
//Some code
this.options = $.extend({}, $.fn.checkboxpicker.defaults, options, this.$element.data());
}
$.fn.checkboxpicker.defaults = {
//EXISTING STRINGS
//offLabel: 'No',
//onLabel: 'Yes',
offLabel: window.calculatorjavascriptcallinterface.getLocalizedString("Consult.JSSupport.checkbox.selected"),
onLabel: window.calculatorjavascriptcallinterface.getLocalizedString("Consult.JSSupport.checkbox.notSelected"),
};
I am getting a blank string as output when I run the above code.
Here are some notes:
- This JavaScript is being used correctly, as it works if I use hard-coded strings.
- The respective strings have been defined in strings.xml.
- I tried calculatorjavascriptcallinterface in both camel case and lower case.
- I tried calling the Java method with and without the window. prefix.
- Returning a hard-coded value from the Java method works.
Any suggestions will be appreciated. Thanks in advance!
EDIT
I'm getting the following error even though the string is present in strings.xml:
No package identifier when getting value for resource number 0x00000000
android.content.res.Resources$NotFoundException: String resource ID #0x0
It looks like an issue with resolving the string resource from the proper ID. Change your getLocalizedString as follows:
@JavascriptInterface
public String getLocalizedString(final String stringId) {
localizedString = getResources().getString(getResources().getIdentifier(stringId, "string", getContext().getPackageName()));
Toast.makeText(getActivity(), "localizedString :: " + localizedString, Toast.LENGTH_SHORT).show();
return localizedString;
}
1.)
mWebView.getSettings().setJavaScriptEnabled(true);
mWebView.getSettings().setDomStorageEnabled(true);
mWebView.addJavascriptInterface(new WebViewInterface(this), "Android");
mWebView.loadUrl("url or html file path");
2.)
public class WebViewInterface {
Context mContext;
WebViewInterface(Context mContext) {
this.mContext = mContext;
}
@JavascriptInterface
public void performAction(String pro_cat_id) {
// write your code here to perform any action
}
}
3.)
<html>
<head>
<script type="text/javascript" src="/js/MyJS.js"></script>
</head>
<body>
<button onClick="mClick('1');">Yes</button>
<button onClick="mClick('0');">No</button>
</body>
</html>
4.) javascript: MyJS.js
function mClick(mValue)
{
Android.performAction(mValue);
}
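One detail worth keeping in mind with step 2.): methods annotated with @JavascriptInterface are invoked on a background thread, not on the UI thread, so any UI work has to be posted back to the main thread. A sketch of performAction with that in mind (the toast is purely illustrative):
@JavascriptInterface
public void performAction(final String pro_cat_id) {
    // JavaScript-interface callbacks arrive on a background ("JavaBridge") thread,
    // so hand any UI work to the main looper.
    new Handler(Looper.getMainLooper()).post(new Runnable() {
        @Override
        public void run() {
            Toast.makeText(mContext, "Clicked id: " + pro_cat_id, Toast.LENGTH_SHORT).show();
        }
    });
}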

Having trouble returning Javascript Object Selenium C#

So basically what I am trying to do is set up a proxy, using Fiddler's proxy library, to intercept my call to a website and insert a script tag in the header to catch JavaScript errors. It looks like this:
<script>
window.__webdriver_javascript_errors = [];
window.onerror = function(errorMsg, url, line)
{ window.__webdriver_javascript_errors.push(errorMsg + ' (found at ' + url + ', line ' + line + ')'); };
</script>
That all works great, and it is catching errors before the page loads. My issue is that when I go to the page, I can't actually return the JavaScript object from it.
public static IList<string> GetJavaScriptErrors(IWebDriver driver, TimeSpan timeout)
{
string errorRetrievalScript = "var errorList = window.__webdriver_javascript_errors; window.__webdriver_javascript_errors = []; return errorList;";
DateTime endTime = DateTime.Now.Add(timeout);
List<string> errorList = new List<string>();
IJavaScriptExecutor executor = driver as IJavaScriptExecutor;
List<object> returnedList = executor.ExecuteScript(errorRetrievalScript) as List<object>;
while (returnedList == null && DateTime.Now < endTime)
{
System.Threading.Thread.Sleep(250);
returnedList = executor.ExecuteScript(errorRetrievalScript) as List<object>;
}
if (returnedList == null)
{
return null;
}
else
{
foreach (object returnedError in returnedList)
{
errorList.Add(returnedError.ToString());
}
}
return errorList;
}
Now when I run this, my returnedList never gets the result of errorRetrievalScript. I cannot figure out why I always get null back.
The weird thing is that before I run the JavaScript executor, if I go to Firefox and type
window.__webdriver_javascript_errors
into the console, all the errors show up just fine. But the second I hit that executor, the errors vanish (which is what I want to happen, and that works), yet the return never returns anything.
What am I doing wrong?
EDIT:
The Selenium and browser versions I am using are:
Firefox: 47.0.1
Chrome: 51.0.2704.103
IE: 11.420.10586.0
Selenium: 2.53.1
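For what it's worth, the bindings differ in how a returned JavaScript array is represented: the Java bindings hand back a java.util.List<Object>, while the .NET bindings hand back a ReadOnlyCollection<object>, so an "as List<object>" cast comes out null. Below is a Java sketch of the same retrieval, just to show the expected shape of the result (the helper name is mine):
import java.util.ArrayList;
import java.util.List;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public static List<String> getJavaScriptErrors(WebDriver driver) {
    // For a JS array, the Java bindings return a java.util.List<Object>.
    @SuppressWarnings("unchecked")
    List<Object> returned = (List<Object>) ((JavascriptExecutor) driver).executeScript(
            "var errorList = window.__webdriver_javascript_errors; "
            + "window.__webdriver_javascript_errors = []; return errorList;");
    List<String> errors = new ArrayList<String>();
    if (returned != null) {
        for (Object error : returned) {
            errors.add(String.valueOf(error));
        }
    }
    return errors;
}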

Accessing Shadow DOM tree with Selenium

Is it possible to access elements within a Shadow DOM using Selenium/Chrome webdriver?
Using the normal element search methods doesn't work, as is to be expected. I've seen references to the switchToSubTree spec on w3c, but couldn't locate any actual docs, examples, etc.
Anyone had success with this?
The accepted answer is no longer valid, and some of the other answers have drawbacks or are impractical: the /deep/ selector no longer works and is deprecated, and document.querySelector('...').shadowRoot only reaches the first shadow root when shadow elements are nested. Shadow roots are sometimes nested, and the second shadow root is not visible from the document root, but it is available inside the first shadow root once you have accessed it. I think it is better to use the Selenium selectors and inject a script only to grab the shadow root:
def expand_shadow_element(element):
shadow_root = driver.execute_script('return arguments[0].shadowRoot', element)
return shadow_root
outer = expand_shadow_element(driver.find_element_by_css_selector("#test_button"))
inner = outer.find_element_by_id("inner_button")
inner.click()
To put this into perspective, here is a testable example using Chrome's downloads page; clicking the search button requires opening three nested shadow root elements:
import selenium
from selenium import webdriver
driver = webdriver.Chrome()
def expand_shadow_element(element):
shadow_root = driver.execute_script('return arguments[0].shadowRoot', element)
return shadow_root
driver.get("chrome://downloads")
root1 = driver.find_element_by_tag_name('downloads-manager')
shadow_root1 = expand_shadow_element(root1)
root2 = shadow_root1.find_element_by_css_selector('downloads-toolbar')
shadow_root2 = expand_shadow_element(root2)
root3 = shadow_root2.find_element_by_css_selector('cr-search-field')
shadow_root3 = expand_shadow_element(root3)
search_button = shadow_root3.find_element_by_css_selector("#search-button")
search_button.click()
Taking the approach suggested in the other answers has the drawback that it hard-codes the queries, is less readable, and does not let you reuse the intermediate selections for other actions:
search_button = driver.execute_script('return document.querySelector("downloads-manager").shadowRoot.querySelector("downloads-toolbar").shadowRoot.querySelector("cr-search-field").shadowRoot.querySelector("#search-button")')
search_button.click()
It should also be noted that the Selenium binary Chrome driver now supports Shadow DOM (since Jan 28, 2015) : http://chromedriver.storage.googleapis.com/2.14/notes.txt
Unfortunately it looks like the webdriver spec does not support this yet.
My snooping uncovered:
http://www.w3.org/TR/webdriver/#switching-to-hosted-shadow-doms
https://groups.google.com/forum/#!msg/selenium-developers/Dad2KZsXNKo/YXH0e6eSHdAJ
I am using C# and Selenium and managed to find an element inside a nested shadow DOM using JavaScript.
This is my HTML tree:
(screenshot of the HTML tree omitted)
I want the URL on the last line, and to get it I first select the downloads-manager tag and then the first shadow root right below it.
Once inside that shadow root, I look for the element closest to the next shadow root, which is downloads-item. With that selected I can enter the second shadow root. From there I select the img element containing the URL by id = "file-icon". Finally, I read the "src" attribute, which contains the URL I am after.
The two lines of C# code that do the trick:
IJavaScriptExecutor jse2 = (IJavaScriptExecutor)_driver;
var pdfUrl = jse2.ExecuteScript("return document.querySelector('downloads-manager').shadowRoot.querySelector('downloads-item').shadowRoot.getElementById('file-icon').getAttribute('src')");
Normally you'd do this:
element = webdriver.find_element_by_css_selector(
'my-custom-element /deep/ .this-is-inside-my-custom-element')
And hopefully that'll continue to work.
However, note that /deep/ and ::shadow are deprecated (and not implemented in browsers other than Opera and Chrome). There's much talk about allowing them in the static profile. Meaning, querying for them will work, but not styling.
If don't want to rely on /deep/ or ::shadow because their futures are a bit uncertain, or because you want to work better cross-browser or because you hate deprecation warnings, rejoice as there's another way:
# Get the shadowRoot of the element you want to intrude in on,
# and then use that as your root selector.
shadow_root = webdriver.execute_script('''
return document.querySelector(
'my-custom-element').shadowRoot;
''')
element = shadow_root.find_element_by_css_selector(
'.this-is-inside-my-custom-element')
More about this:
https://github.com/w3c/webcomponents/issues/78
https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/68qSZM5QMRQ/pT2YCqZSomAJ
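The same shadowRoot trick can be written with the Java bindings as well. A sketch, modelled on the ByShadow helper further down in this thread; it relies on the driver returning the shadow root as an element-like object, which the ChromeDriver of that era did:
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public static WebElement expandShadowRoot(WebDriver driver, WebElement host) {
    // Returns the shadow root of the given host element so it can be used
    // as the search context for further lookups.
    return (WebElement) ((JavascriptExecutor) driver)
            .executeScript("return arguments[0].shadowRoot", host);
}

// Usage (locators match the Chrome downloads example above):
// WebElement root = driver.findElement(By.tagName("downloads-manager"));
// WebElement shadow = expandShadowRoot(driver, root);
// WebElement toolbar = shadow.findElement(By.cssSelector("downloads-toolbar"));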
I found a much easier way to get elements out of the shadow DOM.
I am using the same example given above, the search icon of Chrome's downloads page.
IWebDriver driver;
public IWebElement getUIObject(params By[] shadowRoots)
{
IWebElement currentElement = null;
IWebElement parentElement = null;
int i = 0;
foreach (var item in shadowRoots)
{
if (parentElement == null)
{
currentElement = driver.FindElement(item);
}
else
{
currentElement = parentElement.FindElement(item);
}
if(i !=(shadowRoots.Length-1))
{
parentElement = expandRootElement(currentElement);
}
i++;
}
return currentElement;
}
public IWebElement expandRootElement(IWebElement element)
{
IWebElement rootElement = (IWebElement)((IJavaScriptExecutor)driver)
.ExecuteScript("return arguments[0].shadowRoot", element);
return rootElement;
}
(screenshot of the Google Chrome downloads page omitted)
Now, as shown in the image, we have to expand three shadow root elements in order to reach the search icon.
To click on the icon, all we need to do is:
[TestMethod]
public void test()
{
IWebElement searchButton = getUIObject(By.CssSelector("downloads-manager"), By.CssSelector("downloads-toolbar"), By.Id("search-input"), By.Id("search-button"));
searchButton.Click();
}
So just one line gives you your web element; you just need to make sure you pass the first shadow root host as the first argument of getUIObject, the second shadow root host as the second argument, and so on. The final argument is the locator for the actual element you want (in this case 'search-button').
Until Selenium supports the shadow DOM out of the box, you can try the following workaround in Java. Create a class that extends the By class:
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.SearchContext;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.WrapsDriver;
import org.openqa.selenium.internal.FindsByCssSelector;
import java.io.Serializable;
import java.util.List;
public class ByShadow {
public static By css(String selector) {
return new ByShadowCss(selector);
}
public static class ByShadowCss extends By implements Serializable {
private static final long serialVersionUID = -1230258723099459239L;
private final String cssSelector;
public ByShadowCss(String cssSelector) {
if (cssSelector == null) {
throw new IllegalArgumentException("Cannot find elements when the selector is null");
}
this.cssSelector = cssSelector;
}
@Override
public WebElement findElement(SearchContext context) {
if (context instanceof FindsByCssSelector) {
JavascriptExecutor jsExecutor;
if (context instanceof JavascriptExecutor) {
jsExecutor = (JavascriptExecutor) context;
} else {
jsExecutor = (JavascriptExecutor) ((WrapsDriver) context).getWrappedDriver();
}
String[] subSelectors = cssSelector.split(">>>");
FindsByCssSelector currentContext = (FindsByCssSelector) context;
WebElement result = null;
for (String subSelector : subSelectors) {
result = currentContext.findElementByCssSelector(subSelector);
currentContext = (FindsByCssSelector) jsExecutor.executeScript("return arguments[0].shadowRoot", result);
}
return result;
}
throw new WebDriverException(
"Driver does not support finding an element by selector: " + cssSelector);
}
@Override
public List<WebElement> findElements(SearchContext context) {
if (context instanceof FindsByCssSelector) {
JavascriptExecutor jsExecutor;
if (context instanceof JavascriptExecutor) {
jsExecutor = (JavascriptExecutor) context;
} else {
jsExecutor = (JavascriptExecutor) ((WrapsDriver) context).getWrappedDriver();
}
String[] subSelectors = cssSelector.split(">>>");
FindsByCssSelector currentContext = (FindsByCssSelector) context;
for (int i = 0; i < subSelectors.length - 1; i++) {
WebElement nextRoot = currentContext.findElementByCssSelector(subSelectors[i]);
currentContext = (FindsByCssSelector) jsExecutor.executeScript("return arguments[0].shadowRoot", nextRoot);
}
return currentContext.findElementsByCssSelector(subSelectors[subSelectors.length - 1]);
}
throw new WebDriverException(
"Driver does not support finding elements by selector: " + cssSelector);
}
@Override
public String toString() {
return "By.cssSelector: " + cssSelector;
}
}
}
And you can use it without writing any additional functions or wrappers. This should work with any kind of framework. For example, in pure Selenium code this would look like this:
WebElement searchButton =
driver.findElement(ByShadow.css(
"downloads-manager >>> downloads-toolbar >>> cr-search-field >>> #search-button"));
or if you use Selenide:
SelenideElement searchButton =
$(ByShadow.css("downloads-manager >>> downloads-toolbar >>> cr-search-field >>> #search-button"));
This worked for me (using the Selenium JavaScript bindings):
driver.executeScript("return $('body /deep/ <#selector>')")
That returns the element(s) you're looking for.
For getting the filename of the latest downloaded file in Chrome:
def get_downloaded_file(self):
filename = self._driver.execute_script("return document.querySelector('downloads-manager').shadowRoot.querySelector('#downloadsList downloads-item').shadowRoot.querySelector('div#content #file-link').text")
return filename
Usage:
driver.get_url('chrome://downloads')
filename = driver.get_downloaded_file()
And to configure the default download directory for the Chrome browser in Selenium, so the corresponding file can be picked up from a known location:
..
chrome_options = webdriver.ChromeOptions()
..
prefs = {'download.default_directory': '/desired-path-to-directory'} # unix
chrome_options.add_experimental_option('prefs', prefs)
..

Infinite scroll in Android WebView

I have some local HTML files and I want to show them with an infinite-scroll approach.
NOTE: I can't change the HTML content, so please don't advise adding JavaScript to the files; I must do it at run time.
So, I figured out that I can execute JavaScript at runtime via loadUrl("javascript: ....").
I overrode the onOverScrolled() method of WebView to find out when the user reaches the end of the WebView (it behaves correctly, so the problem is not there).
The problem is that sometimes the new content is attached successfully and other times it doesn't get attached.
In the log I can see that the end-of-page method is triggered, the new HTML body is retrieved, and the JavaScript code is executed, but it has no effect.
Here is my code; maybe something went wrong and I cannot see it:
@Override
protected void onOverScrolled(int scrollX, int scrollY, boolean clampedX, boolean clampedY)
{
super.onOverScrolled(scrollX, scrollY, clampedX, clampedY);
if(clampedY & reloadFlag) // reloadFlag is initially false; it is set to true once WebViewClient.onPageFinished() is called
{
if (!(isVerticalScrollPossible(SCROLL_DOWN)))
{
reloadFlag = false;
currUri = nextResource(currUri); //findout next page
appendNextPage();
}
}
}
private final int SCROLL_DOWN = 1;
private final int SCROLL_UP = -1;
private boolean isVerticalScrollPossible(int direction)
{
final int offset = computeVerticalScrollOffset();
final int range = computeVerticalScrollRange() - computeVerticalScrollExtent();
if (range == 0) return false;
if (direction < 0) {
return offset > 0;
} else {
return offset < range - 1;
}
}
public String getNextPageJS(Uri currPage)
{
String body = getNextPageBody(currPage);
//Log.d("myTAG", body);
String jsResult = "javascript:(function() { document.body.innerHTML += '<div id=\"separator\" style=\"height:10px; margin-top:10px; margin-bottom:10px; background-color:#000000;\"></div>" + body + "';})()";
return jsResult;
}
private void appendNextPage()
{
reloadFlag = false;
Thread appendThread = new Thread(null, doAppend, "backgroundAppend");
appendThread.start();
Log.i("appendNextPage", "get called");
}
public String rs = "";
private Runnable doAppend = new Runnable()
{
@Override
public void run()
{
Log.i("doAppend", "get called + currUri: " + currUri);
rs = getNextPageJS(currUri);
//loadUrl(rs);
appendHandler.sendEmptyMessage(0);
}
};
private Handler appendHandler = new Handler()
{
public void handleMessage(Message msg)
{
loadUrl(rs);
reloadFlag = true;
Log.i("appendHandler", "get called");
}
};
NOTE: sometimes I get this in the emulator log (not on a real device):
I/chromium(1339): [INFO:CONSOLE(1)] "Uncaught SyntaxError: An invalid or illegal string was specified.", source: http://localhost:1025/OEBPS/Text/Section0042.xhtml (1)
The page number is different from time to time; maybe it's caused by bad JavaScript code, I don't know.
Hints:
1) I'm not a JavaScript coder, so maybe the JavaScript code is not good.
2) Or maybe calling the JavaScript code several times causes this problem.
3) I know that the JavaScript code must execute after the page has loaded completely, so maybe the code is called too soon. The catch with that theory is that onPageFinished() is called only for the first page and is not called when new content is attached via JavaScript; I tried to solve this with a thread, and I think it worked.
UPDATE: I figured out that this code works fine when the HTML body is small, but when I try to attach a large body it doesn't work. Does the loadUrl() method have a character limit? Any other ideas?
OK, I found the problem, in case anyone wants to know.
The problem is that loadUrl() (at least in my case) cannot load too many HTML tags at once in the JavaScript code I had written.
So the solution is easy: load the tags one by one.
Here is the code I used:
public ArrayList<String> getNextPageBody(Uri currAddress)
{
String html = getHtml(currAddress); // the full HTML of the next file
//get body elements as arrayList, using jsoup
Document doc = Jsoup.parse(html);
Elements elements = doc.select("body").first().children();
ArrayList<String> chunks = new ArrayList<String>();
for (org.jsoup.nodes.Element el : elements)
{
chunks.add(el.toString());
}
return chunks;
}
public void loadBodyChunk(ArrayList<String> bodyChunks)
{
//show a separator for each page
bodyChunks.add(0, "javascript:(function() { document.body.innerHTML += '<div id=\"separator\" style=\"height:10px; margin-top:10px; margin-bottom:10px; background-color:#000000;\"></div>';}())");
loadUrl(bodyChunks.get(0));
for(int i = 1; i < bodyChunks.size(); i++)
{
String jsResult = "javascript:(function() { document.body.innerHTML += '" + bodyChunks.get(i) + "';}())";
loadUrl(jsResult);
}
reloadFlag = true;
}
EDIT:
Also:
First, the single quotes in the String have to be escaped as \':
body = body.replace("'", "\\'");
Then all newline characters have to be removed:
body = body.replaceAll(System.getProperty("line.separator"), " ");
That solved all the problems.
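Putting those two fixes together, a small helper keeps the escaping in one place (a sketch; the method name is mine, not from the original code):
private static String escapeForJsLiteral(String body) {
    // Escape single quotes so the chunk can sit inside a '...'-quoted JS string,
    // then strip newlines, which break the javascript: URL passed to loadUrl().
    return body.replace("'", "\\'")
               .replaceAll(System.getProperty("line.separator"), " ");
}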
