I am using Selenium RC with IE 6, and XPath locators are terribly slow. So I am trying to see whether javascript-xpath actually speeds things up, but I could not find enough clear documentation on how to use native XPath libraries.
I am doing the following:
protected void startSelenium(String testServer, String appName, String testInBrowser) {
    selenium = new DefaultSelenium("localhost", 4444, "*" + testInBrowser, testServer + "/" + appName + "/");
    echo("selenium instance created: " + selenium.getClass());
    selenium.start();
    echo("selenium instance started... " + testServer + "/" + appName + "/");
    selenium.runScript("lib/javascript-xpath-latest-cmp.js");
    selenium.useXpathLibrary("javascript-xpath");
    selenium.allowNativeXpath("true");
}
This results in a speed improvement for XPath locators, but the improvement is not consistent. On some runs the time taken for a locator is halved, while on others it is randomly high.
Am I missing any configuration step here? It would be great if someone who has had success with this could share their views and approach.
Thanks,
Nirmal
Solution:
protected void startSelenium(String testServer, String appName, String testInBrowser) {
    selenium = new DefaultSelenium("localhost", 4444, "*" + testInBrowser, testServer + "/" + appName + "/");
    echo("selenium instance created: " + selenium.getClass());
    selenium.start();
    echo("selenium instance started... " + testServer + "/" + appName + "/");
    selenium.useXpathLibrary("javascript-xpath");
}
I implemented this myself and I only had to do selenium.useXpathLibrary("javascript-xpath"). In my tests, the javascript xpath was about 7x faster on IE 8. Haven't really tested on anything else, but we only use it for IE.
I have never done this, but I think that you may need to do something like:
// Add the library to the page, since runScript just does an eval on the JS
selenium.runScript("document.body.appendChild(document.createElement('script')).src = 'path/to/lib';");
selenium.useXpathLibrary("javascript-xpath");
selenium.allowNativeXpath("true");
You will need to add the library to the page and then load it.
However, I would recommend using CSS Selectors instead of XPath Selectors as they are a lot faster in Selenium. You can see how to use different locator strategies here. I have seen tests become at least twice as fast as the original XPath.
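For example, here is a rough sketch of the difference in Selenium RC; the element and locators below are made up, so adjust them to your page:
// XPath locator, which IE evaluates slowly
selenium.click("xpath=//div[@id='menu']//a[@class='settings']");
// The same element addressed with a CSS locator, which is usually much faster
selenium.click("css=div#menu a.settings");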
Related
I've searched several posts on this and the methods all would normally work, however...
I'm in an environment where we use 3 different servers (Dev, UAT and Prod). All 3 have differing URL structures:
http://dev.com/myusername/applicationname (Dev)
http://uat.com/applicationname (UAT)
http://prod.com/applicationname (Prod)
The issue I'm having is getting the URL right when I use a dropdown. I populate the dropdown with the following code:
@Html.DropDownList("Owners", ViewData["Owners"] as SelectList, new { onchange = "document.location.href='/Builds/' + this.options[this.selectedIndex].value;" })
It handles the event just fine, but produces a URL of:
http://dev.com/Builds/value. I need it to be http://dev.com/myusername/application/Builds/value.
Any ideas on how I can accomplish this?
Get the base uri then append your stuff to that:
string baseUrl = Request.Url.Scheme + "://" + Request.Url.Authority +
Request.ApplicationPath.TrimEnd('/') + "/";
Note that the trim is there because the path may or may not end with a trailing slash depending on where the application is hosted (root or subdirectory).
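Applied to the dropdown from the question above, that could look roughly like this (a sketch; it assumes a Razor view that computes baseUrl as shown and that the Builds controller sits directly under the application root):
@{
    var baseUrl = Request.Url.Scheme + "://" + Request.Url.Authority +
                  Request.ApplicationPath.TrimEnd('/') + "/";
}
@Html.DropDownList("Owners", ViewData["Owners"] as SelectList,
    new { onchange = "document.location.href='" + baseUrl + "Builds/' + this.options[this.selectedIndex].value;" })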
You might also be able to use
HttpContext.Current.Server.MapPath("~/Builds/")
This might also work:
@HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority)@Url.Content("~/Builds/")
I am having some difficulty aligning my paths without hardcoding them in JavaScript. I am running an ASP.NET MVC 3 web application.
If my path is of the form
var url = 'http://serverNameHardcode/websiteNameHardcode/service/service?param1=' + param;
Then things work fine when I do
$.get(url, {}, function (data) { alert('callback success'); }, 'json');
I would like to create a relative path. I tried
var url = 'service/service?param1=' + param;
And this works when I run locally and also in Firefox, but not in IE7. When I publish to the server without the hardcoded path, the callback never fires. I know MVC 3 adds some complexity to routing, but I do not know whether it applies to this situation, so I tagged the question as such.
How should I set up my path so I don't need hardcoded values?
Just write out the app path as a global JS variable from your master view, then compose links as
APPPATH + "path/whatever"
Just had to solve this for one of my jQuery plugins, where it is preferable not to modify anything global (i.e. outside the scope of the plugin's use), so I had to disregard the marked answer.
I also found that because I host DEV locally in IIS I could not use a root-relative path (as localhost is not the root).
The solution I came up with extended what I had already started with: a data-controller attribute specifying which controller to use in the element I am applying my plugin to. I find it preferable to data-drive the controller names so the components can be more easily reused.
Previous:
<div data-controller="Section">
Solution:
<div data-controller="@Url.Content("~/Section")">
This injects the server root (e.g. /Test.WindowsAzure.Apr2014/) before the controller name, so I wind up with /Test.WindowsAzure.Apr2014/Section, which is perfect for then appending actions and other parameters as you have. It also avoids having an absolute path in the output (which takes up extra bytes for no good reason).
In your case use something like:
// Assuming $element points to the element your plugin/code is attached to...
var baseUrl = $element.data('controller');
var url = baseUrl + '/service?param1=' + param;
Update:
Another approach we now use, when we do not mind injecting a global value, is to Razor-inject a single global JavaScript variable onto window in the layout file with:
<script>
window.SiteRoot = "@Url.Content("~/")";
</script>
and use it with
var url = window.SiteRoot + '/service?param1=' + param;
One option:
var editLink = '@Url.Action("_EditActivity", "Home")';
$('#activities').load(editLink + "?activityID=" + id);
another example:
var actionURL = '@Url.Action("_DeleteActivity", "Home")';
$('#activities').load(actionURL + "?goalID=" + gID + "&activityID=" + aID);
If you don't need to add to the string:
$('#activities').load('@Url.Action("_Activities", "Home", new { goalID = Model.goalID }, null)');
I really need the path to get this to work; maybe it's IE7. Who knows. But this worked for me.
Grab the URL and store it somewhere. I chose to implement the data attribute from HTML5.
<div id="websitePath" data-websitePath='#Request.Url.GetLeftPart(System.UriPartial.Authority)#Request.ApplicationPath'></div>
Then when you need to perform some AJAX or otherwise use a URL in JavaScript, you simply refer to the stored value. Also, there are differences between versions of IIS (not cool if your devbox is IIS 5 and your server is IIS 7): @Request.ApplicationPath may or may not come back with a '/' appended to the end. So, as a workaround, I also trim the last character if it is '/', and then include '/' as part of the URL.
var urlprefix = $('#websitePath').data('websitepath');
urlprefix = urlprefix.replace(/\/$/, "");
var url = urlprefix + '/service/service?param1=' + param;
While the accepted answer is correct, I would like to add a suggestion (i.e. how I do it).
I am using MVC, and any AJAX request goes to a controller. My controllers have services, so if a service call is required the controller will take care of that.
So what's my point? If AJAX always communicates with a controller, then I would like to let MVC routing resolve the path for me. What I write in JavaScript for the URL is something like this:
url: 'controller/action'
This way there is no need for the root path etc...
Also, you can put this in a separate JavaScript file and it will still work, whereas @Url.Content needs to be called in the view.
I have been trying to create a simple script in FireWatir that will convert the entire current document DOM (including JavaScript-generated code) to an XML representation.
Following leads on the web, I've come up with this script:
require 'rubygems'
require 'firewatir'
browser = Watir::Browser.new
browser.goto('http://www.google.com/')
browser.text_field(:id, 'lst-ib').set('hello')
browser.button(:name, 'btnG').click
puts browser.execute_script("new XMLSerializer().serializeToString(document)")
However, running it in Firefox 3.6 resulted in this error:
c:/Ruby192/lib/ruby/gems/1.9.1/gems/firewatir-1.9.2/lib/firewatir/jssh_socket.rb:19:in `js_eval': XMLSerializer is not defined (JsshSocket::JSReferenceError)
    from c:/Ruby192/lib/ruby/gems/1.9.1/gems/firewatir-1.9.2/lib/firewatir/firefox.rb:136:in `execute_script'
    from test.rb:9:in `<main>'
If I enter this line:
javascript:window.open('about:blank').document.write('<pre>' + unescape((new XMLSerializer()).serializeToString(document).replace(/</g, '&lt;')) + '</pre>')
into the FF location box, I get a page with the desired XML. So XMLSerializer has to be defined somewhere; it just seems out of reach for my JS code.
How can I get this to work?
Not sure what you mean by "location box", but if that is the address bar (the one that says http://stackoverflow.com/... on this page), then try this:
browser.goto "javascript:window.open('about:blank').document.write('<pre>' + unescape((new XMLSerializer()).serializeToString(document).replace(/</g, '&lt;')) + '</pre>')"
At the core of it, I suspect this might be an FF thing to do with the boundaries of the 'sandbox' that JavaScript is running in. The browser itself may know about the serializer but choose not to give JavaScript any access to it.
However, there may be more than one way to skin the cat. If your second bit of code provides you with a page that is rendered as text in XML syntax, why not do that first, and then just use the resulting page via
puts browser.text
If you are developing an extension for one of the Mozilla applications (e.g. Firefox, Thunderbird, etc.) you define an extension id in the install.rdf.
If for some reason you need to know the extension id, e.g. to retrieve the extension dir in the local file system (1) or to send it to a webservice (usage statistics) etc., it would be nice to get it from the install.rdf rather than hardcoding it in your JavaScript code.
But how can I access the extension id from within my extension?
1) example code:
var extId = "myspecialthunderbirdextid@mydomain.com";
var filename = "install.rdf";
var file = extManager.getInstallLocation(extId).getItemFile(extId, filename);
var fullPathToFile = file.path;
I'm fairly sure the 'hard-coded ID' should never change throughout the lifetime of an extension. That's the entire purpose of the ID: it's unique to that extension, permanently. Just store it as a constant and use that constant in your libraries. There's nothing wrong with that.
What IS bad practice is using the install.rdf, which exists for the sole purpose of... well, installing. Once the extension is developed, the install.rdf file's state is irrelevant and could well be inconsistent.
"An Install Manifest is the file an Add-on Manager-enabled XUL application uses to determine information about an add-on as it is being installed" [1]
To give it an analogy, it's like accessing the memory of a deleted object from an overflow. That object still exists in memory, but it's no longer logically relevant, and using its data is a really, really bad idea.
[1] https://developer.mozilla.org/en/install_manifests
Like lwburk, I don't think it's available through Mozilla's APIs, but I have an idea that works, although it feels like a complex hack. The basic steps are:
Set up a custom resource url to point to your extension's base directory
Read the file and parse it into XML
Pull the id out using XPath
Add the following line to your chrome.manifest file
resource packagename-base-dir chrome/../
Then we can grab and parse the file with the following code:
function myId(){
    var req = new XMLHttpRequest();
    // synchronous request
    req.open('GET', "resource://packagename-base-dir/install.rdf", false);
    req.send(null);
    if (req.status !== 0){
        throw("file not found");
    }
    var data = req.responseText;
    // this is so that we can query xpath with namespaces
    var nsResolver = function(prefix){
        var ns = {
            "rdf" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
            "em" : "http://www.mozilla.org/2004/em-rdf#"
        };
        return ns[prefix] || null;
    };
    var parser = CCIN("@mozilla.org/xmlextras/domparser;1", Ci.nsIDOMParser);
    var doc = parser.parseFromString(data, "text/xml");
    // you might have to change this xpath expression a bit to fit your setup
    var myExtId = doc.evaluate("//em:targetApplication//em:id", doc, nsResolver,
        Ci.nsIDOMXPathResult.FIRST_ORDERED_NODE_TYPE, null);
    return myExtId.singleNodeValue.textContent;
}
I chose to use an XMLHttpRequest (as opposed to simply reading from a file) to retrieve the contents since, in Firefox 4, extensions aren't necessarily unzipped. However, XMLHttpRequest will still work if the extension remains packed (haven't tested this, but have read about it).
Please note that resource URLs are shared by all installed extensions, so if packagename-base-dir isn't unique, you'll run into problems. You might be able to leverage Programmatically adding aliases to solve this problem.
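For reference, a rough sketch of what a programmatic alias could look like with the legacy XPCOM API (the alias name and file path below are made up, and this is untested):
// Register a resource:// alias at runtime instead of via chrome.manifest
var ioService = Components.classes["@mozilla.org/network/io-service;1"]
                          .getService(Components.interfaces.nsIIOService);
var resProt = ioService.getProtocolHandler("resource")
                       .QueryInterface(Components.interfaces.nsIResProtocolHandler);
// Point the alias at the extension's base directory (a file:// URI obtained elsewhere)
var aliasURI = ioService.newURI("file:///path/to/extension/dir/", null, null);
resProt.setSubstitution("my-unique-extension-alias", aliasURI);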
This question prompted me to join StackOverflow tonight, and I'm looking forward to participating more... I'll be seeing you guys around!
As Firefox now just uses Chrome's WebExtension API, you can use @serg's answer at How to get my extension's id from JavaScript?:
You can get it like this (no extra permissions required) in two different ways:
Using runtime api: var myid = chrome.runtime.id;
Using i18n api: var myid = chrome.i18n.getMessage("@@extension_id");
I can't prove a negative, but I've done some research and I don't think this is possible. Evidence:
This question, which shows that the nsIExtensionManager interface expects you to retrieve extension information by ID
The full nsIExtensionManager interface description, which shows no method that helps
The interface does allow you to retrieve a full list of installed extensions, so it's possible to retrieve information about your extension using something other than the ID. See this code, for example:
var em = Cc['@mozilla.org/extensions/manager;1']
           .getService(Ci.nsIExtensionManager);
const nsIUpdateItem = Ci.nsIUpdateItem;
var extension_type = nsIUpdateItem.TYPE_EXTENSION;
items = em.getItemList(extension_type, {});
items.forEach(function(item, index, array) {
    alert(item.name + " / " + item.id + " version: " + item.version);
});
But you'd still be relying on hardcoded properties, of which the ID is the only one guaranteed to be unique.
Take a look at this add-on; maybe its author could help you, or you can figure it out yourself:
[Extension Manager] Extended is very simple to use. After installing, just open the extension manager by going to Tools and then clicking Extensions. You will now see next to each extension the id of that extension.
(Not compatible yet with Firefox 4.0)
https://addons.mozilla.org/firefox/addon/2195
Possible Duplicate:
force browsers to get latest js and css files in asp.net application
I'm working with someone else's code, so I don't know the whole picture, and I don't even know MVC that well, but here's the problem...
In Site.Master there's a
<%= Html.IncludeJs("ProductPartial")%>
which produces this line in the final mark-up
<script type="text/javascript" src="/Scripts/release/ProductPartial.js"></script>
I made some changes in the JS file, but the old one is obviously cached by the browser, so the changes won't show up until the user refreshes. The usual workaround is to add a version tag at the end of the script source path, but I'm not sure how to do that in this case.
Any suggestions?
Why not write your own Html helper extension method, and make it output the version number of your application assembly? Something along these lines should do the trick:
public static MvcHtmlString IncludeVersionedJs(this HtmlHelper helper, string filename)
{
    var version = Assembly.GetExecutingAssembly().GetName().Version;
    return MvcHtmlString.Create(filename + "?v=" + version);
}
You can then increment the version number of the assembly whenever you release a new version to your users, and their caches will be invalidated across the application.
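For example, a sketch of how this could be wired up (the version number and script path below are just illustrative):
// In Properties/AssemblyInfo.cs - bump this on each release so the query string changes
using System.Reflection;
[assembly: AssemblyVersion("1.2.0.0")]
and in Site.Master:
<script type="text/javascript" src="<%= Html.IncludeVersionedJs("/Scripts/release/ProductPartial.js") %>"></script>
which would render as /Scripts/release/ProductPartial.js?v=1.2.0.0.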
I solved this by tacking a last-modified timestamp onto the scripts as a query parameter.
I did this with an extension method, which I use in my CSHTML files. Note: this implementation caches the timestamp for 1 minute so we don't thrash the disk quite so much.
Here is the extension method:
public static class JavascriptExtension {
    public static MvcHtmlString IncludeVersionedJs(this HtmlHelper helper, string filename) {
        string version = GetVersion(helper, filename);
        return MvcHtmlString.Create("<script type='text/javascript' src='" + filename + version + "'></script>");
    }

    private static string GetVersion(this HtmlHelper helper, string filename)
    {
        var context = helper.ViewContext.RequestContext.HttpContext;
        if (context.Cache[filename] == null) {
            var physicalPath = context.Server.MapPath(filename);
            var version = "?v=" +
                new System.IO.FileInfo(physicalPath).LastWriteTime
                    .ToString("yyyyMMddhhmmss");
            // Cache under the same key we check above so the lookup actually hits,
            // and let the entry expire after one minute.
            context.Cache.Add(filename, version, null,
                DateTime.Now.AddMinutes(1), TimeSpan.Zero,
                CacheItemPriority.Normal, null);
            return version;
        }
        else {
            return context.Cache[filename] as string;
        }
    }
}
And then in the CSHTML page:
@Html.IncludeVersionedJs("/MyJavascriptFile.js")
In the rendered HTML, this appears as:
<script type='text/javascript' src='/MyJavascriptFile.js?v=20111129120000'></script>
Here are some links already on this topic:
Why do some websites access specific versions of a CSS or JavaScript file using GET parameters?
force browsers to get latest js and css files in asp.net application
Your version strategy really isn't important. As long as the file name is different, the browser will be forced to get the new script. So even this would work:
<%= Html.IncludeJs("ProductPartialv1")%>
ProductPartialv1.js
I have been using this technique for important JavaScript and CSS changes (CSS is also cached by the browser) - so I update the template to use the newer version and I'm safe in the knowledge that if the new HTML is used, so is the new script and CSS file.
It is "in action" on http://www.the-mag.me.uk/ - where I just increment a numeric suffix on the files.
It turns out IncludeJs is a helper method for automatically including compressed JS files when in release mode: LINK.
So I just have to modify that method a bit to include a version number. Sorry about the confusion.