If I want to add an isEmpty method to all JavaScript arrays, I would use the following code:
Array.prototype.isEmpty = function() {
    return this.length == 0;
};
Assume this code is in a file foo.js. If I want isEmpty to be available on all pages of a web site, would I need to include foo.js in all the HTML files? In other words, do the prototypes get "reset" whenever the user navigates to a different page?
Thanks,
Don
Yes, you will need to include your code on each page load.
Think of each page load as a compile/linking cycle. All the various bits of JavaScript on the page are linked together [1] and then executed as one giant program. The next time a page is loaded, the default JavaScript objects start in a fresh state.
[1] Linked together in a brain-dead "every piece of code shares the same global namespace" fashion
Yes, you will have to modify the prototype on each page load.
Yes, HTTP is stateless, so each page is loaded separately.
However, adding to Array.prototype isn't a good idea: if you enumerate an array with a for...in loop, the added method shows up along with the elements, which can get you into trouble.
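A minimal sketch of that pitfall (and the usual hasOwnProperty guard), using the isEmpty extension from the question:

Array.prototype.isEmpty = function () {
    return this.length === 0;
};

var arr = [10, 20];

for (var key in arr) {
    console.log(key);            // logs "0", "1", and also "isEmpty"
}

for (var key in arr) {
    if (arr.hasOwnProperty(key)) {
        console.log(key);        // logs only "0" and "1"
    }
}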
There is a discussion in our team about whether we should forbid exposing "ElementFinder" and "ElementArrayFinder" in our Page Objects.
The main reason is the following quote by Simon Stewart, from Page Objects Done Right (Selenium Conference 2014, slide 7):
If you have WebDriver APIs in your test methods... You're doing it wrong.
SeleniumHQ/selenium/PageObjects https://github.com/SeleniumHQ/selenium/wiki/PageObjects
The approach is correct for transition functions that return another Page Object, or when multiple selections happen on one page, so we can return the Page and chain these calls.
But when we are doing something really simple, there is a lot of boilerplate to write just to test that an element exists and has text.
Creating functions that merely mimic "ElementFinder" does not make much sense to me.
Most of the time it's faster and more readable to expose the element and use the built-in functions of "ElementFinder" like ".getText()". Do you think it's better to make the element private and expose only a "getElementText()" function?
What is best practice? Do you forbid exposing "ElementFinder" and "ElementArrayFinder" in Page Objects?
We use Cucumber, so tests are divided between:
features
step definitions
helpers
page files
All the logic and asserts are in step definitions and helpers. The step definitions invoke page methods when a screen value is needed. The page methods return POJOs. All details about finding elements are encapsulated in the page files. The reasons are obvious; when (not if) the page HTML changes you only have to make the fix in one place. Once you break encapsulation you have the start of a maintenance nightmare.
A pattern that I often use is to create a helper class e.g. OfficeInfo, to contain, say, all the td's in a tr for a table of offices. The page method would return
List<OfficeInfo>
i.e. one list element for each tr. The office information is now decoupled from the details of how that information is displayed on the page. If new td's are added, you update class OfficeInfo, update the page method, and add the new step definition, without impacting all the other places where OfficeInfo (but not the new td) is used.
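A hedged Protractor-style sketch of the same idea, since the question was about ElementFinder; all selectors and names here are assumptions, not from the original posts:

var OfficesPage = function () {
    // page method: map each <tr> to a plain object (a POJO),
    // so callers never touch an ElementFinder
    this.getOffices = function () {
        return element.all(by.css('#offices tbody tr')).map(function (row) {
            return {
                name: row.element(by.css('td.name')).getText(),
                city: row.element(by.css('td.city')).getText()
            };
        });
    };
};

A step definition can then assert on the returned plain objects without knowing anything about the selectors, so a change to the page HTML only touches the page file.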
I am creating a userscript for a game that will modify certain parts of a page in real time to help the user know how long they must wait to perform certain actions.
The problem I am running into is that the game has some AJAX already built in: every three seconds it calls the jQuery.getJSON() function to grab information and update things. My script needs to make it appear to the end user as if the page were updating in real time, rather than every three seconds, and to add extra information, without adding extra requests (the game's owners would not like that).
To do this I need to override the default behavior of the page: I need to change the callback function of the jQuery.getJSON() call to add my functionality, or at least disable it completely so I can write a new one. And it isn't as easy as assigning a new function to the old name, because the callback has no name; it is built inline within the jQuery.getJSON() call. Is this possible?
The page script is contained in a separate .js file btw, if that makes any difference.
jQuery.getJSON() returns a jqXHR object; if the call is assigned to a variable, you can use that handle to modify the request by adding or changing its callbacks.
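For example, a sketch assuming the page keeps that handle around (variable names are hypothetical):

var jqxhr = jQuery.getJSON("myurl", function (data) {
    // the page's original callback
});

jqxhr.done(function (data) {
    // your extra callback: runs after the original one,
    // with the same parsed JSON payload
});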
If it is not exposed as a variable, but instead is simply called like so
... js blah ...
jQuery.getJSON("myurl",function(){
more blah
});
... more blah ...
... then I believe you're up a creek without a paddle, as that is an anonymous callback with no handle. The only way, at that point, would be to try to override it by loading another script in place over the first one, but I am really uncertain how stable that would leave the browser environment.
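That said, if your userscript can run before the page's own script does, one commonly used workaround (not part of the original answer) is to wrap jQuery.getJSON itself, so that even anonymous inline callbacks pass through your code; a sketch:

var originalGetJSON = jQuery.getJSON;
jQuery.getJSON = function () {
    // call through to the real implementation, preserving all arguments
    var jqxhr = originalGetJSON.apply(jQuery, arguments);
    jqxhr.done(function (data) {
        // your extra processing runs alongside the page's own callback
    });
    return jqxhr;
};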
See the jQuery reference pages http://api.jquery.com/jQuery.getJSON/ and http://api.jquery.com/Types/#jqXHR for more details on how the $.ajax() system works.
I have a page that can contain a different inner page at any given time.
Each inner page needs a specific js file, which is loaded dynamically using Headjs.
To avoid collisions (of method and object names), I would like to unload the old js file before loading a new one.
Does anyone know how to do it, or if it is even possible? Thanks!
No. There is no such thing as "unloading" a JavaScript file. Once it's loaded, it's there for the lifetime of the page.
But there are other tricks to avoid "collisions", mainly clean code. Some examples for your case would be:
1- Use namespaces
2- Avoid global variables
3- Define everything within a scope, and understand scopes
4- Use understandable, descriptive variable names; avoid variables named s, i, j, etc. unless you are used to that and know what you're doing. Also be aware that JavaScript files are downloaded when a page is requested, so huge names for variables and classes add extra traffic.
Let's say you have functions with the same name that live in different scopes/namespaces.
Example:
var Obj1 = { func: function () { /* first implementation */ } };
var Obj2 = { func: function () { /* second implementation */ } };

var myclass;
if (something) myclass = Obj1;          // "something" is a placeholder condition
else if (somethingelse) myclass = Obj2;
myclass.func();                         // calls whichever implementation was chosen
So here you go: two functions with the same name that live in different objects, and so you can switch between different implementations.
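And for point 1, a minimal namespacing sketch (every name here is a placeholder):

var MyApp = MyApp || {};                // one global object; everything else hangs off it

MyApp.innerPageA = (function () {
    var counter = 0;                    // private: invisible outside this closure
    function init() { counter += 1; }
    return { init: init };              // expose only what is needed
})();

MyApp.innerPageA.init();                // no other global names were created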
Hope this helps
I am currently making a sort of web app, and part of it works by dynamically loading and adding js scripts to the page. For this I am using jQuery's $.getScript() method, which loads them in.
I have it set as cached.
With my program, if a script already exists it is loaded again anyway, from what appears to be cache. What I am wondering, though, is how much this affects the site and its performance. Does a newly loaded script that has the same src as an existing one overwrite the previous one, or is the new one added alongside the old one?
Furthermore, as my site is an AJAX site, it's possible for several scripts from different pages to be loaded up over time. Are there any browser restrictions on how many scripts can be loaded?
It will affect site performance. Even if your script is cached on the client with an expiration set, the browser still needs to parse and execute the newly included script. More than that, there's a very good chance you will run into JavaScript errors, because your scripts will override variables already set by the previous version. JavaScript parsing and execution is still a blocking operation in all browsers, so while your file is being processed your UI will lock up.
To answer the second part of the question: as far as I know there's no limit on the number of JavaScript files on a given page. I've seen pages with over 200 scripts that didn't throw any exceptions.
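If re-execution is the worry, a hedged sketch of a load-once guard around $.getScript (all names here are made up):

var loadedScripts = {};                  // remembers which URLs were already fetched

function loadScriptOnce(src, callback) {
    if (loadedScripts[src]) {
        if (callback) callback();        // already executed once; don't run it again
        return;
    }
    jQuery.getScript(src, function () {
        loadedScripts[src] = true;
        if (callback) callback();
    });
}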
I think Ilya has provided some great points to consider, and I think that answers your specific question. If what you're trying to accomplish, however, is rerunning a $(document).ready() handler, you could do:
(function () {
    // toggle a class on the affected element; used on page load and on demand
    var myreadyfunction = function () {
        $('#affected').toggleClass('blue');
    };

    // run once when the DOM is ready...
    $(document).ready(myreadyfunction);

    // ...and again whenever the button is clicked
    $(document).ready(function () {
        $('button').click(myreadyfunction);
    });
})();
http://jsfiddle.net/Z97cm/
I've scoped it inside an anonymous (function(){})(); to keep it out of the global scope, so you might want to reconsider that if you need to access the function from outside that scope. But it gives you the general idea of what I was talking about.
So on my page I have some little scripts which don't really need to load as soon as you visit the site; in fact, the user might not need them at all in their entire session.
Also, according to this: http://code.google.com/speed/page-speed/docs/payload.html#DeferLoadingJS it's not good practice either.
So, for example, currently I have everything in 'when DOM ready':
$(function() {
// most of the code of which is not needed
});
If I don't place the code inside the DOM-ready handler, most of the time it isn't executable. So I thought of making separate functions for each snippet.
For example:
function snippet1() {
// Code here
}
and then when I need that snippet, I load it on demand with a mouse click. (Not always a mouse click; it depends on what I need to load.)
For example:
$('#button1').click(function() {
snippet1();
});
So my question is: is this the way to load functions asynchronously so as to reduce the page load time, or is there a better way? I haven't read about this anywhere; I just thought of it.
Note that I am aware of async script loading, but that is not an option here, since I can combine all the functions into just one js file, which will be cached, so the page load time will be less than loading separate async js files each time.
You're mixing several things:
Page load time
JavaScript parsing time - After the script is loaded, it has to be parsed (error checking, compiling to byte code, etc)
Function execution time
You can't do much about the page load time as long as everything lives in one script. You may consider splitting it into two parts: one that you always need, and an "optional" part that is rarely needed. Load the rare part in the background.
Note that page load times become pretty moot after the site has been visited once and you've made sure the browser can cache the files.
If you want to reduce parse times, you have two options:
Don't load parts that you don't need.
Compress the scripts. Google has a great tool for that: the Closure Compiler. Besides making your scripts faster, it will also check for many common mistakes.
The last part is the execution times. These are only relevant if the functions are called at all, and if they do a lot of work when called. In your case, I guess you can ignore this point.
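A sketch of "load the rare functions in the background" mentioned above (the file name is an assumption):

$(window).on('load', function () {
    // fetch the rarely-needed part only after the main page is fully loaded,
    // so it never blocks the initial render
    jQuery.getScript('rarely-needed-part.js');
});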
Yes, as much as possible you should define objects, functions, etc. outside of the document.ready wrapper. Some devs will define absolutely everything outside the wrapper and then just call an init() function inside the wrapper to load everything else. I am one such dev.
As for async, this doesn't do true async loading, but it speeds up your page since there is much less work to do on page load.
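A minimal sketch of that init() pattern (the function names are made up for illustration):

function bindMenuHandlers() { /* attach click handlers, etc. */ }
function restoreUserPrefs()  { /* read saved settings, etc. */ }

function init() {
    bindMenuHandlers();
    restoreUserPrefs();
}

// the ready wrapper contains nothing but the kick-off call
$(document).ready(init);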
In general, if you're not using a script loader like requireJS or yepnope, it's a good idea to put all your script references – or at least those that don't need to be run instantly – at the end of your body so the page renders before the resources that aren't going to be run until after page load anyway.
I would load all additional resources using RequireJS ( http://requirejs.org/ ) or a similar library.
Put everything that you don't need immediately into a separate script, and load it after the main content is loaded.
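A minimal RequireJS sketch (the module path is an assumption):

// the module is fetched only when this code runs, not at page load
require(['optional/feature'], function (feature) {
    feature.init();   // runs once the module has been downloaded and evaluated
});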