I have scripts that need to wait for certain conditions to be met before they run - for example, wait for another script to be loaded, or wait for a data object to be created.
How can I manage such dependencies? The only way I can think of is to use setTimeout to loop in short intervals and check the existence of functions or objects. Is there a better way?
And if setTimeout is the only choice, what is a reasonable time interval to poll my page? 50 ms, 100 ms?
[Edit] Some of my scripts collect data, either from the page itself or from web services, sometimes from a combination of multiple sources. The data can be ready at any time, either before or after the page has loaded. Other scripts render the data (for example, to build charts).
[Update] Thanks for the useful answers. I agree that I shouldn't reinvent the wheel, but if I use a library, I'd at least like to understand the logic behind it (is it just a fancy timeout?) so I can anticipate the performance impact on my page.
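For reference, this is the kind of setTimeout polling I had in mind (a rough sketch; waitFor is just a name I made up, and 100 ms is an arbitrary default):

function waitFor(conditionFn, callback, interval) {
    interval = interval || 100;
    (function poll() {
        if (conditionFn()) {
            callback();
        } else {
            setTimeout(poll, interval);
        }
    })();
}

// e.g. wait until a data object exists before rendering charts
waitFor(
    function () { return typeof window.myData !== "undefined"; },
    function () { /* render the charts here */ }
);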
You could have a function call like loaded(xyz); at the end of the scripts that are being loaded. This function would be defined elsewhere and set up to call registered callbacks based on the value of xyz. xyz can be anything: a simple string to identify the script, or a complex object or function, or whatever.
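A rough sketch of that registry idea (the names loaded/whenLoaded are purely illustrative):

var loadedScripts = {};
var loadCallbacks = {};

// each script calls this at its end, e.g. loaded("charts");
function loaded(name) {
    loadedScripts[name] = true;
    (loadCallbacks[name] || []).forEach(function (cb) { cb(); });
}

// other code registers interest; runs immediately if the script is already loaded
function whenLoaded(name, cb) {
    if (loadedScripts[name]) {
        cb();
    } else {
        (loadCallbacks[name] = loadCallbacks[name] || []).push(cb);
    }
}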
Or just use jQuery.getScript(url [, success(data, textStatus)] ).
For scripts that have dependencies on each other, use a module system like RequireJS.
For loading data remotely, use a callback, e.g.
$.get("/some/data", "json").then(function (data) {
// now i've got my data; no polling needed.
});
Here's an example of these two in combination:
// renderer.js
define(function (require, exports, module) {
    exports.render = function (data, element) {
        // obviously more sophisticated in the real world.
        element.innerText = JSON.stringify(data);
    };
});

// main.js
define(function (require, exports, module) {
    var renderer = require("./renderer");

    $(function () {
        var elToRenderInto = document.getElementById("render-here");

        $("#fetch-and-render-button").on("click", function () {
            $.get("/some/data", "json").then(function (data) {
                renderer.render(data, elToRenderInto);
            });
        });
    });
});
There are many frameworks for this kind of thing.
I'm using Backbone at the moment: http://documentcloud.github.com/backbone/
Friends have also recommended Knockout.js: http://knockoutjs.com/
Both of these use an MVC pattern to update views once data has been loaded by a model.
[update] I think at their most basic level these libraries are using callback functions and event listeners to update the various parts of the page.
e.g.
model1.loadData = function(){
    var self = this;
    $.get('http://example.com/model1', function(response){
        // "this" inside the callback is not the model, so keep a reference
        self.save(response);
        self.emit('change');
    });
};

model1.bind('change', view1.update);
model1.bind('change', view2.update);
I've used pxLoader, a JavaScript Preloader, which works pretty well. It uses 100ms polling by default.
I wouldn't bother reinventing the wheel here unless you need something really custom, so give that (or any JavaScript preloader library really) a look.
These days I find myself putting a lot of my code in $(document).ready(), which does not seem clean to me. For example, if I am creating something that will query my database via AJAX, return the result, and append it to my list, I would do something like this:
$(function(){
    //Initialize my DOM elements
    var $MyList = $("#MyList"),
        $MyButton = $("#MyButton");

    //Add my Click event
    $MyButton.click(function(){
        $.ajax({
            type: 'POST',
            url: "/lists/getmylist",
            contentType: "application/json",
            success: function(results){
                //Parse my results and append them using my favorite templating helper
                var jsonResults = $.parseJSON(results);
                $MyList.mustache("my-template", jsonResults);
            }
        });
    });
});
Now I know this is a small example, but it starts to get really big and messy when I have multiple click events, AJAX requests, etc. It all ends up going in my document ready. I know that I could put all my AJAX requests in an external JavaScript file to help make it cleaner, but is this architecture OK in general? It just seems really messy. I have seen others use plugin architectures or init functions. I usually have this document ready at the bottom of all my pages and just throw in whatever is necessary to make my page work correctly. Is this a good way to structure my JS?
I think the addition of some model objects and general object-oriented programming principles might go a long way here. If you break your data fetching and storing out into model classes, it should help a lot.
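For instance, here's a rough sketch of pulling the data access out of the document-ready handler (ListModel is a made-up name; the URL and templating call are taken from your example):

// a hypothetical model object that owns fetching and storing the list data
function ListModel() {
    this.items = [];
}

ListModel.prototype.fetch = function (onSuccess) {
    var self = this;
    $.ajax({
        type: 'POST',
        url: "/lists/getmylist",
        contentType: "application/json",
        success: function (results) {
            self.items = $.parseJSON(results);
            if (onSuccess) { onSuccess(self.items); }
        }
    });
};

// the document-ready handler is now only thin wiring
$(function () {
    var model = new ListModel();
    $("#MyButton").click(function () {
        model.fetch(function (items) {
            $("#MyList").mustache("my-template", items);
        });
    });
});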
Here are some links that should get you started thinking about OO with Javascript.
Writing Object-Oriented JavaScript
Javascript Design Patterns
Javascript: prototypal inheritance
Javascript: prototypal inheritance 2
Another thing that might help would be to break the JavaScript into multiple files: one for global scripts included via a header on all your pages, and one script for each page that requires it.
Perhaps Backbone.js ( or one of the other frameworks ) could be part of the rescue you are looking for.
I found Backbone immensely helpful for organising some inherited spaghetti. Your starting point might be to transition your massive document ready into a Backbone view (or several).
Organise your scripts by separating out the views, collections, and models into individual files, then bundle and minify them into a single file so the browser only needs to make one request instead of many.
ASP.NET MVC4 can do the bundling for you; it also works similarly on MVC3.
This is just an example of a simple starting point; there are more advanced techniques (e.g. AMD, require.js) to reduce the script size per page, but with caching and gzip I find that the single everything-bundle is fine for a lot of cases.
As for your example, here's a possible Backbone implementation.
Remember to namespace out your code...
var app = app || {};

$(function ($) {

    // depending on your server setup you might be able to just override the url
    // and get away with what you want. Otherwise you might look into overriding
    // the save/fetch or sync methods of the collection
    app.MyListCollection = Backbone.Collection.extend({
        url: '/lists/getmylist'
    });

    app.MyListView = Backbone.View.extend({

        //bind the view to the existing element in the HTML.
        el: '#MyList',

        // your mustache template
        template: $('#list-template').html(),

        // hook up your event handlers to page components
        //(within the context of your el)
        events: {
            'click #MyButton': 'onMyButtonClick'
        },

        //called on object creation.
        initialize: function () {
            //you could create a collection here or pass it into the init function
            this.collection = new app.MyListCollection();

            //when the collection changes, call the render method
            this.listenTo(this.collection, 'reset', this.render);
        },

        // render is named by convention, but it's where you would "redraw" your view and apply your template
        render: function () {
            this.$el.html(
                Mustache.render(this.template, this.collection.toJSON()));
            return this;
        },

        //your click handler
        onMyButtonClick: function (e) {
            // pass { reset: true } so the 'reset' listener above fires (Backbone >= 1.0)
            this.collection.fetch({ reset: true });
        }
    });
});
Use your doc ready to spin up whatever Backbone functionality you need, and use it to bootstrap your JavaScript with any server-side data that you may have.
$(function () {
    // create an instance of your view
    new app.MyListView();

    //example bootstrap using Razor (note the quotes around the Razor expression)
    app.title = '@Model.Title';
});
I am an ActionScript 3 developer taking his first steps in building a large-scale JavaScript app.
So I understand modules and understand that AMD is a good pattern to use. I read about RequireJS and implemented it. However, what I still don't understand is how to achieve cross-module communication. I understand that there should be some kind of mediator...
I read articles and posts and still couldn't understand how to implement it simply.
Here is my code, simplified:
main.js
require(["Player", "AssetsManager"], function (player, manager) {
player.loadXML();
});
Player.js
define(function () {

    function parseXml(xml) {
        // NOW HERE IS THE PROBLEM -- how do I call AssetsManager from here???
        AssetsManager.queueDownload($(xml).find("prop").text());
    }

    return {
        loadXML: function () {
            //FUNCTION TO LOAD THE XML HERE, WHEN LOADED CALL parseXml(xml)
        }
    };
});
AssetsManager.js
define(function () {

    var arrDownloadQueue = [];

    return {
        queueDownload: function (path) {
            arrDownloadQueue.push(path);
        }
    };
});
Any "for dummies" help will be appreciated :)
Thank you.
To load modules from other modules that you define(), you simply set the first parameter to an array containing your module names. So let's say, in your code, you wanted to load Player.js into AssetsManager.js; you would simply include the string Player in the array.
This is possible because define's implementation is essentially equivalent to require, except that the callback passed to define is expected to return a value, and that value is added as a "module" to the list of dependencies that you can load up.
AssetsManager.js
define(['Player'], function (player) {
    //... Your code.
});
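Applied to the case in your question (Player needing AssetsManager), the same pattern would look roughly like this (a sketch, not the only way to wire it):

// Player.js
define(["AssetsManager"], function (assetsManager) {

    function parseXml(xml) {
        assetsManager.queueDownload($(xml).find("prop").text());
    }

    return {
        loadXML: function () {
            // load the XML here; when loaded, call parseXml(xml)
        }
    };
});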
However, if I may add to it, I personally prefer using require inside the callback passed to define to grab the dependency that you want to load, instead of passing parameters to the callback.
So here's my suggestion:
define(['Player'], function () {
    var player = require('Player');
});
And this is because it's much more in tune with CommonJS.
And this is how main.js would look formatted to be more CommonJS-friendly:
require(["Player", "AssetsManager"], function () {
var player = require('Player');
var manager = require('AssetsManager');
player.loadXML();
});
But the CommonJS way of doing things is just a personal preference. My rationale is that the order in which you list the dependency names in the array might change at any time, and I wouldn't want to have to step through both the array and the parameter list.
Another rationale of mine (though it's just pedantic) is that I come from the world of Node.js, where modules are loaded via require().
But it's up to you.
(This would be a reply to skizeey's answer, but I don't have enough reputation for that)
Another way of solving this problem, without pulling in Player's AssetManager dependency via require, is to pass around the AssetManager instance that main.js already has. One way of accomplishing this is to make Player's loadXML function accept an AssetManager parameter, which then gets passed on to parseXml, which uses it. Another way is for Player to keep a variable holding an AssetManager, which parseXml reads; it could be set directly, or via a function that stores an AssetManager in the variable, called, say, setAssetManager. This latter way has an extra consideration, though: you then need to handle the case where that variable has not been set before loadXML is called. This concept is generally called "dependency injection".
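A rough sketch of the first variant (loadXML accepting the AssetManager instance), reusing the modules from the question:

// Player.js
define(function () {

    function parseXml(xml, assetsManager) {
        assetsManager.queueDownload($(xml).find("prop").text());
    }

    return {
        loadXML: function (assetsManager) {
            // load the XML here; when loaded:
            // parseXml(xml, assetsManager);
        }
    };
});

// main.js -- passes along the instance it already has
require(["Player", "AssetsManager"], function (player, manager) {
    player.loadXML(manager);
});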
To be clear I'm not advising this over using AMD to load it in. I just wanted to provide you with more options; perhaps this technique may come in handier for you when solving another problem, or may help somebody else. :)
I have lots of functions and event handlers that are split across multiple JavaScript files which are included on different pages throughout my site.
For performance reasons I want to combine all of those files into 1 file that is global across the site.
The problem is that I will have event handlers attached to elements that won't necessarily exist on a given page, as well as clashing function names.
This is an example of a typical javascript file...
$(document).ready(function(){
    $('#blah').keypress(function(e){
        if (e.which == 13) {
            checkMap();
            return false;
        }
    });
});

function checkMap() {
    // code
}

function loadMap() {
    // code
}
I would need to separate this code into an object that is called only on that specific page.
My thoughts are I could re-write it like this:
(function($) {
    $.homepage = {
        checkMap: function(){
            // code
        },
        loadMap: function(){
            //code
        }
    };
})(jQuery);
And then on the page that requires it I could call $.homepage.checkMap() etc.
But then how would I declare event handlers like document.ready without containing it in its own function?
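One possible wiring I've sketched from my own example (the init name is just a convention, nothing special):

(function($) {
    $.homepage = {
        // page-specific setup, including the handlers I'd otherwise
        // put straight into document.ready
        init: function(){
            $('#blah').keypress(function(e){
                if (e.which == 13) {
                    $.homepage.checkMap();
                    return false;
                }
            });
        },
        checkMap: function(){
            // code
        },
        loadMap: function(){
            //code
        }
    };
})(jQuery);

// on the page that needs it:
$(function(){ $.homepage.init(); });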
First of all: depending on how much code you have, you should consider whether serving all your code in one file is really a good idea. It's okay to save HTTP requests, but if you load a huge chunk of code of which you use only 5% on a single page, you might be better off keeping those JS files separate (especially in mobile environments!).
Remember, you can let the browser cache those files. Depending on how frequently your code changes, and how much of the source changes, you might want to separate your code into stable core functionality and additional .js packages for special purposes. This way you might be better off traffic- and maintenance-wise.
Encapsulating your functions into different objects is a good idea to prevent unnecessary function-hoisting and global namespace pollution.
Finally you can prevent calling needless event handlers by either:
Introducing some kind of page type that helps you decide which functions need to be called (see the sketch after this list)
or
checking for the existence of certain elements, like this: if ($("specialelement").length > 0) { callHandlers(); }
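For example, a minimal sketch of the page-type idea (the data-page attribute and handler names are assumptions, not something from your code):

// <body data-page="homepage">
$(function () {
    var pageHandlers = {
        homepage: function () { /* bindings needed only on the homepage */ },
        about: function () { /* bindings needed only on the about page */ }
    };

    var page = $("body").data("page");
    if (pageHandlers[page]) {
        pageHandlers[page]();
    }
});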
To speed up your JS, you could use the Google Closure Compiler. It minifies and optimizes your code.
I think all you need is a namespace for your application. A namespace is a simple JavaScript object that could look like this:
var myApp = {
    homepage : {
        showHeader : function(){},
        hideHeader : function(){},
        animationDelay : 3400,
        start : function(){} // the function that starts the entire homepage logic
    },
    about : {
        ....
    }
}
You can split it in more files:
MyApp.js will contain the myApp = { } object, maybe with some useful utilities like object.create or what have you.
Homepage.js will contain myApp.homepage = { ... } with all the methods of your homepage.
The list goes on and on with the rest of the pages.
Think of it as packages. You don't need to use $ as the main object.
<script src="myapp.js"></script>
<script src="homepage.js"></script>
<-....->
<script>
myApp.homepage.start();
</script>
That would be the way I'd use the homepage object.
When compressing with YUI, you should have:
<script src="scripts.min.js"></script>
<script>
myApp.homepage.start();
</script>
Just to make sure I've understood you correctly: you have one JS file with all your code, but you still want to control what is executed on a certain page?
If that is the case, then the Terrific JS framework could interest you. It allows you to apply JavaScript functionality to a module. A module is a component on your web page, like the navigation, the header, or a currency converter. Terrific JS scans the DOM and executes the JS for the modules it finds, so you don't have to worry about execution. Terrific JS requires OOCSS naming conventions to identify modules. It's no quick solution to your problem, but it will help if you're willing to take the time. Here are some more links you may find useful:
Hello World Example:
http://jsfiddle.net/brunschgi/uzjSM/
Blogpost on using:
http://thomas.junghans.co.za/blog/2011/10/14/using-terrificjs-in-your-website/
I would use something like the YUI Compressor to merge all files into one minified min.js file. If you are looking for performance, both merging and minifying is the way to go. http://developer.yahoo.com/yui/compressor/
Example:
JavaScript input files: jquery.js, ads.js, support.js
Run YUI with jquery.js, ads.js, support.js and output it into min.js
JavaScript output file: min.js
Then use min.js in your HTML code.
I've been playing with MVC for a while now, but since the project I'm on is getting more and more wind in its sails, more people are being added to it. Since I'm in charge of hacking around to find some "best practices", I'm especially wary about possible misuses of JavaScript and would like to find out the best way to have our views and partial views play nicely with JavaScript.
For the moment, we have code that looks like this (simplified for example's sake):
<script type="text/javascript">
    function DisableInputsForSubmit() {
        if ($('#IsDisabled').is(':checked')) {
            $('#Parameters :input').attr('disabled', true);
        } else {
            $('#Parameters :input').removeAttr('disabled');
        }
    }
</script>
<%=Html.SubmitButton("submit", Html.ResourceText("submit"), New With {.class = "button", .onclick = "DisableInputsForSubmit(); if ($('#EditParameters').validate().form()) {SetContentArea(GetHtmlDisplay('SaveParameters', 'Area', 'Controller'), $('#Parameters').serialize());} return false;"})%><%=Html.ResourceIcon("Save")%>
Here, we're saving a form and posting it to the server, but we disable inputs we don't want to validate if a checkbox is checked.
A bit of context:
Please ignore the Html.Resource* bits; they're the resource management helpers.
The SetContentArea method wraps AJAX calls, and GetHtmlDisplay resolves a URL from an area, controller and action.
We've got Combres installed, which takes care of compressing, minifying and serving third-party libraries and what I've clearly identified as reusable JavaScript.
My problem is that if somebody else defines a function DisableInputsForSubmit at another level (say, the master page, or in another JavaScript file), problems may arise.
Lots of videos on the web (Resig on the design of jQuery, or Douglas Crockford in his Google talk about the good parts of JavaScript) talk about using namespaces in your libraries/frameworks.
So far so good, but in this case it looks a bit overkill. What is the recommended way to go? Should I:
Create a whole framework inside a namespace, and reference it globally in the application? Looks like a lot of work for something as tiny as this method.
Create a skeleton framework, and use local JavaScript in my views/partials, eventually promoting parts of the inline JavaScript to framework status, depending on the usage we have? In this case, how can I cleanly isolate the inline JavaScript from other views/partials?
Not worry, and rely on UI testing to catch the problem if it ever happens?
As a matter of fact, I think that even the JS code I've written that is in a separate file will benefit from your answers :)
As a matter of safety/best practice, you should always use the module pattern. If you also use event handlers rather than shoving JavaScript into the onclick attribute, you don't have to worry about naming conflicts and your JS is easier to read:
<script type="text/javascript">
    (function() {
        // your button selector may be different
        $("input[type='submit'].button").click(function(ev) {
            DisableInputsForSubmit();
            if ($('#EditParameters').validate().form()) {
                SetContentArea(GetHtmlDisplay('SaveParameters', 'Area', 'Controller'), $('#Parameters').serialize());
            }
            ev.preventDefault();
        });

        function DisableInputsForSubmit() {
            if ($('#IsDisabled').is(':checked')) {
                $('#Parameters :input').attr('disabled', true);
            } else {
                $('#Parameters :input').removeAttr('disabled');
            }
        }
    })();
</script>
This is trivially easy to extract into an external file if you decide to.
Edit in response to comment:
To make a function re-usable, I would just use a namespace, yes. Something like this:
(function() {
    // attach to window so the namespace is created even if it doesn't exist yet
    window.MyNS = window.MyNS || {};

    MyNS.DisableInputsForSubmit = function() {
        //yada yada
    };
})();
I'm in the middle of building a web app with heavy use of jQuery plugins and lots of bindings.
The backend was developed with a template system which (as of now) only allows placing all scripts in that one HTML file. We will use the YUI Compressor to merge all of these into one.
Now, for bindings, how bad is it to have bindings in an HTML file (which now is a template for the whole site) for elements that may not be present on a particular page?
Any advice is greatly appreciated
I've been using Paul Irish's markup-based solution pretty extensively on larger sites.
One of the biggest problems with doing this is performance - the selector will be evaluated and the DOM searched for each binding, even those not intended for the current page. At the very least, perhaps set up an object literal to run the appropriate ready-binding code based on a page identifier, which could be window.location.href or a substring of it. Something like:
// avoid global pollution!
(function() {
    var pages = {
        pageX : {
            ready: function() { /* code to run on ready */ },
            teardown: function() { /* code to run on teardown */ }
        },
        pageY : {
            ready: function() { /* code to run on ready */ },
            teardown: function() { /* code to run on teardown */ }
        }
    };

    // set up ready event handler
    $(ready);

    // handler function to execute when ready event raised
    // Note: Access to pages through closure
    function ready() {
        var location = window.location.href;
        if (pages[location]) {
            pages[location].ready();
        }
    }
})();
Be careful with your selectors if you've got some large pages. For example, if you've got some pages with big but inert (no bindings) tables, and other pages where tables are small but have controls in them, you probably don't want to do this:
$('td.bindMe').bind('whatever', function() { ... });
(Set aside the live() issue here; sometimes you need to do element-by-element work, and that's what I'm talking about.) The problem is that Sizzle will potentially have to look through all the td elements on the page. Instead, you can put some sort of "marker" container around things like the "active" table with controls, and work it that way:
$('table#withControls').find('td.bindMe').bind(/* ... */);
That way Sizzle only needs to figure out that there's no table called "withControls", and then it's done.
The biggest problem with using all bindings on all pages is that you can get bindings you did not intend to have, causing trouble...
And of course you will have some performance cost at page load, but whether that is a problem depends on how many bindings you have and what the code looks like.
You might lose some performance on the client side (parsing the file, executing the document-ready handler), but it improves caching on the client (i.e. the file doesn't need to be transferred more than once). That saves server lookups as well. I think this is more an advantage than a disadvantage, as long as you can ensure you're not accidentally modifying objects.
I think the selector engine is fast enough that you, or anyone else, shouldn't notice a difference.
Obviously this is not a "best practice," but if you're binding to IDs and class names and you won't have any conflicts or unintended bindings, then I don't see the harm.