RequireJS define Browser Race Condition - javascript

We have a client-side plug-in framework that is composed of AMD modules and uses require.js. In this framework we expose a public object that consists of configuration properties and common framework functionality. All of the required functionality for the public object is contained in one file (albeit separated into modules), which is the only file the end user needs to add to their page.
The issue we are seeing is most prevalent in Safari but also shows itself occasionally in IE and Chrome. In Safari with an empty cache we hit a race condition 100% of the time. Consider this example client code, which sits in the body of the client's page:
<script type="text/javascript">
    Me.subscribe('someEvent', someHandler);
</script>
'Me' is always available to the page because it is global and defined outside of any define call. However, 'Me.subscribe' is wrapped in 'define' and comes back as 'undefined' under the conditions stated above.
We can’t tell the client to use any third-party frameworks to work around this issue. The code block above must stay exactly as it is.
I’ve been playing with the idea of allowing certain public function binding to be deferred without any additional work required by the client. So far, this is what I’m considering adding to the framework:
Me.deferred = function (fn, name) {
    // If the real implementation already exists, just use it.
    if (fn) return fn;
    fn = this;
    // Otherwise return a stub that replays the call on the next tick,
    // by which time the real Me[name] should exist.
    return function () {
        var args = Array.prototype.slice.call(arguments);
        setTimeout(function () {
            fn[name].apply(fn, args);
        }, 0);
    };
};
Then, in the framework near the top, I can add items I want deferred like this:
Me.subscribe = Me.deferred(Me.subscribe,'subscribe');
My questions are these: Am I missing something that is already out there? Is there an existing pattern that I am not aware of to handle this exact case? Is this just a bad idea in general?

If possible, make sure the client puts RequireJS and all dependencies in the head. If that is not possible, 'Me' can include an on-demand call which executes on creation.
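A related pattern for this exact case is the command-queue stub used by many analytics snippets: the small synchronous part of 'Me' records calls it cannot service yet, and the framework flushes the queue once its modules have been defined. A minimal sketch, assuming a hypothetical _queue property and flush step that are not part of the framework above:

// Synchronous bootstrap, available before the AMD modules load.
window.Me = window.Me || {};
Me._queue = Me._queue || [];
Me.subscribe = Me.subscribe || function () {
    // Record the call; it is replayed once the real module is ready.
    Me._queue.push(['subscribe', Array.prototype.slice.call(arguments)]);
};

// Inside the framework, right after the real subscribe has been defined:
function flushQueue() {
    var call;
    while ((call = Me._queue.shift())) {
        Me[call[0]].apply(Me, call[1]);
    }
}

This keeps the client snippet exactly as it is, since Me.subscribe is callable from the first script tag onward.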

Related

Javascript code organization data driven application

I'm currently working on the front-end of a medium/large-scale data-driven Asp.net MVC application and I have some doubts about the right code-organization/design pattern to follow.
The web application is made by multiple pages containing many Kendo UI MVC widgets defined with Razor template.
For those who are unfamiliar with Kendo, the Razor syntax is translated to JavaScript as in the snippet shown further down (see the HTML/JAVASCRIPT section).
Inside my Script folder I defined two main folders, and I structured my js files as follows:
shared //Contains the shared js files
    -file1.js
    -file2.js
pages //One file per page
    page1.js
    page2.js
    ...
    Ticket.js // page 4 :)
Each js file is a separate module defined with the following pattern:
Note: inside the init function, every callback for window events is registered, along with the occasional $(document).ready(function(){}) block.
;(function () {
    "use strict";

    function Ticket(settings) {
        this.currentPageUrls = settings.currentPageUrls;
        this.currentPageMessages = settings.currentPageMessages;
        this.currentPageEnums = settings.currentPageEnums;
        this.currentPageParameters = settings.currentPageParameters;
        this.gridManager = new window.gridManager(); // usage of shared modules
        this.init();
    }

    Ticket.prototype.init = function () {
        $("form").on("submit", function () {
            $(".window-content-sandbox").addClass("k-loading");
        });
        ...
    };

    Ticket.prototype.onRequestStart = function (e) {
        ...
    };

    // private functions definition ("private" itself is reserved in strict mode)
    function privateFn(a, b, c) {
    }

    window.Ticket = Ticket;
}());
Once I need my Javascript functions defined in a module I include the associated Javascript file in the page.
An instance of my object is stored inside a variable and, on top of that, a function is bound to the widget event (see: onRequestStart).
HTML/JAVASCRIPT
@(Html.Kendo().DropDownList()
    .Name("Users")
    .DataValueField("Id")
    .DataTextField("Username")
    .DataSource(d => d.Read(r => r.Action("UsersAsJson", "User"))
        .Events(e => e.RequestStart("onRequestStart"))))
var settings = {};
var ticket = new window.Ticket(settings);

function onRequestStart(e) {
    ticket.onRequestStart(e);
}
I feel like my design pattern might be unfriendly to other front-end developers like me, mostly because I chose not to implement the JavaScript modules as jQuery plugins.
First, Am I doing everything the wrong way?
Second, is my design pattern suitable for a Javascript test-framework?
Third, which are the must-have scenarios for Jquery plugins?
Update
Added the Javascript output by the above Razor syntax.
Folder structure
In terms of functionality (shared) and modules (modular approach), the development or application code should reflect what you can encounter in the HTML. A simple ctrl+F over your solution should then surface every place a change is needed. From that experience over the years I personally prefer dividing it into:
app (application code)
classes (reusable)
modules (singleton)
lib (package manager/grunt/gulp/...)
jquery (proper library names/unminified dist file or root file)
kendo
File names
Representing what something does, and being able to reuse it in the blink of an eye, is what will cut your development time. Choosing proper names has value, as I'm sure you are aware. My file names always start with the namespace, usually abbreviated, followed by a reusable "search" term:
app/prototypes
ns.calendar.js (multiple configs)
ns.maps.js (combinations or single uses)
ns.places.js (forms or map add-ons)
ns.validation.js (multiple forms and general handling)
app/singletons
ns.cookiebox.js (single config)
ns.socialmedia.js (single config)
ns.dom.js (provides a place for dom corrections, global resize events, small widgets, ...)
To add: what you called shared is functionality that's meant to be global. A great example would be to use the underscore library. Or create your own collection of functions (device detection, throttle, helpers in general) to reuse throughout projects => ns.fn.js
Since you add them only once throughout your namespace, it's also built as singleton and can be added to the modules folder or directly in the app root.
As a last addition, a loader file kickstarts your point of control => ns.load.js in the app root. This file holds the single DOM ready event used to bind prototypes and modules.
So you might want to rethink your idea of dividing into pages. Trust me, I've been there. At some point you'll notice the functionality grows too large to configure every page separately, and therefore repeatedly.
File structure
To be honest I like Tip 1 of @TxRegex's answer the most, with a small addition to bind the namespace and pass it from file to file as it gets loaded.
Core principle: IIFE bound to window object
window.NameSpace = (function ($, ns) {
    'use strict';

    function privateFn() {}
    var x;

    ns.SearchTerm = {};

    return ns;
}(window.jQuery, window.NameSpace || {}));
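To show how that namespace travels from file to file, here is a minimal sketch under the same conventions; the ns.Widget constructor and the contents of ns.load.js are invented for illustration, not taken from the answer above:

// ns.widget.js - another file extending the same namespace
window.NameSpace = (function ($, ns) {
    'use strict';

    // A reusable constructor exported on the namespace.
    ns.Widget = function (element) {
        this.$el = $(element);
    };

    return ns;
}(window.jQuery, window.NameSpace || {}));

// ns.load.js - the single DOM ready event that kickstarts everything
(function ($, ns) {
    'use strict';

    $(function () {
        $('[data-widget]').each(function () {
            new ns.Widget(this);
        });
    });
}(window.jQuery, window.NameSpace || {}));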
For more example code I'd like to point out my github account.
Bundling
Try to achieve a single bundled and minified file from lib to app, loaded in the head with async for production releases. Use separate, unminified script files with defer for development and debug purposes. If you do this, you must avoid inline scripts with global dependencies throughout the whole project.
path to js/lib/**/*.js (usually separated to keep sequential order)
path to js/app/ns.load.js
path to js/app/ns.fn.js
path to js/app/**/*.js (auto update the bundle)
Output => ns.bundle.js
=> ns.bundle.min.js
This way you'll avoid render blocking issues in JavaScript and speed up the loading process which in turn boosts SEO. Also enables you to combine functionality for mobile layouts and desktop layouts on the fly without memory issues or jerky behavior. Minifies really well and generates little overhead in calling instances from the loader file. As a single bundle will be cached throughout your pages it all depends on how many dependencies or libraries you can cut from the bundle. Ideally for medium and large projects where code can be shared and plugged in to different projects.
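As a concrete illustration of that bundle order, a gulp task along these lines would produce ns.bundle.js and ns.bundle.min.js; the paths and plugin choices here are assumptions, not part of the original setup:

// gulpfile.js - sketch of the bundling step described above
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');

gulp.task('bundle', function () {
    return gulp.src([
            'js/lib/**/*.js',   // libraries first, kept in sequential order
            'js/app/ns.load.js',
            'js/app/ns.fn.js',
            'js/app/**/*.js'    // remaining application code
        ])
        .pipe(concat('ns.bundle.js'))
        .pipe(gulp.dest('js/dist'))
        .pipe(uglify())
        .pipe(rename('ns.bundle.min.js'))
        .pipe(gulp.dest('js/dist'));
});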
More info on this in another post.
Conclusion
First, Am I doing everything the wrong way?
Not at all, your modular approach seems ok...
It's missing a single global namespace; globals are hard to avoid entirely, so have at least one. You create one for each module, but it seems better to group them all under one namespace so you can differentiate library code from application code in the window object.
Kendo seems to create inline scripts? Can't you counter the placement server side?
Second, is my design pattern suitable for a Javascript test-framework?
Except for the Kendo instances, you can add a layer for testing purposes. Remember that if inline script depends on jQuery, you'll have to render-block its loading; otherwise => jQuery is undefined.
Exclude Kendo dependencies from the bundle if you can't control the inline script. Move to a </body> bundled solution.
Third, which are the must-have scenarios for Jquery plugins?
modular approach
configurable approach for multiple instances (tip: moving all strings from your logic, see how Kendo uses object literals)
package manager to separate the "junk" from the "gold"
grunt/gulp/... setup to separate scss and css from js
try to achieve data-attribute binding, so once all is written, you configure new instances through HTML (see the sketch after this list).
Write once, adapt easily where necessary and configure plenty!
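A minimal sketch of what such data-attribute binding could look like; the attribute names and the lookup on window are invented for illustration:

// HTML: <div data-widget="Ticket" data-widget-options='{"showLoader":true}'></div>
$(function () {
    $('[data-widget]').each(function () {
        var name = $(this).data('widget');
        var options = $(this).data('widgetOptions') || {};
        // Look the constructor up on the window (or your namespace) and instantiate it.
        if (typeof window[name] === 'function') {
            new window[name]($.extend({ element: this }, options));
        }
    });
});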
The organization and pattern seems fine, but I have some tips:
Tip 1:
Instead of setting specific global variables within your module, perhaps you could return the object instead. So instead of doing this:
;(function () {
    "use strict";

    function Ticket(settings) {
        console.log("ticket created", settings);
    }
    ...
    window.Ticket = Ticket;
}());
You would do this:
;window.Ticket = (function () {
    "use strict";

    function Ticket(settings) {
        console.log("ticket created", settings);
    }
    ...
    return Ticket;
}());
The reason for this is to be able to take your module code and give it a different global variable name if needed. If there is a name conflict, you can rename it to MyTicket or whatever without actually changing the module's internal code.
Tip 2:
Forget Tip 1, global variables stink. Instead of creating a separate global variable for each object type, why not create an object manager and use a single global variable to manage all your objects:
window.myCompany = (function () {
    function ObjectManager(modules) {
        this.modules = modules || {};
    }

    // Create an instance of a registered object type.
    ObjectManager.prototype.getInstance = function (type, settings) {
        if (!type || !this.modules.hasOwnProperty(type)) {
            throw "Unrecognized object type: " + type;
        }
        return new this.modules[type](settings);
    };

    // Register a constructor under a type name.
    ObjectManager.prototype.addObjectType = function (type, object) {
        if (!type) {
            throw "Type is required";
        }
        if (!object) {
            throw "Object is required";
        }
        this.modules[type] = object;
    };

    return new ObjectManager();
}());
Now each of your modules can be managed with this single global object that has your company name attached to it.
;(function () {
    "use strict";

    function Ticket(settings) {
        console.log("ticket created", settings);
    }
    ...
    window.myCompany.addObjectType("Ticket", Ticket);
}());
Now you can easily get an instance for every single object type like this:
var settings = {test: true};
var ticket = window.myCompany.getInstance("Ticket", settings);
And you only have one global variable to worry about.
You can try separating your files into different components, assuming each component has a folder.
For example: page 1 is about rectangles, so you make a folder called rectangle; inside that folder you create three files: rectangle.component.html, rectangle.component.css and rectangle.component.js (optionally rectangle.spec.js for testing).
app
└───rectangle
rectangle.component.css
rectangle.component.html
rectangle.component.js
So if anything bad happens to a rectangle, you know where the problem is.
A good way to isolate variables and execute code in the right place is to use a router: it checks the URL and executes only the portion of code you assign to that page (see the sketch below).
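A minimal sketch of such a client-side router; the route table, paths and initializer functions are made up for illustration:

// Map a path to the code that should run on that page.
var routes = {
    '/tickets': function () { new window.Ticket({}); },
    '/users':   function () { /* page-specific setup */ }
};

function route() {
    var path = window.location.pathname;
    if (typeof routes[path] === 'function') {
        routes[path]();
    }
}

document.addEventListener('DOMContentLoaded', route);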
Hope it helps; let me know if you need more help.

Starting with RequireJS, communication between modules

I am an ActionScript 3 developer who is just making his first way in building a large-scale JavaScript app.
So I understand modules and understand that AMD is a good pattern to use. I read about RequireJS and implemented it. However, what I still don't understand is how to achieve Cross-Module communication. I understand that there should be some kind of mediator...
I read articles and posts and still couldn't understand how to implement it simply.
Here is my code, simplified:
main.js
require(["Player", "AssetsManager"], function (player, manager) {
player.loadXML();
});
Player.js
define(function () {
    function parseXml(xml) {
        // NOW HERE IS THE PROBLEM -- how do I call AssetsManager from here???
        AssetsManager.queueDownload($(xml).find("prop").text());
    }

    return {
        loadXML: function () {
            // FUNCTION TO LOAD THE XML HERE, WHEN LOADED CALL parseXml(xml)
        }
    };
});
AssetsManager.js
define(function () {
    var arrDownloadQueue = [];

    return {
        queueDownload: function (path) {
            arrDownloadQueue.push(path);
        }
    };
});
Any "for dummies" help will be appreciated :)
Thank you.
To load modules from other modules that you define(), you simply set the first parameter to an array containing the names of the modules you depend on. So, in your code, if you wanted to load Player.js into AssetsManager.js, you would simply include the string Player in that array.
This is possible because define's implementation is essentially equivalent to require, except that the callback passed to define is expected to return a value, and the result is added as a "module" to the list of dependencies that others can load.
AssetsManager.js
define(['Player'], function (player) {
    //... Your code.
});
However, if I can add to it, I personally prefer the use of require inside of the callback passed to define to grab the dependency that you want to load, instead of passing parameter to the callback.
So here's my suggestion:
define(['Player'], function () {
    var player = require('Player');
});
And this is because it's much more in tune with CommonJS.
And this is how main.js would look like formatted to be more CommonJS-friendly:
require(["Player", "AssetsManager"], function () {
var player = require('Player');
var manager = require('AssetsManager');
player.loadXML();
});
But the CommonJS way of doing things is just a personal preference. My rationale is that the order in which you list the dependency names in the array might change at any time, and I wouldn't want to have to step through both the array and the parameter list.
Another rationale of mine (though, it's just pedantic), is that I come from the world of Node.js, where modules are loaded via require().
But it's up to you.
(This would be a reply to skizeey's answer, but I don't have enough reputation for that)
Another way of solving this problem, without pulling in Player's AssetsManager dependency via require, is to pass around the AssetsManager instance that main.js already has. One way of accomplishing this might be to make Player's loadXML function accept an AssetsManager parameter, which then gets passed to parseXml, which uses it (sketched below). Another way might be for Player to hold a variable for an AssetsManager that parseXml reads; it could be set directly, or through a function that stores it, say setAssetManager. This latter way has an extra consideration, though: you then need to handle the case where that variable has not been set before loadXML is called. This concept is generally called "dependency injection".
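A minimal sketch of the first variant (passing the manager in through loadXML); the XML-loading call itself is elided just as in the question:

// Player.js
define(function () {
    function parseXml(xml, assetsManager) {
        assetsManager.queueDownload($(xml).find("prop").text());
    }

    return {
        loadXML: function (assetsManager) {
            // load the XML here, then hand both the result and the
            // injected manager to parseXml:
            // parseXml(xml, assetsManager);
        }
    };
});

// main.js
require(["Player", "AssetsManager"], function (player, manager) {
    player.loadXML(manager);
});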
To be clear I'm not advising this over using AMD to load it in. I just wanted to provide you with more options; perhaps this technique may come in handier for you when solving another problem, or may help somebody else. :)

How to manage dependencies in JavaScript?

I have scripts that needs to wait for certain conditions to be met before they run - for example wait for another script to be loaded, or wait for a data object to be created.
How can I manage such dependencies? The only way I can think of is to use setTimeout to loop in short intervals and check the existence of functions or objects. Is there a better way?
And if setTimeout is the only choice, what is a reasonable time interval to poll my page? 50 ms, 100 ms?
[Edit] some of my scripts collect data, either from the page itself or from Web services, sometimes from a combination of multiple sources. The data can be ready anytime, either before or after the page has loaded. Other scripts render the data (for example to build charts).
[update] thanks for the useful answers. I agree that I shouldn't reinvent the wheel, but if I use a library, at least I'd like to understand the logic behind (is it just a fancy timeout?) to try and anticipate the performance impact on my page.
You could have a function call like loaded(xyz); at the end of the scripts that are being loaded. This function would be defined elsewhere and set up to call registered callbacks based on the value of xyz. xyz can be anything: a simple string to identify the script, or a complex object or function or whatever.
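A minimal sketch of that idea, assuming a plain string key; the registry and the loaded.when helper are invented here, not an existing API:

// Defined early in the page, before any dependent scripts.
var loaded = (function () {
    var callbacks = {};   // key -> callbacks waiting on it
    var done = {};        // keys that have already fired

    function loaded(key) {
        done[key] = true;
        (callbacks[key] || []).forEach(function (cb) { cb(); });
        callbacks[key] = [];
    }

    // Register interest in a key; fires immediately if already loaded.
    loaded.when = function (key, cb) {
        if (done[key]) { cb(); return; }
        (callbacks[key] = callbacks[key] || []).push(cb);
    };

    return loaded;
}());

// In a script that renders charts:
loaded.when('charts-data', function () { /* build the charts */ });
// At the end of the script that produces the data:
loaded('charts-data');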
Or just use jQuery.getScript(url [, success(data, textStatus)] ).
For scripts that have dependencies on each other, use a module system like RequireJS.
For loading data remotely, use a callback, e.g.
$.get("/some/data", "json").then(function (data) {
// now i've got my data; no polling needed.
});
Here's an example of these two in combination:
// renderer.js
define(function (require, exports, module) {
    exports.render = function (data, element) {
        // obviously more sophisticated in the real world.
        element.innerText = JSON.stringify(data);
    };
});

// main.js
define(function (require, exports, module) {
    var renderer = require("./renderer");

    $(function () {
        var elToRenderInto = document.getElementById("render-here");

        $("#fetch-and-render-button").on("click", function () {
            $.get("/some/data", "json").then(function (data) {
                renderer.render(data, elToRenderInto);
            });
        });
    });
});
There are many frameworks for this kind of thing.
I'm using Backbone at the moment http://documentcloud.github.com/backbone/
Friends have also recommended knockout.js http://knockoutjs.com/
Both of these use an MVC pattern to update views once data has been loaded by a model
[update] I think at their most basic level these libraries are using callback functions and event listeners to update the various parts of the page.
e.g.
model1.loadData = function () {
    var self = this;
    $.get('http://example.com/model1', function (response) {
        // 'this' inside the ajax callback is not the model, so keep a reference.
        self.save(response);
        self.emit('change');
    });
};

model1.bind('change', view1.update);
model1.bind('change', view2.update);
I've used pxLoader, a JavaScript Preloader, which works pretty well. It uses 100ms polling by default.
I wouldn't bother reinventing the wheel here unless you need something really custom, so give that (or any JavaScript preloader library really) a look.

How to use javascript namespaces correctly in a View / PartialView

I've been playing with MVC for a while now, but since the project I'm on is starting to get wind in its sails, more and more people are being added to it. Since I'm in charge of hacking around to find out some "best practices", I'm especially wary about possible misuses of JavaScript and would like to find out the best way to have our views and partial views play nicely with JavaScript.
For the moment, we're having code that looks like this (only simplified for example's sake)
<script type="text/javascript">
function DisableInputsForSubmit() {
if ($('#IsDisabled').is(':checked')) {
$('#Parameters :input').attr('disabled', true);
} else {
$('#Parameters :input').removeAttr('disabled');
}
}
</script>
<%=Html.SubmitButton("submit", Html.ResourceText("submit"), New With {.class = "button", .onclick = "DisableInputsForSubmit(); if ($('#EditParameters').validate().form()) {SetContentArea(GetHtmlDisplay('SaveParameters', 'Area', 'Controller'), $('#Parameters').serialize());} return false;"})%><%=Html.ResourceIcon("Save")%>
Here, we're saving a form and posting it to the server, but we disable inputs we don't want to validate if a checkbox is checked.
a bit of context
Please ignore the Html.Resource* bits, they're the resource management helpers
The SetContentArea method wraps ajax calls, and GetHtmlDisplay resolves a url given an area, controller and action
We've got Combres installed, which takes care of compressing, minifying and serving third-party libraries and what I've clearly identified as reusable javascript
My problem is that if somebody else defines a function DisableInputsForSubmit at another level (let's say the master page, or in another javascript file), problems may arise.
Lots of videos on the web (Resig on the design of jQuery, or Douglas Crockford for his talk at Google about the good parts of javascript) talk about using the namespaces in your libraries/frameworks.
So far so good, but in this case it looks a bit overkill. What is the recommended way to go? Should I:
Create a whole framework inside a namespace, and reference it globally in the application? Looks like a lot of work for something so tiny as this method
Create a skeleton framework, and use local javascript in my views/partials, eventually promoting parts of the inline javascript to framework status, depending on the usage we have? In this case, how can I cleanly isolate the inline javascript from other views/partials?
Don't worry and rely on UI testing to catch the problem if it ever happens?
As a matter of fact, I think that even the JS code I've written that lives in a separate file will benefit from your answers :)
As a matter of safety/best practice, you should always use the module pattern. If you also use event handlers rather than shoving javascript into the onclick attribute, you don't have to worry about naming conflicts and your js is easier to read:
<script type="text/javascript">
(function() {
// your button selector may be different
$("input[type='submit'].button").click(function(ev) {
DisableInputsForSubmit();
if ($('#EditParameters').validate().form()) {
SetContentArea(GetHtmlDisplay('SaveParameters', 'Area','Controller'), $('#Parameters').serialize());
}
ev.preventDefault();
});
function DisableInputsForSubmit() {
if ($('#IsDisabled').is(':checked')) {
$('#Parameters :input').attr('disabled', true);
} else {
$('#Parameters :input').removeAttr('disabled');
}
}
})();
</script>
This is trivially easy to extract into an external file if you decide to.
Edit in response to comment:
To make a function re-usable, I would just use a namespace, yes. Something like this:
(function () {
    // Qualify with window so an undeclared MyNS doesn't throw a ReferenceError.
    window.MyNS = window.MyNS || {};
    MyNS.DisableInputsForSubmit = function () {
        //yada yada
    };
})();

How to isolate different javascript libraries on the same page?

Suppose we need to embed a widget in a third-party page. This widget might use jQuery, for instance, so the widget carries a jQuery library with it.
Suppose the third-party page also uses jQuery, but a different version.
How do we prevent a clash between them when embedding the widget? jQuery.noConflict is not an option because that method has to be called for the first jQuery library loaded in the page, which means the third-party website would have to call it. The idea is that the third-party site should not have to amend or do anything aside from putting a tag with a src to the widget on the page in order to use it.
Also, this is not a problem with jQuery in particular; the Google Closure library (even compiled) could serve as another example.
What solutions exist to isolate different JavaScript libraries, aside from the obvious iframe?
Maybe loading the JavaScript as a string and then evaluating it in an anonymous function (by using Function('code to eval'), not eval('code to eval')) might do the trick?
Actually, I think jQuery.noConflict is precisely what you want to use. If I understand its implementation correctly, your code should look like this:
(function () {
    var my$;

    // your copy of the minified jQuery source

    my$ = jQuery.noConflict(true);

    // your widget code, which should use my$ instead of $
}());
The call to noConflict will restore the global jQuery and $ objects to their former values.
Function(...) does an eval inside your function; it isn't any better.
Why not use an iframe? They provide default sandboxing for third-party content.
And for friendly ones you can share text data between them and your page, using parent.postMessage for modern browsers or the window.name hack for older ones.
I built a library to solve this very problem. I am not sure if it will help you of course, because the code still has to be aware of the problem and use the library in the first place, so it will help only if you are able to change your code to use the library.
The library in question is called Packages JS and can be downloaded and used for free as it is Open Source under a Creative Commons license.
It basically works by packaging code inside functions. From those functions you export those objects you want to expose to other packages. In the consumer packages you import these objects into your local namespace. It doesn't matter if someone else or indeed even you yourself use the same name multiple times because you can resolve the ambiguity.
Here is an example:
(file example/greeting.js)
Package("example.greeting", function() {
// Create a function hello...
function hello() {
return "Hello world!";
};
// ...then export it for use by other packages
Export(hello);
// You need to supply a name for anonymous functions...
Export("goodbye", function() {
return "Goodbye cruel world!";
});
});
(file example/ambiguity.js)
Package("example.ambiguity", function() {
// functions hello and goodbye are also in example.greeting, making it ambiguous which
// one is intended when using the unqualified name.
function hello() {
return "Hello ambiguity!";
};
function goodbye() {
return "Goodbye ambiguity!";
};
// export for use by other packages
Export(hello);
Export(goodbye);
});
(file example/ambiguitytest.js)
Package("example.ambiguitytest", ["example.ambiguity", "example.greeting"], function(hello, log) {
// Which hello did we get? The one from example.ambiguity or from example.greeting?
log().info(hello());
// We will get the first one found, so the one from example.ambiguity in this case.
// Use fully qualified names to resolve any ambiguities.
var goodbye1 = Import("example.greeting.goodbye");
var goodbye2 = Import("example.ambiguity.goodbye");
log().info(goodbye1());
log().info(goodbye2());
});
example/ambiguitytest.js uses two libraries that both export a function goodbye, but it can explicitly import the correct ones and assign them to local aliases to disambiguate between them.
To use jQuery in this way would mean 'packaging' jQuery by wrapping its code in a call to Package and Exporting the objects that it currently exposes to the global scope. It means changing the library a bit, which may not be what you want, but alas there is no way around that, as far as I can see, without resorting to iframes.
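For illustration, such a wrapper might look roughly like this, reusing only the Package/Export calls shown above; the package name is an assumption, and the unmodified jQuery source would be pasted inside the wrapper:

Package("lib.jquery", function() {
    // ...the unmodified jQuery source goes here...

    // Release the globals it created and export the captured copy under this package.
    Export("jQuery", jQuery.noConflict(true));
});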
I am planning on including 'packaged' versions of popular libraries along in the download and jQuery is definitely on the list, but at the moment I only have a packaged version of Sizzle, jQuery's selector engine.
Instead of looking for methods like noConflict, you could also just reference jQuery via its full Google APIs CDN URL so that it can work in the application.
<script src="myjquery.min.js"></script>
<script>window.myjQuery = window.jQuery.noConflict();</script>
...
<script src='...'></script> //another widget using an old versioned jquery
<script>
(function($){
//...
//now you can access your own jquery here, without conflict
})(window.myjQuery);
delete window.myjQuery;
</script>
Most important points:
Call the jQuery.noConflict() method IMMEDIATELY AFTER your own jQuery and related plugin tags.
Store the resulting jQuery in a global variable with a name that has little chance to conflict or confuse.
Load your widget that uses the old versioned jQuery.
Then comes your own logic code, using a closure to obtain a private $ for convenience. The private $ will not conflict with other jQuery copies.
Don't forget to delete the global temp variable afterwards.
