My question: How would one best go about breaking a large, monolithic JavaScript object literal into multiple, discrete files?
I have a single javascript file that consists of an object literal with many methods attached to it. It's getting quite long and I want to break it into smaller parts that can more easily be managed.
I've heard I can use AMD or CommonJS to organize things, I've heard I should use RequireJS, that I should use Webpack or Browserify, that I should use any number of other tools/techniques. After looking at these things I am confused as to what the best approach is.
How would you do it? How would you take a single object literal consisting of a few thousand lines of JavaScript (made up of functions like "search" and "login" and "user") and reorganize it into multiple files that are more easily dealt with by a group of developers? The single, giant file is just getting too unwieldy, and the options seem too varied and unclear. This is a fairly simple app that uses vanilla JS, a little jQuery, and sits on top of a Grails backend.
I think the question is pretty clear but if you really need code to look at here is an example of the sort of object literal I am talking about:
var myObj = {
foo: "one",
bar: "two",
baz: false,
deez: -1,
login: function() {
// lots and lots of code
},
user: function() {
// lots and lots of code
},
beers: function() {
// lots and lots of code
},
varieties: function() {
// lots and lots of code
},
init: function() {
myObj.login.init();
myObj.user.init();
// lots of jQuery document.ready stuff
}
}
myObj.init();
You will get a lot of suggestions and approaches to solve your problem, and I can't say any of them are wrong; they are just different.
My approach would be to use ES6 and its native module support.
To accomplish this I always use my own boilerplate, named fabric, which uses Webpack to compile the modules, Browsersync to help during development, Tape for unit testing, SASS for CSS preprocessing, and Babel to compile a compatible ES5 bundle that you can easily use in your application.
Now, the way to use the ES6 modules is something like this with named exports:
//------ lib.js ------
export const sqrt = Math.sqrt;
export function square(x) {
return x * x;
}
export function diag(x, y) {
return sqrt(square(x) + square(y));
}
//------ main.js ------
import { square, diag } from 'lib';
console.log(square(11)); // 121
console.log(diag(4, 3)); // 5
Or using default exports:
//------ myFunc.js ------
export default function () { ... };
//------ main1.js ------
import myFunc from 'myFunc';
myFunc();
You can learn more about ES6 modules at 2ality.
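Applied to an object like the one in the question, the split might look roughly like this (file names and the exact grouping are just an illustration):
//------ login.js ------
// login-related code moved into its own module
export default function login() {
  // lots and lots of code
}
//------ user.js ------
export default function user() {
  // lots and lots of code
}
//------ myObj.js ------
// reassemble the original object from the individual modules
import login from './login';
import user from './user';

export default {
  foo: "one",
  bar: "two",
  login: login,
  user: user
};
//------ main.js ------
import myObj from './myObj';
myObj.login();
Webpack (or any other bundler) then takes main.js as the entry point and produces a single file for the browser.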
Here's the pattern I use:
When possible, break concepts into their own sub-object
Regardless of whether you use sub-objects, declare any non-broken-up properties first, then add to the object as needed
If the code is spread across multiple files and you do not wish to use sub-objects per file, use a temporary object to hold additional properties, and then extend the original.
Sample:
var myObj = {
foo: "one",
bar: "two",
baz: false,
deez: -1
}
myObj.login = function() {
// lots and lots of code
};
myObj.user = function() {
// lots and lots of code
};
myObj.drinks = {
beer: function() {},
wine: function() {},
sunnyDelight: {
drinkIt: function() {},
burp: function() {}
}
};
myObj.init = function() {
myObj.login.init();
myObj.user.init();
// lots of jQuery document.ready stuff
}
myObj.init();
Note that "drinks" is a concept unto itself, containing multiple properties and methods. Your concepts might be something like "ui", "utils", "data" or whatever the role of the contained properties happens to be.
For the extend point I made, there's not much code needed there either:
// "utilities.js"
var myObj = {
// a bunch of properties and/or methods
};
myObj.moreStuff = "more stuff!";
and then in another file you have two choices. Either add to the object without overwriting it (you will need the dot-notation to do this):
// "ui.js"
var myObj = myObj || {};
// adds the render object to the existing myObj
myObj.render = {
header: function() {},
dialogBox: function() {}
}
The above works particularly well if you sub-divide your concepts... because you can still have fairly monolithic objects that will not trample over the rest of myObj. But maybe you want to add directly to myObj without trampling and without subdividing concerns:
// "ui.js"
var myObj = myObj || {};
// ultimately, the CONTENTS of this object get merged into the existing myObj
var myObjSupplement = {
header: function() {},
dialogBox: function() {},
heroBiscuit: "A yummy biscuit made from heroes!"
}
// using jQuery here, but it's not the only way to extend an object
$.extend(myObj, myObjSupplement)
I don't see TOO many opportunities to use the above, since myObjSupplement is now in the global namespace and defeats the purpose of limiting additions to the global namespace, but it's there if you need it.
[edited to add: ]
It might not go "without saying" as I thought-- but dividing into many different files probably works best if you have a build process in place that can concatenate them into one file suitable for minifying. You don't want to have 100 or even 6 separate files each requiring a synchronous HTTP call to fetch.
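For example, a minimal concatenate-and-minify task; this sketch assumes Gulp with the gulp-concat and gulp-uglify plugins and made-up file paths, but any build tool that can concatenate and minify will do:
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  // core file first so myObj exists before the other files add to it
  return gulp.src(['src/js/myObj.core.js', 'src/js/myObj.login.js', 'src/js/myObj.user.js'])
    .pipe(concat('app.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dist/js'));
});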
There are more modern and possibly 'better' approaches with technologies like AMD/RequireJS... but if the question is, "how do I divide up an object literal into several files", the above answer I've given is one I can stand behind.
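For comparison, here is a rough sketch of how the same split could look with AMD/RequireJS (module names are purely illustrative):
// login.js
define(function () {
  return function login() {
    // lots and lots of code
  };
});
// myObj.js -- user.js would follow the same pattern as login.js
define(['login', 'user'], function (login, user) {
  return {
    foo: "one",
    bar: "two",
    login: login,
    user: user
  };
});
// main.js
require(['myObj'], function (myObj) {
  myObj.login();
});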
While I'm sure there are automated ways of doing this, and I am also interested in seeing the answers this question gets, I would recommend simply moving the method definitions into different files, calling the functions normally as method(param);, and linking the files to your HTML page.
This would serve multiple purposes, including the one you are looking to achieve of breaking your code down into more manageable modules. It also means that instead of having those definitions written to memory for every instance of the object, you would only define each one once and make references to it whenever you need it.
Sorry I can't be of more help without actually seeing the JavaScript File.
You can reference this stack overflow example if you need more guidance in achieving this.
You don't have to have all of the methods defined in your objects or classes; it's better to modularize these methods into different files and use <script src="path/to/your/script.js"></script> tags to include them all in your HTML/PHP page.
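A quick sketch of that approach (file names are just examples): each file attaches its piece to one shared object, and the script tags simply need to be included in dependency order, core file first.
// myObj.core.js (include this <script> first)
window.myObj = window.myObj || {};

// myObj.login.js
myObj.login = function () {
  // login code moved out of the big file
};

// myObj.user.js
myObj.user = function () {
  // user code moved out of the big file
};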
I need to compile my code with Closure Compiler in ADVANCED mode. I also need to keep the prototypes of my objects in my application, because I'm looping over JavaScript object prototypes. Trying to get both results in a ReferenceError when starting the application.
When compiling in ADVANCED mode, some prototype methods are removed and replaced by a function that takes an object parameter in order to recover the "this" keyword. This is due to the crossModuleCodeMotionNoStubMethods attribute of CompilerOptions.java.
Example of code before compilation:
function MyClass() { /* Some code */ }
MyClass.prototype.someFunc = function() { /* Some code calling someOtherFunc */ };
MyClass.prototype.someOtherFunc = function(someParam) { /* Some code */ };
Example of code after compilation:
function MyCompiledClass() { /* Some code */ }
MyCompiledClass.prototype.someCompiledFunc = function() { /* Some code calling someOtherFunc */ };
function someOtherCompiledFunc(that, someParam) { /* Some code */ }
I first tried to use the @this and @preserve JSDoc tags to solve the problem, without success. Using @export is not a solution, because functions will then keep their original names.
I've found two options to solve my problem for now :
Refactor the code as seen here
Build a custom version of Closure Compiler as seen here
Option 1 would require too many modifications to my code and would make it less readable; if it's the only solution, I will go with it.
Option 2 seems to be a nice workaround, but I've read that some changes to CompilationLevel.java may violate some core assumptions of the compiler. Can someone tell me whether changing setCrossModuleMethodMotion from true to false will still respect all core assumptions of the compiler?
I'm currently building a custom version of the compiler to check if the code is compiling properly, but even if the code is usable, I need to be sure it will be properly obfuscated.
Thank you!
The specific optimization pass you are referring to is DevirtualizePrototypeMethods. The best way to block the optimization would be to use the @nocollapse annotation. It will allow your method to be renamed but not allow it to be removed from the prototype.
I'm not 100% sure it will work for this case, but if it doesn't, it should, and you can file an issue to have that fixed: https://github.com/google/closure-compiler/issues
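If the annotation does apply here, its use would presumably look something like this (a sketch only, worth verifying against your compiler version):
/** @constructor */
function MyClass() {
  // Some code
}

/** @nocollapse */
MyClass.prototype.someOtherFunc = function (someParam) {
  // With the annotation the method should stay on the prototype,
  // though ADVANCED mode may still rename it.
};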
You can export constructors and prototype properties in the same way.
For example:
MyClass = function(name) {
this.myName = name;
};
MyClass.prototype.myMethod = function() {
alert(this.myName);
};
window['MyClass'] = MyClass; // <-- Constructor
MyClass.prototype['myMethod'] = MyClass.prototype.myMethod;
As in https://developers.google.com/closure/compiler/docs/api-tutorial3
I am the team lead of a group of ~8 developers. We look after a large website which is split into many 'components' (a component could be a gallery for example - with libraries aside, these 'components' are standalone). We are in the process of splitting things up and part of that task is creating Gulp code to handle the various stages of processing for each of these components (SCSS processing, concatenation, image optimisation etc).
So, I need to come up with a pattern that the team can follow when creating Gulp code for a new 'component'. I need to keep this as simple as possible as many of the developers are new to Gulp. The general idea I want to get to is that we have a base Gulp 'component', which will have all code required to process a standard 'component', but I expect there will be some 'components' that need special gulp code. So, I would like to be able to extend the base Gulp 'component' compilation code in these cases.
In an attempt to learn and to set up a strong foundation for the team, I have been doing some reading on the best approaches to inheritance in JavaScript. I have come across quite a rift in the way people feel about this. Here are the approaches I have considered and what I've gathered:
There are classes in ES6 which I can use in Node.js. These classes are shunned by a lot of the big names in the JavaScript world, as well as big names from the past (who used classical-style languages), for reasons such as encouraging brittle code. Also, you can't do classic-style public and private properties/functions, so I struggle to see any real reason why I should go with this.
If I did go this route, I feel I would end up with something like this (code is untested / probably not correct, I'm just dumping my thoughts):
class Component {
constructor(options) {
}
build() {
}
dev() {
}
test() {
}
// Should be private, but won't be
_processStyles() {
}
_processScripts() {
}
}
Factory functions. We're used to using these with the revealing module pattern, and generally I like them. I also believe that Douglas Crockford is a fan of factory functions, so I feel I'm on good ground with this. Now, if I create public and private methods (by returning an object with references only to my public functions) and then I want to extend 'component', in the new factory I would create an instance of 'component' and then extend that. The problem is that I can't override (or even call) the private functions of my 'component' instance, because they are in a different scope that I have no access to. I did read that one way to get around this is to use an object to create a reference to all of the private methods, but then they're not private anymore, so it defeats the object (no pun intended).
var component = function(options) {
var init = function() {
};
var build = function() {
};
var dev = function() {
};
var test = function() {
};
var _processStyles = function() {
};
var _processScripts = function() {
};
return {
init: init,
build: build,
dev: dev,
test: test
};
};
var specialComponent = function(options) {
// Create instance of component
var cmp = component(options);
// Extend it
cmp.extraFunction = function() {
// This will throw an error as this function is not in scope
_processStyles();
}
// Private functions are available if I need them
var _extraPrivateFunction = function() {
}
return cmp;
}
So, I feel like I've missed something somewhere, like I need someone to point me in the right direction. Am I getting too hung up about private functions (feels like it)? Are there better approaches? How would something like this, which seems to lend itself to classical inheritance be best tackled in a DRY (don't repeat yourself) manner?
Thanks in advance,
Alex.
So, I have the need for a singleton. It really is a rather large "do something" object: it processes information, etc. It could be extended, and some methods might even be inherited, but overall there doesn't need to exist more than one of it. So, I read a bit here, and I love the concept: http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html
I am thinking more in terms of leveraging the sub-module behavior.
I'd like to break my object into sub-modules, but I am not seeing the need to pass in the parent sub-module, as the "return" on that parent gives me access anyway. Perhaps I am missing the "robustness" or real usage here.
For example.
var a = {};
a.m = function(){
var conf = {
a: 'aaa',
b: 'bbb'
}
var funcs = {
func1: function(){
console.log('a.m sub object func1');
}
}
return { // doing this gives me access
conf: conf,
funcs: funcs
};
}()
// this sub module obj WILL need some behaviors/methods/vals in a.m
a.anothersub = (function(m){
var anotherSub = m;
anotherSub.funcs.func1(); // access to a.m methods; do I even need to pass it in?
a.m.funcs.func1(); // also access to a.m methods
return anotherSub;
}( a.m || {}));
// is a better approach to extend a.anothersub with a.m?
// jQuery.extend(a.anothersub, a.m);
If both "m" and "anothersub" are part of object 'a'. Is there a need for loose or tight augmentation here and for sake of keeping code compartmentalized and of same function behavior, I am creating these "sub objects".
I read that article and felt I could leverage its power. But not really sure this is the best approach here, or even needed. Your thoughts?
This all comes down to how tightly-coupled your modules/submodules actually are, and how much you can expect them to exist in all places around your application (ie: every page of a site, or at the global level of an application, et cetera).
It's also broaching a couple of different topics.
The first might be the separation of concerns, and another might be dependency-inversion, while another, tied to both, might be code organization/distribution.
Also, it depends on how cohesive two submodules might be...
If you had something like:
game.playerFactory = (function () {
return {
makePlayer : function () { /*...*/ }
};
}());
game.playerManager = (function (factory) { return {/*...*/}; }(game.playerFactory));
It might make sense to have the factory passed into the manager as an argument.
At that point, attaching both to game is really just a convenient place to make both accessible to the global scope.
Calling game from inside of one or the other, however, is problematic, in large systems, systems with lots of submodules, or systems where the interface is still in flux (when are they not?).
// input-manager.js
game.inputManager = (function () {
var jumpKey = game.playerManager.players.player1.inputConfig.jump;
}());
If all of your buttons are mapped out and bound to in that way, for every button for every player, then all of a sudden you've got 40 lines of code that are very tightly bound to:
The global name of game
The module name of playerManager
The module-interface for playerManager (playerManager.players.player1)
The module-interface for player (player.inputConfig.jump)
If any one of those things changes, then the whole submodule breaks.
The only one the input-manager should actually care about is the object that has the .inputConfig interface.
In this case, that's a player object... ...in another case, it might be completely decoupled or stuck on another interface.
You might be half-way through implementing one gigantic module, and realize that it should be six smaller ones.
If you've been passing in your objects, then you really only need to change what you're passing in:
game.inputManager = (function (hasInput) {
var jumpKey = hasInput.inputConfig.jump;
}(game.playerManager.players.player1));
Can easily become
game.inputManager = (function (hasInput) {
/*...*/
}(game.playerManager.getPlayer("BobTheFantastic").config));
and only one line of code changed, rather than every line referencing game.
The same can be said for the actual global-reference:
// my-awesome-game.js
(function (ns, root) {
root[ns] = { };
}( "MyAwesomeGame", window ));
// player-factory.js
(function (ns, root) {
root[ns] = {
make : function () { /*...*/ }
};
}("playerFactory", MyAwesomeGame));
// player-manager.js
(function (ns, root, factory) {
var manager = {
players : [],
addPlayer : function () { manager.players.push(factory.make()); }
};
root[ns] = manager;
}("playerManager", MyAwesomeGame, MyAwesomeGame.playerFactory));
Your code isn't impervious to change, but what you have done is minimize the amount of change that any one submodule needs to make, based on external changes.
This applies directly to augmentation, as well.
If you need to override some piece of some software, in a completely different file, 20,000 lines of code down the page, you don't want to have to suffer the same fate as changing interfaces elsewhere...
(function (override, character) {
override.jump = character.die;
}( MyAwesomeGame.playerManager.get(0), MyAwesomeGame.playerManager.get(1) ));
Now, every time player 1 tries to jump, player 2 dies.
Fantastic.
If the interface for the game changes in the future, only the actual external call has to change, for everything to keep working.
Even better.
I am working on a web project where, in the UI JSP pages, all the jQuery/JavaScript methods are called via this pattern:
A.b.c.d.methodName()
There are many .js files imported in the JSP page, so I have to search in the Eclipse IDE to track down which .js file a method lives in.
In the .js file, which has an entirely different name (not "A.b.c.d"), the method is declared as
methodName: function() {
// logic
}
Can anyone tell me what this style/pattern of using jQuery is?
JavaScript never looks for file names; the "namespacing" you see there is achieved by objects nested inside each other as properties.
For example if you create an object like:
var A = {
b: {
c: {
d: {
methodName: function () {
console.log('What a nice method!');
}
}
}
}
};
You can call it like this:
A.b.c.d.methodName();
Or you can add methods later in your code:
var irrelevantName = function () {
console.log('This method is even nicer');
};
A.b.c.method2 = irrelevantName;
And call it by:
A.b.c.method2();
There is a much-used extend method which has surfaced in a lot of JavaScript frameworks, like jQuery or MooTools. It provides a way of safely extending an object while preserving original values if present.
You can use the jQuery one like:
$.extend(A.b.c.d, {
method3: function () {
console.log('An other nice method');
}
});
And as you expect, it can be called as:
A.b.c.d.method3();
JavaScript libraries usually use namespacing: they create some kind of an object and populate it with all their methods. This way they don't pollute the global namespace with their methods.
There are a lot of ways to add new properties to an object in JS, so it is not always obvious how a method is added to an object, but it is safe to say that file names have nothing to do with it.
For further reading on the subject, I would recommend this google search. Basically any of the top 20 results should explain how namespaces are created and used in JavaScript.
On a footnote: I'm not sure how the Eclipse tooling supports JS, but as it is not a trivial problem (object structure can be modified on the fly), I would not be surprised if Eclipse had no understanding of JavaScript namespacing.
Looks like it has some sort of namespacing. The code could be using the AMD pattern? But again, if it's JSP it might be old...
Brainstorming needed. I have a problem with JavaScript libraries (jQuery, ExtJS, etc.) that don't seem to play well with the JavaScript IntelliSense built into Visual Studio 2008. They provide certain utility helper functions that IntelliSense fails to understand.
For example, ExtJS code:
// convenience function to create namespace object placeholders
Ext.namespace("Root.Sub.Subsub");
or jQuery
// doing the same thing in jQuery
$.extend(window, {
Root: {
Sub: {
Subsub: {}
}
}
});
or even (I pity thou that shalt maintain this code)
$.extend(window, { Root: {}});
$.extend(Root, { Sub: {}});
$.extend(Root.Sub, { Subsub: {}});
The end result of these calls is basically the same. None of them makes the Root namespace visible to JavaScript IntelliSense in Visual Studio 2008. If we knew how IntelliSense works under the hood, we could probably overcome this situation.
Is it possible to convince Intellisense to display/recognise these namespaces, without writing objects directly like:
Root = {
Sub: {
Subsub: {}
}
};
I admit that the first jQuery call is quite similar to this one, but it's better to use extend functionality to prevent removing/overwriting existing functionality/namespaces.
Question
How should we use these utility functions to make Intellisense work?
Any brainstorming answer that would shed some light on this is welcome.
Edit
I've found out that namespaces created with utility functions are shown if they are defined outside (ie. in a different script file) and you make a reference to that file like:
/// <reference path="different.script.file.js" />
In this case everything's fine. But if you call the utility functions within the same file, they're not listed in the IntelliSense drop-down list.
As far as jQuery goes: Take a look at this blog post. This post is a good read as well.
I've tried a bunch of stuff to make Visual Studio recognize JavaScript objects and namespaces--the only solution I've found that works reliably is what you've mentioned yourself:
var RootNamespace = {
SubNamespace: {
SubSubNamespace: {}
}
};
Update:
Developer 1 writes:
var RootNamespace = {
SubNamespace: {
SubSubNamespace: {}
}
};
Developer 2 extends:
RootNamespace.SubNamespace.AnotherSubNamespace = {
alertHelloWorld: function ()
{
alert("Hello World!");
}
};
Workaround
These utility methods actually work if you use them in a different script file and reference that file in the one where you would like to use those namespaces.
File1.js (assumes we have a custom jQuery extension $.ns() that registers new namespaces; a sketch of such a helper is shown further below)
$.ns("Project.Controls", "Project.Pages", "Project.General.Utilities");
...
File2.js
/// <reference path="File1.js" />
// use custom namespaces
Project.Controls.InfoWindow = function(){
...
};
In File2.js we would have complete IntelliSense support for custom namespaces.
Drawback
We have to create namespaces elsewhere because I can't seem to make it work within the same script file.
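For reference, $.ns() is not part of jQuery; a hypothetical implementation of such a helper might look like this:
// Registers dot-separated namespaces on window,
// e.g. $.ns("Project.Controls", "Project.Pages");
jQuery.ns = function () {
  for (var i = 0; i < arguments.length; i++) {
    var parts = arguments[i].split(".");
    var current = window;
    for (var j = 0; j < parts.length; j++) {
      current[parts[j]] = current[parts[j]] || {};
      current = current[parts[j]];
    }
  }
};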
VS2008 loses IntelliSense even if you declare the object as standard JS and then try to extend it:
var opt = {
SomeProperty: 1,
SomeFunction: function(name,age) {}
};
opt = jQuery.extend(true, module.options, jQuery.extend(true, {}, opt, module.options));
opt.SomeFunction("John", 20); // IntelliSense no longer works here
In order to get around this we need to move the extending operation on a function:
var opt = {
SomeProperty: 1,
SomeFunction: function(name,age) {}
};
function extendOptions() {
opt = jQuery.extend(true, module.options, jQuery.extend(true, {}, opt, module.options));
}
extendOptions();
opt.SomeFunction("John", 20); // now IntelliSense works as expected