Is it a good idea to use conditional dependencies in AMD modules? - javascript

I'm thinking of using conditions for specifying module dependencies in the AMD module system. For example, to load libraryA in the browser and libraryB on the server.
This could look like this:
define([window ? "libraryA" : "libraryB"], function (library) {
    // do some stuff
});
This would allow me to build an abstraction layer for 2 modules. But is this really a good idea? Are there any drawbacks in doing this?

That approach could cause problems for the build tool: the r.js optimizer parses dependency arrays statically, so a computed dependency like window ? "libraryA" : "libraryB" cannot be resolved at build time.
Update:
After further research, I find that config settings in your main JS file are not read by default by the optimizer. So, a cleaner solution would be to use a different map config for client and server.
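A hedged sketch of what that map config could look like (the module ids here are illustrative): the same neutral id, "library", resolves to a different module depending on which config file the environment loads.

```javascript
// client-config.js - loaded only in the browser
requirejs.config({
    map: {
        "*": { "library": "libraryA" }
    }
});

// server-config.js - loaded only on the server
requirejs.config({
    map: {
        "*": { "library": "libraryB" }
    }
});

// modules then depend on the neutral id:
// define(["library"], function (library) { /* ... */ });
```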
Original:
A safer approach may be to define a module that adapts itself to the environment, thus keeping all the conditional code within your module definitions, and leaving all dependency lists in the most reliable format.
// dependent module
define(["libraryAB"], function (library) {
    // do some stuff
});
// libraryAB.js dependency module
define([], function () {
    // "window" only exists in the browser; a bare reference would throw
    // a ReferenceError on the server, so test it with typeof
    return typeof window !== "undefined" ?
        defineLibraryA() :
        defineLibraryB();
});
You could alternatively keep the libraryA and libraryB code separate by defining libraryAB this way.
// libraryAB.js dependency module
define(["libraryA", "libraryB"], function (libraryA, libraryB) {
    return typeof window !== "undefined" ? libraryA : libraryB;
});
//define libraryA.js and libraryB.js as usual
If you want to avoid executing libraryA on the server or libraryB on the client, you could have these modules return functions and memoize the result if necessary.
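The lazy-initialization idea can be sketched in plain JavaScript; the factory names below are illustrative stand-ins for the real library definitions, not part of the original answer.

```javascript
// A generic memoize helper: the chosen factory runs once, on first use,
// so the "wrong" environment's library code is never executed.
function memoize(factory) {
    var cached;
    var called = false;
    return function () {
        if (!called) {
            cached = factory();
            called = true;
        }
        return cached;
    };
}

// Hypothetical factories for each environment:
function makeLibraryA() { return { env: "browser" }; }
function makeLibraryB() { return { env: "server" }; }

// Inside libraryAB's define() you would return something like:
var getLibrary = memoize(
    typeof window !== "undefined" ? makeLibraryA : makeLibraryB
);
```

Callers then invoke getLibrary() instead of touching libraryA or libraryB directly, and only the library for the current environment ever runs.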
The moral is that it's safest to keep all your non-standard code inside module definitions, keeping dependency lists nice and predictable.

Related

NodeJS - Dynamically import built in modules

I'd like to get a built-in module (for example Math or path or fs), whether from the global object or require. I thought about doing something like this:
function getModuleByName(name) {
    return global[name] || require(name);
}
Is there a way to check that it is indeed a module and not something else? Would this make a security problem?
Is there a way to check that it is indeed a module and not something else?
There are other methods, but here's an example:
function getModuleByName(name) {
    let module = null;
    try {
        module = require(name);
    } catch (e) {
        // recommend logging e somewhere
    }
    return module;
}
This will gracefully fail as null where the module does not exist, or return the module.
Would this make a security problem?
Quite possibly; it depends on how it's used. I'd argue it is more of a general design issue, however, and without knowing any context I would blanket-say avoid doing it (though you may have a very good reason).
You, like anyone, obviously have a finite number of modules you could be loading. These modules have all been selected by you for your application for specific reasons, or are bundled into your Node version natively and are expected parts of your environment.
What you are doing by introducing this functionality is adding unexpected elements to your environment. If you are using getModuleByName to access a third-party library, you should know outright that the library is available, and as such there's no reason why you can't just require it directly.
--
If you do think your use case warrants this, please let me know what it is as I may never have encountered it before. I have used dynamic imports like the following:
https://javascript.info/modules-dynamic-imports
But that hasn't been for global packages/libraries; it was for dynamic reference to modules built internally to the application (i.e. routing to different views, invocation of internal scripts).
These I have protected by whitelisting the target directories so file paths can't be altered, making sure each script follows a strict interface per use case, and gracefully failing where a module doesn't exist (an error output of "this script does not exist" for the script usage, and a 404 view for the routing example).
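As a rough sketch of that whitelisting idea applied to the original getModuleByName (the allowed names here are arbitrary examples, not a recommendation):

```javascript
// Only names we have explicitly decided to expose may be required.
const ALLOWED = new Set(["path", "url", "querystring"]);

function getModuleByName(name) {
    if (!ALLOWED.has(name)) {
        return null; // not an expected part of our environment
    }
    try {
        return require(name);
    } catch (e) {
        // recommend logging e somewhere
        return null;
    }
}
```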

MongoDB map-reduce (via nodejs): How to include complex modules (with dependencies) in scopeObj?

I'm working on a complicated map-reduce process for a mongodb database. I've split some of the more complex code off into modules, which I then make available to my map/reduce/finalize functions by including it in my scopeObj like so:
const scopeObj = {
    userCalculations: require('../lib/userCalculations')
};

function myMapFn() {
    let userScore = userCalculations.overallScoreForUser(this);
    emit({
        'Key': this.userGroup
    }, {
        'UserCount': 1,
        'Score': userScore
    });
}

function myReduceFn(key, objArr) { /*...*/ }

db.collection('userdocs').mapReduce(
    myMapFn,
    myReduceFn,
    {
        scope: scopeObj,
        query: {},
        out: {
            merge: 'userstats'
        }
    },
    function (err, stats) {
        return cb(err, stats);
    }
);
...This all works fine. I had until recently thought it wasn't possible to include module code into a map-reduce scopeObj, but it turns out that was just because the modules I was trying to include all had dependencies on other modules. Completely standalone modules appear to work just fine.
Which brings me (finally) to my question. How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code? One thought I had was using Browserify or something similar to pull all my dependencies into a single file, then include it somehow... but I'm not sure what the right way to do that would be. And I'm also not sure of the extent to which I'm risking severely bloating my map-reduce code, which (for obvious reasons) has got to be efficient.
Does anyone have experience doing something like this? How did it work out, if at all? Am I going down a bad path here?
UPDATE: A clarification of what the issue is I'm trying to overcome:
In the above code, require('../lib/userCalculations') is executed by Node -- it reads in the file ../lib/userCalculations.js and assigns the contents of that file's module.exports object to scopeObj.userCalculations. But let's say there's a call to require(...) somewhere within userCalculations.js. That call isn't actually executed yet. So, when I try to call userCalculations.overallScoreForUser() within the map function, MongoDB attempts to execute the require function. And require isn't defined in MongoDB's JavaScript environment.
Browserify, for example, deals with this by compiling all the code from all the required modules into a single JavaScript file with no require calls, so it can be run in the browser. But that doesn't exactly work here, because I need the resulting code to itself be a module that I can use like I use userCalculations in the code sample. Maybe there's a weird way to run browserify that I'm not aware of? Or some other tool that just "flattens" a whole hierarchy of modules into a single module?
Hopefully that clarifies a bit.
As a generic response, the answer to your question - How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code? - is no, you cannot safely include complex modules in Node code you plan to send to MongoDB for mapReduce jobs.
You mentioned the problem yourself: nested require statements. require is synchronous, but if the calls sit inside nested functions, they are not executed until call time, and the MongoDB VM will throw at that point.
Consider the following example of three files: data.json, dep.js and main.js.
// data.json - just something we require "lazily"; it starts out containing:
true
// dep.js -- equivalent of your userCalculations
module.exports = {
    isValueTrue() {
        // The problem: nested require
        return require('./data.json');
    }
};
// main.js - from here you send your mapReduce to MongoDB.
// require dependency instantly
const calc = require('./dep.js');
// require is synchronous; the effect is the same if you do:
// const calc = (function () { return require('./dep.js'); })();
console.log('Calc is loaded.');
// Let's mess with unwary devs
require('fs').writeFileSync('./data.json', 'false');
// Is calc.isValueTrue() true or false here?
console.log(calc.isValueTrue());
As a general solution, this is not feasible. While the vast majority of modules will likely not have nested require statements, HTTP calls, internal service calls, global variables and the like, there are those that do. You cannot guarantee that this would work.
Now, as your own local implementation (e.g. you require exactly specific versions of npm modules that you have tested well with this technique and know will work, or that you published yourself), it is somewhat feasible.
However, even in this case, if this is a team effort, there's bound to be a developer down the line who will not know where or how your dependency is used, who will use globals (not on purpose, but by omission, e.g. by wrongly calculating this), or who simply won't know the implications of what they are doing. If you have a strong integration testing suite, you could guard against this, but the thing is, it's unpredictable. Personally I think that when you can choose between unpredictable and predictable, you should almost always choose predictable.
Now, if you have an explicitly stated purpose for a certain library to be used in MongoDB mapReduce, this could work. You would have to guard well against omissions and problems, and have a strong testing infrastructure, but I would make certain the purpose is explicit before feeling safe enough to do this. And of course, if you're using something so complex that you need several npm packages to do it, maybe you can have those functions directly on the MongoDB server instead, or do your map-reducing in something better suited for the purpose.
To conclude: for a purposefully built library with an explicit mission statement that it is to be used with Node and MongoDB mapReduce, I would make sure my tests cover all mission-critical and important functionality, and then import such an npm package. Otherwise I would not use or recommend this approach.

Building AMD code for Node and browser

I have been searching around various repositories and blogs around the web and can't seem to find what I am looking for.
The Problem
When building code via AMD using something like RequireJS you can expect to set up your modules like so:
Module one:
define(['a'], function () {
    return {
        methodOne: function () {
            return 'something';
        }
    };
});
Module two:
define(['b'], function () {
    return {
        methodTwo: function () {
            return 'something';
        }
    };
});
This helps a lot with building a project. It helps separate concerns into multiple files rather than having one large one.
But how do you optimize this into one file? I would assume that you would use some sort of build tool and come out with something like:
define(function () {
    return {
        /* other dependency functions */
        methodOne: function () {
            return 'something';
        },
        methodTwo: function () {
            return 'something';
        }
    };
});
This would be for AMD environments; it essentially combines all of the modules into one file.
So I have a few questions:
How would I optimize my AMD code correctly in a build like I've mentioned above? Is this even the best approach?
How do I make my code portable for AMD, Node and the browser?
What I've Tried/Found
It looks like RequireJS has its own build tool (http://requirejs.org/docs/optimization.html) to do something like this. After going through the documentation and trying out the build tool myself, I've found that it just combines all of the defines into one file. Am I missing something here? This isn't very useful.
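For context, a minimal single-file build profile for the r.js optimizer looks something like this (the paths and module names are illustrative); it is run with node r.js -o build.js:

```javascript
// build.js - r.js build profile
({
    baseUrl: "js",          // where the source modules live
    name: "main",           // entry module; its dependency tree is traced
    out: "dist/main.js",    // everything ends up combined in this file
    optimize: "uglify"      // minify the combined output
})
```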
The UMD pattern seems to be what normalizes your code to work with AMD, Node and the browser (http://know.cujojs.com/tutorials/modules/authoring-umd-modules, https://github.com/umdjs/umd). Does this mean that every AMD file I am using I have to convert to UMD? This seems like a headache.
I have stumbled upon uRequire (http://urequire.org/) which seems like it's the answer I am looking for. Being able to build code into UMD from AMD or CommonJS pattern.
This is an issue that has been eluding me. UMD looks cool, but it seems like the hangup there is making your code available for the browser easily.
Any help with this would be much appreciated.
Regarding the RequireJS optimizer's usefulness: I have been using the r.js optimizer for over a year to create bundles of modules that have to load together (and synchronously).
This has been very helpful because I save on the number of requests, and also on the asynchronous overhead where require has to figure out the whole dependency tree, because that simply doesn't need to happen.
For instance, if you think in terms of broader modules, a.k.a. sub-applications or even features, you can restructure your app so that instead of loading 40+ RequireJS modules you load just 5 sub-apps/features, each of which arrives as one synchronous block.
Regarding UMD and uRequire, I suggest having a look also on https://github.com/systemjs/systemjs - it seems to be a clever approach. Yet uRequire seems to be very nice as well.

Approaches to modular client-side Javascript without namespace pollution

I'm writing client-side code and would like to write multiple, modular JS files that can interact while preventing global namespace pollution.
index.html
<script src="util.js"></script>
<script src="index.js"></script>
util.js
(function () {
    function helper() {
        // Performs some useful utility operation
    }
})();
index.js
(function () {
    console.log("Loaded index.js script");
    helper();
    console.log("Done with execution.");
})();
This code nicely keeps utility functions in a separate file and does not pollute the global namespace. However, the call to helper will fail, because helper exists inside a separate anonymous function scope.
One alternative approach involves placing all JS code inside one file or using a single variable in the global namespace like so:
var util_ns = {
    helper: function () {
        // Performs some useful utility operation.
    },
    etc.
}
Both these approaches have cons in terms of modularity and clean namespacing.
I'm used to working (server-side) in Node.js land where I can 'require' one Javascript file inside another, effectively injecting the util.js bindings into the index.js namespace.
I'd like to do something similar here (but client-side) that would allow code to be written in separate modular files while not creating any variables in the global namespace while allowing access to other modules (i.e. like a utility module).
Is this doable in a simple way (without libraries, etc)?
If not, in the realm of making client-side JS behave more like Node and npm, I'm aware of the existence of RequireJS, Browserify, AMD, and CommonJS standardization attempts. However, I'm not sure of the pros and cons and actual usage of each.
I would strongly recommend you to go ahead with RequireJS.
The module pattern approach (without requires/dependencies):
// moduleA.js
var MyApplication = (function (app) {
    app.util = app.util || {};
    app.util.hypotenuse = function (a, b) {
        return Math.sqrt(a * a + b * b);
    };
    return app;
})(MyApplication || {});

// ----------
// moduleB.js
var MyApplication = (function (app) {
    app.util = app.util || {};
    app.util.area = function (a, b) {
        return a * b / 2;
    };
    return app;
})(MyApplication || {});

// ----------
// index.js - here you have to include both moduleA and moduleB manually
// or write some loader
var a = 3,
    b = 4;
console.log('Hypotenuse: ', MyApplication.util.hypotenuse(a, b));
console.log('Area: ', MyApplication.util.area(a, b));
Here you're creating only one global variable (namespace) MyApplication, all other stuff is "nested" into it.
Fiddle - http://jsfiddle.net/f0t0n/hmbb7/
One more approach that I used earlier in my projects: https://gist.github.com/4133310
But anyway, I threw out all that stuff when I started to use RequireJS.
You should check out browserify, which will process a modular JavaScript project into a single file. You can use require in it as you do in node.
It even gives a bunch of the node.js libs like url, http and crypto.
ADDITION: In my opinion, the pro of browserify is that it is simple to use and requires no code of your own - you can even use your already-written Node.js code with it. There's no boilerplate code or code change necessary at all, and it's as CommonJS-compliant as Node.js is. It outputs a single .js file that allows you to use require in your website code, too.
There are a few cons to this, IMHO: first, two files that were compiled by browserify can override each other's require functions if they are included in the same page, so you have to be careful there. Second, you have to run browserify every time you make a change to the code. And of course, the module system code is always part of your compiled file.
I strongly suggest you try a build tool.
Build tools will allow you to have different files (even in different folders) when developing, and concatenate them at the end for debugging, testing or production. Even better, you won't need to add a library to your project: the build tool resides in different files and is not included in your release version.
I use GruntJS, and basically it works like this. Suppose you have your util.js and index.js (which needs the helper object to be defined), both inside a js directory. You can develop both separately, and then concatenate both to an app.js file in the dist directory that will be loaded by your html. In Grunt you can specify something like:
concat: {
    app: {
        src: ['js/util.js', 'js/index.js'],
        dest: 'dist/app.js'
    }
}
This will automatically create the concatenation of the files. Additionally, you can minify them, lint them, and apply any other process you want to them. You can also have them in completely different directories and still end up with one file packaged with your code in the right order. You can even trigger the process every time you save a file, to save time.
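The save-triggered rebuild mentioned above would typically use the grunt-contrib-watch plugin; a hedged sketch of the config (the file globs are illustrative):

```javascript
watch: {
    scripts: {
        files: ['js/**/*.js'],  // any change to a source file...
        tasks: ['concat']       // ...re-runs the concat task
    }
}
```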
At the end, from HTML, you would only have to reference one file:
<script src="dist/app.js"></script>
Adding a file that resides in a different directory is very easy:
concat: {
    app: {
        src: ['js/util.js', 'js/index.js', 'js/helpers/date/whatever.js'],
        dest: 'dist/app.js'
    }
}
And your html will still only reference one file.
Some other available tools that do the same are Brunch and Yeoman.
-------- EDIT -----------
RequireJS (and some alternatives, such as Head JS) is a very popular AMD (Asynchronous Module Definition) loader which allows you to simply specify dependencies. A build tool (e.g. Grunt), on the other hand, allows managing files and adding more functionality without relying on an external library. On some occasions you can even use both.
I think having the file dependencies / directory issues / build process separated from your code is the way to go. With build tools you have a clear view of your code and a completely separate place where you specify what to do with the files. It also provides a very scalable architecture, because it can work through structure changes or future needs (such as including LESS or CoffeeScript files).
One last point, having a single file in production also means less HTTP overhead. Remember that minimizing the number of calls to the server is important. Having multiple files is very inefficient.
Finally, this is a great article on AMD tools vs build tools, worth a read.
So called "global namespace pollution" is greatly over rated as an issue. I don't know about node.js, but in a typical DOM, there are hundreds of global variables by default. Name duplication is rarely an issue where names are chosen judiciously. Adding a few using script will not make the slightest difference. Using a pattern like:
var mySpecialIdentifier = mySpecialIdentifier || {};
means adding a single variable that can be the root of all your code. You can then add modules to your heart's content, e.g.
mySpecialIdentifier.dom = {
    /* add dom methods */
};
(function (global, undefined) {
    if (!global.mySpecialIdentifier) global.mySpecialIdentifier = {};
    /* add methods that require feature testing */
}(this));
And so on.
You can also use an "extend" function that does the testing and adding of base objects so you don't replicate that code and can add methods to base library objects easily from different files. Your library documentation should tell you if you are replicating names or functionality before it becomes an issue (and testing should tell you too).
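A minimal sketch of such an "extend" function, assuming the single-global pattern above (the clash check here throws, but it could just log a warning; the dom module is an invented example):

```javascript
var mySpecialIdentifier = mySpecialIdentifier || {};

// Adds a named module to the namespace, refusing to silently
// overwrite an existing name.
function extendNamespace(ns, name, members) {
    if (Object.prototype.hasOwnProperty.call(ns, name)) {
        throw new Error("Name already taken: " + name);
    }
    ns[name] = members;
    return ns[name];
}

// Usage, e.g. in dom.js:
extendNamespace(mySpecialIdentifier, "dom", {
    byId: function (id) {
        /* e.g. return document.getElementById(id); */
    }
});
```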
Your entire library can use a single global variable and can be easily extended or trimmed as you see fit. Finally, you aren't dependent on any third party code to solve a fairly trivial issue.
You can do it like this:
-- main.js --
var my_ns = {};
-- util.js --
my_ns.util = {
    map: function () {}
    // .. etc
};
-- index.js --
my_ns.index = {
    // ..
};
This way you occupy only one variable.
One way of solving this is to have your components talk to each other using a "message bus". A message (or event) consists of a category and a payload. Components can subscribe to messages of a certain category and can publish messages. This is quite easy to implement, but there are also some out-of-the-box solutions out there. While this is a neat solution, it also has a great impact on the architecture of your application.
Here is an example implementation: http://pastebin.com/2KE25Par
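The core of such a message bus fits in a few lines. This sketch is just an illustration, not the linked implementation:

```javascript
// A minimal publish/subscribe message bus.
var bus = (function () {
    var subscribers = {}; // category -> array of handler functions
    return {
        subscribe: function (category, handler) {
            (subscribers[category] = subscribers[category] || []).push(handler);
        },
        publish: function (category, payload) {
            (subscribers[category] || []).forEach(function (handler) {
                handler(payload);
            });
        }
    };
}());

// Components never reference each other directly:
bus.subscribe("user:login", function (payload) {
    console.log("Welcome, " + payload.name);
});
bus.publish("user:login", { name: "alice" });
```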
http://brunch.io/ should be one of the simplest ways if you want to write node-like modular code in your browser without async AMD hell. With it, you’re also able to require() your templates etc, not just JS files.
There are a lot of skeletons (base applications) which you can use with it and it’s quite mature.
Check the example application https://github.com/paulmillr/ostio to see some structure. As you may notice, it’s written in coffeescript, but if you want to write in js, you can — brunch doesn’t care about langs.
I think what you want is https://github.com/component/component.
It's synchronous CommonJS just like Node.js, it has much less overhead, and it's written by visionmedia, who wrote Connect and Express.

How to manage multiple JS files server-side with Node.js

I'm working on a project with Node.js, and the server-side code is becoming large enough that I would like to split it into multiple files. It appears this has been done client-side for ages: development is done by inserting a script tag for each file, and only for distribution is something like Make used to put everything together. I realize there's no point in concatenating all the server-side code, so I'm not asking how to do that. The closest thing I can find is require(); however, it doesn't behave quite like script does in the browser, in that require'd files do not share a common namespace.
Looking at some older Node.js projects, like Shooter, it appears this was once not the case, that or I'm missing something really simple in my code. My require'd files cannot access the global calling namespace at compile time or at run time. Is there any simple way around this, or are we forced to make all our require'd JS files completely autonomous from the calling scope?
You do not want a common namespace, because globals are evil. In Node we define modules:
// someThings.js
(function () {
    var someThings = ...;
    ...
    module.exports.getSomeThings = function () {
        return someThings();
    };
}());
// main.js
var things = require("someThings");
...
doSomething(things.getSomeThings());
You define a module and then expose a public API for your module by writing to exports.
The best way to handle this is dependency injection. Your module exposes an init function and you pass an object hash of dependencies into your module.
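A hedged sketch of that init-function style (all names here are invented for illustration): the module exposes a factory, and the caller passes in everything the module needs.

```javascript
// someThings.js would export a factory instead of a finished object:
function createSomeThings(deps) {
    // deps is an object hash of the module's dependencies
    return {
        getSomeThings: function () {
            return deps.source().map(deps.transform);
        }
    };
}

// main.js wires the dependencies in explicitly:
var things = createSomeThings({
    source: function () { return [1, 2, 3]; },
    transform: function (n) { return n * 2; }
});
```

Because the dependencies arrive as arguments rather than through a shared namespace, tests can hand the module fakes without any global state.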
If you really insist on accessing global scope then you can access that through global. Every file can write and read to the global object. Again you do not want to use globals.
Re @Raynos's answer: if the module file is next to the file that includes it, it should be
var things = require("./someThings");
If the module is published on, and installed through, npm, or explicitly put into the ./node_modules/ folder, then the
var things = require("someThings");
is correct.
