Declaring libraries as constants - javascript

In the Node.js world we require modules using the require function:
var foo = require("foo");
In JavaScript (and therefore in Node.js) we have the const keyword, which creates a constant:
const
Creates a constant that can be global or local to the function in which it is declared. Constants follow the same scope rules as variables.
Example:
$ node
> const a = 10
undefined
> a
10
> a = 7
7
> a
10
My question is: would it be good to require libraries as constants?
Example:
const foo = require("foo"),
      http = require("http");
/* do something with foo and http */
Are there any good or bad effects of using const instead of var when requiring libraries?

It turns out it has become common practice to use const over var for dependencies; at least, this is what happens in the Node.js source code itself:
http.js
So, I guess it's a good practice. I started to use it as well in my modules.
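For illustration, a minimal sketch (the module name "foo" is just a placeholder); note that in strict mode or in modern Node versions, reassigning a const binding throws instead of failing silently as in the REPL transcript above:
'use strict';
const foo = require('foo'); // "foo" is a placeholder module name

// foo = require('bar'); // TypeError: Assignment to constant variable.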

Node.js has no special optimisation for libraries required into a const - require is a plain, non-native function, and nothing is known about the type of the variable it is assigned to. Here is the source code of require:
Module.prototype.require = function(path) {
  assert(util.isString(path), 'path must be a string');
  assert(path, 'missing path');
  return Module._load(path, this);
};

Module._load = function(request, parent, isMain) {
  if (parent) {
    debug('Module._load REQUEST ' + (request) + ' parent: ' + parent.id);
  }

  var filename = Module._resolveFilename(request, parent);

  var cachedModule = Module._cache[filename];
  if (cachedModule) {
    return cachedModule.exports;
  }

  if (NativeModule.exists(filename)) {
    // REPL is a special case, because it needs the real require.
    if (filename == 'repl') {
      var replModule = new Module('repl');
      replModule._compile(NativeModule.getSource('repl'), 'repl.js');
      NativeModule._cache.repl = replModule;
      return replModule.exports;
    }

    debug('load native module ' + request);
    return NativeModule.require(filename);
  }

  var module = new Module(filename, parent);

  if (isMain) {
    process.mainModule = module;
    module.id = '.';
  }

  Module._cache[filename] = module;

  var hadException = true;

  try {
    module.load(filename);
    hadException = false;
  } finally {
    if (hadException) {
      delete Module._cache[filename];
    }
  }

  return module.exports;
};
Most of the time a library is an object (you rarely see a library that is just module.exports = 10).
You can still change all the fields of such an object even if it is declared as const (if you want a really constant object, use Object.freeze(someObject)).
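For example, a minimal sketch (the ./config module and its port field are made up for illustration):
const config = require('./config'); // suppose module.exports = { port: 80 }

config.port = 8080;   // allowed: const protects the binding, not the object
// config = {};       // TypeError: Assignment to constant variable.

Object.freeze(config);
config.port = 3000;   // now silently ignored (a TypeError in strict mode)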
In conclusion: the effect is the same as for an ordinary variable.
Link to V8 Variable Declaration Function used in NodeJS

Related

Why we need to pass module.exports as a parameter since we are already passing module as a parameter?

I have been going through some online tutorials on Node.js. What I understood is that, when you call require('./file-path'), Node gets the content of that file and wraps it inside an immediately invoked function:
(function(exports, require, module, __filename, __dirname) {
// content
}())
I understand the difference between exports and module.exports. That's all I can find on the internet when searching the question above. But my question is: why do we need to pass both module.exports and module to the wrapping IIFE? We could have passed module alone and read module.exports from it. Is there any advantage in doing it like this? Normally when we pass an object to a function, we don't also pass object.property separately.
The answer is: historical reasons.
You're right: module alone would suffice and exports would not be needed, but it's still there for backwards compatibility.
There was a time when the module wrapper changed in pretty much every patch release.
In Node 0.1.11 the module wrapper was:
var wrapper = "function (__filename) { "+
" var onLoad; "+
" var onExit; "+
" var exports = this; "+
content+
"\n"+
" this.__onLoad = onLoad;\n"+
" this.__onExit = onExit;\n"+
"};\n";
See: https://github.com/nodejs/node/blob/v0.1.11/src/node.js#L167#L177
As you can see, exports was the same as the this that the wrapper function was called with. You couldn't swap it for a new object, and you couldn't even safely use certain reserved keys - e.g. you couldn't safely export a property named __onExit.
Then in 0.1.12 it was:
var wrapper = "function (__filename, exports) { " + content + "\n};";
See: https://github.com/nodejs/node/blob/v0.1.12/src/node.js#L243-L245
Here the exports was an object supplied as one of the arguments but you couldn't swap it with a new object, you could only add or remove properties from the object that you got.
Then 0.1.13 was the first to have require and include:
var wrapper = "function (__filename, exports, require, include) { " + content + "\n};";
See: https://github.com/nodejs/node/blob/v0.1.13/src/node.js#L225-L227
Then 0.1.14 was the first to have __module (with underscores) in the wrapper (and that dropped include):
var wrapper = "var __wrap__ = function (__module, __filename, exports, require) { "
+ content
+ "\n}; __wrap__;";
See: https://github.com/nodejs/node/blob/v0.1.14/src/node.js#L280-L284
And the 0.1.16 was the first to have a module argument (with no underscores) in the wrapper:
var wrapper = "var __wrap__ = function (exports, require, module, __filename) { "
+ content
+ "\n}; __wrap__;";
See: https://github.com/nodejs/node/blob/v0.1.16/src/node.js#L444-L448
It has been changed many times since, but this is when module was introduced, making exports no longer strictly necessary, yet still a useful shortcut, allowing you to write:
exports.a = 1;
exports.b = 2;
exports.c = 3;
instead of:
module.exports.a = 1;
module.exports.b = 2;
module.exports.c = 3;
though in practice if there was no exports then one would usually write:
const exports = module.exports;
exports.a = 1;
exports.b = 2;
exports.c = 3;
or more likely:
module.exports = {
a: 1,
b: 2,
c: 3,
};
or, to have some checks in static analysis tools:
const a = 1;
const b = 2;
const c = 3;
module.exports = { a, b, c };
There are many ways to do it, it's a pretty flexible mechanism.
Originally it was only exports and require.
Later on, module was added in a backwards-compatible manner, to allow (among other things) overriding the exports object completely.
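To illustrate why module is worth passing at all: reassigning the local exports binding exports nothing, whereas assigning to module.exports replaces the exported value. A minimal sketch:
// broken.js
exports = { a: 1 };        // only rebinds the local parameter; require('./broken') still returns {}

// working.js
module.exports = { a: 1 }; // replaces the exported object; require('./working') returns { a: 1 }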

Javascript: How to require using const

There's one thing I don't understand about modern JavaScript. I see a lot of people discussing whether they should use var, const, or let when requiring new modules. Most people say const is their first priority and let their second, but I don't see many people who are fans of var. However, the code below throws error TS2451: Cannot redeclare block-scoped variable 'other'. (Note: this error comes from the TypeScript compiler, using the commonjs flag.)
main.js
'use strict';
const A = require('./A.js');
const B = require('./B.js');
// do more stuff
A.js
'use strict';
const other = require('./other.js');
class A {
//...
};
module.exports = A;
B.js
'use strict';
const other = require('./other.js');
class B {
//...
};
module.exports = B;
I'm not sure in which cases using const is error-free. It seems to only work when a module is imported in the main module using const, and every other module uses var to import the same module. I'd like to know if I'm missing something. Thanks.
EDIT:
This is the code of one of my modules. When I change the vars at the top to const, the error appears. I've also declared the same imports in other modules that are interconnected.
var Datastore = require('nedb');
var validate = require("validate.js");
var path = require('path');

module.exports = class Planet {
  private db_filename : string = "db/planets.db";
  private ds;
  private static instance : Planet = null;

  private constructor() {
  }

  init(db_filename : string) : Planet {
    this.ds = new Datastore({ filename: db_filename, autoload: true, timestampData: true });
    return this;
  }

  static get_instance() : Planet {
    if (this.instance == null)
      this.instance = new Planet();
    return this.instance;
  }
}
Generally speaking: you can redeclare a variable defined with var, but you cannot with const/let-defined variables. You should prefer const because it throws errors (as you see) if you accidentally redeclare a variable. If you need to reassign the variable later on, step down to let.
// works
var a = 1;
var a = 2;

// error (because var a is defined above)
let a = 1;

let b = 1;
// error (because let b is defined above)
let b = 2;
// error
const b = 1;

// error
const a = 1;

const c = 1;
// error
const c = 2;
// error
c = 2;
I do not know why your typescript-compiler throws an error. Testing this with plain node.js it works perfectly fine.
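For what it's worth, in plain Node each file gets its own module scope (it is wrapped in a function, as shown in the question above about module.exports), so the same const name declared in two different files never clashes. A minimal sketch:
// a.js
const other = require('./other.js'); // binding local to a.js
module.exports = class A {};

// b.js
const other = require('./other.js'); // a separate binding, local to b.js
module.exports = class B {};

// main.js
const A = require('./a.js');
const B = require('./b.js'); // no redeclaration error: each file is its own scope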

JavaScript Dependency Injection

I am new at JavaScript. I wonder how dependency injection is being implemented in JavaScript? I searched the internet but couldn't find anything.
var Injector = {

  dependencies: {},

  add: function(qualifier, obj) {
    this.dependencies[qualifier] = obj;
  },

  get: function(func) {
    var obj = new func;
    var dependencies = this.resolveDependencies(func);
    func.apply(obj, dependencies);
    return obj;
  },

  resolveDependencies: function(func) {
    var args = this.getArguments(func);
    var dependencies = [];
    for (var i = 0; i < args.length; i++) {
      dependencies.push(this.dependencies[args[i]]);
    }
    return dependencies;
  },

  getArguments: function(func) {
    // This regex is from require.js
    var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
    var args = func.toString().match(FN_ARGS)[1].split(',');
    return args;
  }
};
The first thing we need is a configuration that provides the necessary dependencies, keyed by qualifier. To do that, we define a dependency set called dependencies in the Injector object; it acts as our container, holding object instances mapped to qualifiers. To add a new instance under a qualifier, we define an add method. We then define a get method to retrieve an instance: it first reads the constructor's argument names, maps those arguments to dependencies, constructs the object with them, and returns it. For more information and examples, please see the post on my blog.
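A hedged usage sketch of the Injector above (EmailService and UserController are made-up example names):
function EmailService() {
  this.send = function(to, text) { console.log('to ' + to + ': ' + text); };
}

function UserController(emailService) {   // the argument name acts as the qualifier
  this.notify = function(userId) { emailService.send(userId, 'hello'); };
}

Injector.add('emailService', new EmailService());

var controller = Injector.get(UserController); // dependency resolved by argument name
controller.notify(42);                         // to 42: hello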
Let's learn it by doing a super simple real-world example :)
The example class discussed here is a Printer that needs a driver to print something. I have demonstrated the advantages of the dependency injection design pattern in 4 steps, arriving at the best solution in the end.
Case 1: no dependency injection used:
class Printer {
  constructor() {
    this.lcd = '';
  }

  /* umm! Not so flexible! */
  print(text) {
    this.lcd = 'printing...';
    console.log(`This printer prints ${text}!`);
  }
}

// Usage:
var printer = new Printer();
printer.print('hello');
Usage is simple: it is easy to make a new printer this way, but this printer is not flexible.
Case 2: abstract the functionalities inside the print method into a new class called Driver:
class Printer {
  constructor() {
    this.lcd = '';
    this.driver = new Driver();
  }

  print(text) {
    this.lcd = 'printing...';
    this.driver.driverPrint(text);
  }
}

class Driver {
  driverPrint(text) {
    console.log(`I will print the ${text}`);
  }
}

// Usage:
var printer = new Printer();
printer.print('hello');
So our Printer class is now more modular, cleaner, and easier to understand, but it is still not flexible. Any time you use the new keyword you are effectively hard-coding something. In this case you are constructing a driver inside your Printer, which in the real world would be a printer that comes with a built-in driver that can never change!
Case 3: inject an already made driver into your printer
A better version is to inject a driver at the time we construct a printer. This means you can make any type of printer, color or black & white, because this time the driver is made in isolation, outside the Printer class, and then given (INJECTED!) into the Printer.
class Printer {
  constructor(driver) {
    this.lcd = '';
    this.driver = driver;
  }

  print(text) {
    this.lcd = 'printing...';
    this.driver.driverPrint(text);
  }
}

class BWDriver {
  driverPrint(text) {
    console.log(`I will print the ${text} in Black and White.`);
  }
}

class ColorDriver {
  driverPrint(text) {
    console.log(`I will print the ${text} in color.`);
  }
}

// Usage:
var bwDriver = new BWDriver();
var printer = new Printer(bwDriver);
printer.print('hello'); // I will print the hello in Black and White.
Usage is now different: as a user, in order to have a printer you first construct a driver (of your choice!) and then pass that driver to your printer. It may seem that the end user now needs to know a bit more about the system, but this structure gives them more flexibility. Users can pass ANY driver, as long as it is valid. For example, say we have a BWDriver (black & white) type of driver; the user can create a new driver of this type and use it to make a new printer that prints in black and white.
So far so good! But what do you think we could do better? What still has room for improvement here? I am sure you can see it too!
We are creating a new printer each time we need to print with a different driver! That is because we are passing our driver of choice to the Printer class at construction time; if the user wants to use another driver, they need to create a new Printer with that driver. For example, if I now want to do a color print I need to do:
var cDriver = new ColorDriver();
var printer = new Printer(cDriver); // Yes! This line here is the problem!
printer.print('hello'); // I will print the hello in color.
Case 4: provide a setter function to set the driver of your printer at ANY TIME!
class Printer {
  constructor() {
    this.lcd = '';
  }

  setDriver(driver) {
    this.driver = driver;
  }

  print(text) {
    this.lcd = 'printing...';
    this.driver.driverPrint(text);
  }
}

class BWDriver {
  driverPrint(text) {
    console.log(`I will print the ${text} in Black and White.`);
  }
}

class ColorDriver {
  driverPrint(text) {
    console.log(`I will print the ${text} in color.`);
  }
}

// Usage:
var bwDriver = new BWDriver();
var cDriver = new ColorDriver();

var printer = new Printer(); // I am happy to see this line only ONCE!

printer.setDriver(bwDriver);
printer.print('hello'); // I will print the hello in Black and White.

printer.setDriver(cDriver);
printer.print('hello'); // I will print the hello in color.
Dependency Injection is not a really difficult concept to understand. The term may be a bit overloaded but once you have realised its purpose you will find yourself using it most of the time.
You can use AngularJS as an example. Whether it is a good thing, you have to decide for yourself. A week ago I wrote an article about demystifying dependency injection in AngularJS. Here is the code from the article:
// The following simplified code is partly taken from the AngularJS source code:
// https://github.com/angular/angular.js/blob/master/src/auto/injector.js#L63

function inject(fn, variablesToInject) {
  var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
  var FN_ARG_SPLIT = /,/;
  var FN_ARG = /^\s*(_?)(\S+?)\1\s*$/;
  var STRIP_COMMENTS = /((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg;

  if (typeof fn === 'function' && fn.length) {
    var fnText = fn.toString(); // getting the source code of the function
    fnText = fnText.replace(STRIP_COMMENTS, ''); // stripping comments like function(/*string*/ a) {}

    var matches = fnText.match(FN_ARGS); // finding arguments
    var argNames = matches[1].split(FN_ARG_SPLIT); // finding each argument name

    var newArgs = [];
    for (var i = 0, l = argNames.length; i < l; i++) {
      var argName = argNames[i].trim();

      if (!variablesToInject.hasOwnProperty(argName)) {
        // the argument cannot be injected
        throw new Error("Unknown argument: '" + argName + "'. This cannot be injected.");
      }

      newArgs.push(variablesToInject[argName]);
    }

    fn.apply(window, newArgs);
  }
}
function sum(x, y) {
  console.log(x + y);
}

inject(sum, {
  x: 5,
  y: 6
}); // should print 11

inject(sum, {
  x: 13,
  y: 45
}); // should print 58

inject(sum, {
  x: 33,
  z: 1 // we are missing 'y'
}); // should throw an error: Unknown argument: 'y'. This cannot be injected.
For me, yusufaytas's answer was exactly what I needed! The only missing features were:
Getting a dependency with custom parameters.
Registering dependencies using callbacks.
I wanted to have the ability to do something like this:
Injector.register('someDependency', function () {
  return new ConcreteDependency();
});

function SomeViewModel(userId, someDependency) {
  this.userId = userId;
  this.someDependency = someDependency;
}

var myVm = Injector.get(SomeViewModel, { "userId": "1234" });
So I ended up with the following code:
var Injector = {

  factories: {},
  singletons: {},

  register: function (key, factory) {
    this.factories[key] = factory;
  },

  registerSingle: function (key, instance) {
    this.singletons[key] = instance;
  },

  get: function (CTor, params) {
    var dependencies = this.resolveDependencies(CTor, params);

    // a workaround to allow calling a constructor through .apply
    // see https://stackoverflow.com/questions/1606797/use-of-apply-with-new-operator-is-this-possible
    function MiddlemanCTor() {
      CTor.apply(this, dependencies);
    }

    MiddlemanCTor.prototype = CTor.prototype;
    return new MiddlemanCTor();
  },

  resolveDependencies: function (CTor, params) {
    params = params || {};

    var args = this.getArguments(CTor);
    var dependencies = [];

    for (var i = 0; i < args.length; i++) {
      var paramName = args[i];
      var factory = this.factories[paramName];

      // resolve dependency using:
      // 1. parameters supplied by caller
      // 2. registered factories
      // 3. registered singletons
      var dependency = params[paramName] ||
        (typeof factory === "function" ? factory() : undefined) ||
        this.singletons[paramName];

      dependencies.push(dependency);
    }

    return dependencies;
  },

  getArguments: function (func) {
    // Regex from require.js
    var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
    var args = func.toString().match(FN_ARGS)[1].split(',').map(function (str) {
      return str.trim();
    });
    return args;
  }
};
Update - 21.5.2018
I've been using this solution for a few years now. As I moved my code base to TypeScript, the solution evolved with it to support both TypeScript and JavaScript. After the code had been running in production for quite a while, I recently (two days ago) published a library based on this solution. Feel free to check it out, open issues, etc.
peppermint-di
Take a look at Flyspeck: https://gist.github.com/elfet/11349215
var c = new Flyspeck();

c.set('name', 'GistHub');
c.set('config', {
  server: 'https://gist.github.com'
});

c.set('user', function (c) {
  return new User(c.get('name'));
});

c.extend('user', function (user, c) {
  return new ProxyUser(user);
});

c.set('app', function (c) {
  return new Application(c.get('config'), c.get('user'));
});

var app = c.get('app');
I'd say DI is an out-of-the-box feature of JS/ES2015. :-) Of course, it is not a full-featured IoC container, but it looks useful, doesn't it? Check out the example below!
const one = () => 1;
const two = ({one}) => one + one;
const three = ({one, two}) => one + two;

// IoC container
const decimalNumbers = {
  get one() { return one(this); },
  get two() { return two(this); },
  get three() { return three(this); }
};

const binaryTwo = ({one}) => one + 9;

// child IoC container
const binaryNumbers = Object.create(decimalNumbers, {
  two: { get() { return binaryTwo(this); } }
});

console.log(`${decimalNumbers.three} is ${binaryNumbers.three} in binary`);
You can wrap dependencies in _.once (see underscore or lodash) to turn them into singletons.
const rand = function() {
  return (min, max) => min + Math.random() * (max - min) | 0;
};

const pair = function({rand} = this) {
  return [rand(10, 100), rand(100, 1000)];
};

// IoC container
const ioc = Object.create({}, {
  rand: {get: rand},
  pair: {get: _.once(pair)} // singleton
});

console.log(`${[ioc.pair, ioc.pair === ioc.pair]}`);
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.8.3/underscore-min.js"></script>
I coded my own JavaScript dependency injection framework called Di-Ninja:
https://github.com/di-ninja/di-ninja
It is full-featured and, as far as I know, currently the only one in JavaScript that implements the Composition Root design pattern, helping you keep everything decoupled and wire your application components and config in one unique root place:
http://blog.ploeh.dk/2011/07/28/CompositionRoot/
It works well with NodeJS and Webpack.
Any feedback would be appreciated.
candiJS is a lightweight implicit dependency injection and object creation library. Have a look.
Example:
candi.provider.singleton('ajax', function() {
  return {
    get: function() { /* some code */ },
    put: function() { /* some code */ }
  };
});

candi.provider.singleton('carService', function(ajax) {
  return {
    getSpecs: function(manufacturer, year, model, trim) {
      return ajax.get();
    }
  };
});

var Car = candi.provider.instance('Car', function(carService, year, manufacturer, model, trim) {
  this.year = year;
  this.manufacturer = manufacturer;
  this.model = model;
  this.trim = trim;
  this.specs = carService.getSpecs(manufacturer, year, model, trim);
});

var car = new Car(2009, 'honda', 'accord', 'lx');
Injecting is a lightweight yet powerful DI container; it handles promise injection well.
Injecting Homepage
The source code is only a bit over 100 lines.
See its test cases for examples.
bubble-di is a lightweight DI container for Javascript and Typescript.
It enables you to register factory methods (callbacks) or instances. Below is a simple example (more examples).
npm install --save bubble-di
var {DiContainer} = require("bubble-di");
// import { DiContainer } from "bubble-di";

DiContainer.setContainer(new DiContainer());

class Bar { sayBar(){ console.log("bar"); } }
class Baz { sayBaz(){ console.log("baz"); } }
class Foo {
  constructor (bar, baz) {
    bar.sayBar();
    baz.sayBaz();
    // ...
  }
};

DiContainer.getContainer().registerInstance("bar", new Bar());
DiContainer.getContainer().registerInstance("baz", new Baz());
DiContainer.getContainer().register("foo", {
  dependencies: ["bar", "baz"],
  factoryMethod: (bar, baz) => new Foo(bar, baz),
});

const foo = DiContainer.getContainer().resolve("foo"); // will print "bar" and "baz".
I am new at JavaScript. I wonder how dependency injection is being implemented in JavaScript? I searched the internet but couldn't find anything.
To be completely honest after working with JavaScript (mainly on the server side) and the whole ecosystem for a few years, I get the feeling that dependency injection (not to mention containers) hasn't really made it into a regular JS programmer's toolbox. That's probably the reason why there's not much information about it out there (it's getting better though).
As opposed to a language such as Java, you cannot rely on static types in JavaScript. This fact alone rules out the traditional way of declaring dependencies via interfaces. You can of course add types to JS (see Flow) but these get elided before the code gets executed. The same applies for TypeScript but I believe there's a way to preserve the types as non enforced metadata. Further, JavaScript doesn't support annotations (although there's a proposal for it).
People have been getting around the limitations in various ways. Some containers parse the function/class definition (as in they call .toString() on the passed function/class and parse the resulting string) and look for dependencies based on the names, some require functions/classes to provide a property/static method to get the list of dependencies.
I've been working myself on a container called Ashley, which simply asks for the dependencies as part of the binding process. No further inspection required.
container.instance('Client', Client, ['DependencyA', 'DependencyB']);
container.instance('DependencyA', DependencyA, ['DependencyC']);
container.instance('DependencyB', DependencyB, ['DependencyC']);
container.instance('DependencyC', DependencyC, [], {
  scope: 'Prototype', // Defaults to Singleton
  initialize: true,
  deinitialize: true
});

const client = await container.resolve('Client');
More examples on GitHub.
Even though this is an old question I feel the urge. ;)
// dependency injection
class Thing1 {
  constructor(aThing) {
    this.otherThing = aThing;
  }
}

class Thing2 {}

const thing = new Thing1(new Thing2())

// dependency inversion
class Thing1 {
  constructor({
    read = null
  } = {}) {
    if (typeof read !== 'function') {
      // establish a simple contract
      throw new TypeError(read + ' is not a function.');
    }
    this._read = read;
    // Somewhere an instance of Thing1()
    // will call this._read()
  }
}

class Thing2 {
  read() {
    // read something
  }
}

const thing2 = new Thing2();

const thing1 = new Thing1({
  read() {
    // Here is the equivalent to the so-called "interface"
    return thing2.read();
  }
});

Node.js double call to require()

// lib.js
var opt = 0

exports.set = function(arg) {
  opt = arg
}

exports.prn = function() {
  console.log(opt)
}

// prog.js
var lib = require('./lib')
var lib2 = require('./lib')

lib.set(222)
lib2.set(333)

lib.prn()
lib2.prn()
prog.js will output:
333
333
but I need it to output:
222
333
In other words, opt must be unique to variable lib and to variable lib2. How do I achieve that?
That's because Node.js normally caches the modules it loads via require. You may use the following helper:
// RequireUncached.js
module.exports = function(module) {
  delete require.cache[require.resolve(module)]
  return require(module);
}
and the usage of the helper:
var requireUncached = require('RequireUncached.js');
requireUncached("./lib");
Bear in mind that this approach is considered bad practice and should not be used. I'd suggest instead wrapping your logic in a function, requiring the module, and calling that function, so that you get a new instance every time.
require will not load scripts multiple times, but always yield the same instance.
If you need different environments, make your module a constructor function that can be instantiated multiple times. Store opt on each instance instead of in the (global) module scope.
// lib.js
module.exports = function constr() {
  var opt = 0
  this.set = function(arg) {
    opt = arg
  };
  this.print = function() {
    console.log(opt)
  };
};

// prog.js
var lib = require('./lib'),
    inst1 = new lib(),
    inst2 = new lib();
/* or short:
   var inst1 = new require('./lib')(),
       inst2 = new require('./lib')(); */

inst1.set(222)
inst2.set(333)

inst1.print()
inst2.print()
The way the NodeJS module system works, the output is correct and your expectations contradict the design principle here.
Each module is loaded once and only once, and subsequent calls to require simply return the reference to the pre-existing module.
Maybe what you need to do is create a class you can create one or more instances of instead of using module-level globals.
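A minimal sketch of that idea, keeping the question's set/prn API:
// lib.js
class Lib {
  constructor() {
    this.opt = 0;
  }
  set(arg) {
    this.opt = arg;
  }
  prn() {
    console.log(this.opt);
  }
}
module.exports = Lib;

// prog.js
const Lib = require('./lib');
const lib = new Lib();
const lib2 = new Lib();

lib.set(222);
lib2.set(333);
lib.prn();  // 222
lib2.prn(); // 333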
Adding to Bergi's answer, you may also try it like this:
// prog.js
var lib = require('./lib')(),
    lib2 = require('./lib')();

lib.set(222)
lib2.set(333)

lib.print()
lib2.print()

// lib.js
module.exports = function constr() {
  var opt = 0
  return {
    set: function(arg) {
      opt = arg
    },
    print: function() {
      console.log(opt)
    }
  }
};
Add this line as the first line of your lib.js:
delete require.cache[__filename]
Now your module becomes a separate instance each time you require it.
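Applied to the question's lib.js, a sketch of how that looks:
// lib.js
delete require.cache[__filename] // evict this module from the cache on every load

var opt = 0
exports.set = function(arg) { opt = arg }
exports.prn = function() { console.log(opt) }

// prog.js stays exactly as in the question and now prints:
// 222
// 333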

Share variables between files in Node.js?

Here are 2 files:
// main.js
require('./module');
console.log(name); // prints "foobar"
// module.js
name = "foobar";
When I don't have "var" it works. But when I have:
// module.js
var name = "foobar";
name will be undefined in main.js.
I have heard that global variables are bad and you better use "var" before the references. But is this a case where global variables are good?
Global variables are almost never a good thing (maybe an exception or two out there...). In this case, it looks like you really just want to export your "name" variable. E.g.,
// module.js
var name = "foobar";
// export it
exports.name = name;
Then, in main.js...
//main.js
// get a reference to your required module
var myModule = require('./module');
// name is a member of myModule due to the export above
var name = myModule.name;
I'm unable to find a scenario where a global var is the best option. Of course you can have one, but take a look at these examples and you may find a better way to accomplish the same thing:
Scenario 1: Put the stuff in config files
You need some value that is the same across the application but changes depending on the environment (production, dev, or test), the mailer type for example. You'd need:
// File: config/environments/production.json
{
  "mailerType": "SMTP",
  "mailerConfig": {
    "service": "Gmail",
    ....
  }
}
and
// File: config/environments/test.json
{
  "mailerType": "Stub",
  "mailerConfig": {
    "error": false
  }
}
(make a similar config for dev too)
To decide which config will be loaded, make a main config file (this will be used all over the application):
// File: config/config.js
var _ = require('underscore');

module.exports = _.extend(
  require(__dirname + '/../config/environments/' + process.env.NODE_ENV + '.json') || {});
And now you can get the data like this:
// File: server.js
...
var config = require('./config/config');
...
mailer.setTransport(nodemailer.createTransport(config.mailerType, config.mailerConfig));
Scenario 2: Use a constants file
// File: constants.js
module.exports = {
  appName: 'My neat app',
  currentAPIVersion: 3
};
And use it this way
// File: config/routes.js
var constants = require('../constants');

module.exports = function(app, passport, auth) {
  var apiroot = '/api/v' + constants.currentAPIVersion;
  ...
  app.post(apiroot + '/users', users.create);
  ...
Scenario 3: Use a helper function to get/set the data
Not a big fan of this one, but at least you can track the use of the 'name' (citing the OP's example) and put validations in place.
// File: helpers/nameHelper.js
var _name = 'I shall not be null';

exports.getName = function() {
  return _name;
};

exports.setName = function(name) {
  // validate the name...
  _name = name;
};
And use it
// File: controllers/users.js
var nameHelper = require('../helpers/nameHelper.js');

exports.create = function(req, res, next) {
  var user = new User();
  user.name = req.body.name || nameHelper.getName();
  ...
There could be a use case where there is no solution other than a global var, but usually you can share data in your app using one of these scenarios. If you are starting to use Node.js (as I was some time ago), try to organize the way you handle data, because it can get messy really quickly.
If we need to share multiple variables, use the format below:
// module.js
let name = 'foobar';
let city = 'xyz';
let company = 'companyName';

module.exports = {
  name,
  city,
  company
}
Usage
// main.js
const { name, city, company } = require('./module');
console.log(name); // prints 'foobar'
Save any variable you want to share as one object, then pass it to the loaded module so it can access the variable through the object reference.
// main.js
var myModule = require('./module.js');
var shares = {value: 123};

// Initialize module and pass the shareable object
myModule.init(shares);

// The value was changed from init2 on the other file
console.log(shares.value); // 789
On the other file..
// module.js
var shared = null;

function init2() {
  console.log(shared.value); // 123
  shared.value = 789;
}

module.exports = {
  init: function(obj) {
    // Save the shared object on current module
    shared = obj;
    // Call something outside
    init2();
  }
}
In a browser, a top-level variable gets attached to the global object whether or not it is declared with the var keyword. In Node, this is the basis for creating global variables: declare them without the var keyword, while variables declared with the var keyword remain local to the module.
see this article for further understanding - https://www.hacksparrow.com/global-variables-in-node-js.html
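A minimal sketch of that difference (leaking globals like this is generally discouraged):
// module.js
name = "foobar";      // no var/let/const: attached to the global object
var local = "secret"; // var: stays local to this module

// main.js
require('./module');
console.log(global.name);  // "foobar"
console.log(typeof local); // "undefined" - not visible here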
Not a new approach, but a bit optimized: create a file with global variables and share them via exports and require. In this example, the getter and setter are more dynamic and global variables can be read-only. To define more globals, just add them to the globals object.
global.js
const globals = {
  myGlobal: {
    value: 'can be anytype: String, Array, Object, ...'
  },
  aReadonlyGlobal: {
    value: 'this value is readonly',
    protected: true
  },
  dbConnection: {
    value: 'mongoClient.db("database")'
  },
  myHelperFunction: {
    value: function() { console.log('do help') }
  },
}

exports.get = function(global) {
  // return variable or false if not exists
  return globals[global] && globals[global].value ? globals[global].value : false;
};

exports.set = function(global, value) {
  // exists and is protected: return false
  if (globals[global] && globals[global].protected && globals[global].protected === true)
    return false;
  // set global and return true
  globals[global] = { value: value };
  return true;
};
examples to get and set in any-other-file.js
const globals = require('./globals');

console.log(globals.get('myGlobal'));
// output: can be anytype: String, Array, Object, ...

globals.get('myHelperFunction')();
// output: do help

let myHelperFunction = globals.get('myHelperFunction');
myHelperFunction();
// output: do help

console.log(globals.set('myGlobal', 'my new value'));
// output: true

console.log(globals.get('myGlobal'));
// output: my new value

console.log(globals.set('aReadonlyGlobal', 'this shall not work'));
// output: false

console.log(globals.get('aReadonlyGlobal'));
// output: this value is readonly

console.log(globals.get('notExistingGlobal'));
// output: false
With a different opinion, I think global variables might be the best choice if you are going to publish your code to npm, because you cannot be sure that all packages are using the same release of your code. So if you use a file that exports a singleton object, it can cause issues here.
You can choose global, require.main, or any other object that is shared across files.
Otherwise, installing your package as an optional dependency can avoid this problem.
Please tell me if there are better solutions.
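A minimal sketch of sharing through the global object (the myAppState name is made up; even if two packages load different copies of your module, there is still only one global object per process):
// state.js (hypothetical file name)
global.myAppState = global.myAppState || { counter: 0 };
module.exports = global.myAppState;

// anywhere-else.js
const state = require('./state');
state.counter++;
console.log(state.counter === global.myAppState.counter); // true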
If the target is the browser (by bundling Node code via Parcel.js or similar), you can simply set properties on the window object, and they become global variables:
window.variableToMakeGlobal = value;
Then you can access this variable from all modules (and more generally, from any Javascript context).
