One of the theoretical benefits of working with Node.js is the possibility of sharing the same scripts between the client and the server. That would make it possible to fall back to the server for the same functionality if the client does not support JavaScript.
However, the Node.js require() method works in its own way. In the script you load, you can add things to this or exports, and they will later be available on the object that required the script:
var stuff = require('stuff');
stuff.show();
In stuff.js:
this.show = function() {
    return 'here is my stuff';
}
So, when re-using this script on the client, the .show() method will be added to the window scope. That is not what we want here; instead, we would like to add it to a custom namespace.
My only solution so far is something like (in stuff.js):
var ns = typeof exports == 'undefined' ? (function() {
    return window['stuff'] = {};
})() : exports;

ns.show = function() {
    return 'here is my stuff';
}

delete ns; // remove ns from the global scope
This works quite well since I can call stuff.show() on both the server and the client. But it looks quirky. I tried searching for solutions, but Node.js is still very new (even to me), so there are few reliable resources. Does anyone have a better idea of how to solve this?
In short: if you want to re-use scripts, don't use Node.js-specific features. You have to go with the lowest common denominator here, the browser.
Solutions are:
Go overkill and use RequireJS; this will make it work in both Node.js and the browser. But you need to use the RequireJS format on the server side, and you also need to plug in an on-the-fly converter for your scripts...
Do your own loader
Wrap your re-use scripts both on the server and client side with an anonymous function
Now create some code that calls call(module) on that function: on the Node side you pass in this for the module, on the client side you pass in a namespace object (see the sketch after this list)
Keep it simple and stupid, as it is now, and don't use this in the module scope on the Node.js side of things
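A minimal sketch of options 3 and 4 (the file names and the jQuery-based browser loader are my own illustration, not part of the original answer); the shared file is nothing but a parenthesized anonymous function that only touches this:

// stuff.js -- shared file; only uses `this`, nothing Node- or browser-specific
(function() {
    this.show = function() {
        return 'here is my stuff';
    };
})

// node-loader.js (hypothetical) -- apply the wrapper to this module's exports
var fs = require('fs');
var wrapper = eval(fs.readFileSync('./stuff.js', 'utf8'));
wrapper.call(exports); // `this` inside stuff.js is now exports

// browser-loader.js (hypothetical) -- fetch the raw text and apply it to a namespace
window.stuff = window.stuff || {};
jQuery.get('stuff.js', function(src) {
    eval(src).call(window.stuff); // `this` inside stuff.js is now the namespace object
    console.log(window.stuff.show());
}, 'text');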
I wish I could give you a simple out-of-the-box solution, but the two environments differ too much in this case. If you really have huge amounts of code, you might consider a build script which generates the files.
Related
I have a C++ project that I compile to Javascript using emscripten. This works, however, for resource limits and interactivity reasons I would like to run this inside a webworker.
However, my project uses stdin. I found a way to provide my own implementation of stdin by overwriting Module['stdin'] with a function that returns the total stdin one character at a time, and closes with 0 as EOF.
This works when the script runs inside the page, as the Module object present in the html file is shared with the script.
When running as a web worker, though, this Module object is not shared. Instead, message passing makes sure the regular functionality of Module still works. This does not include 'stdin'.
I worked around this by modifying the output javascript:
A: Adding an implementation of a Module object that includes this stdin specification. This function is modified to read a variable of the webworker as if it were the stdin and feed this on a per-character basis.
B: Changing the onmessage of the webworker to call an additional function handling my own events.
C: This additional function listens to the events and reacts when the event is the content of stdin, by setting the variable that the stdin function I specified reads.
D: Adding and removing run dependencies on this additional event to prevent the C++ code from running without stdin being specified.
In code:
Module['stdin_pointer'] = 0;
Module['stdin_content'] = "";
Module['stdin'] = (function() {
    if (Module['stdin_pointer'] < Module['stdin_content'].length) {
        var code = Module['stdin_content'].charCodeAt(Module['stdin_pointer']);
        Module['stdin_pointer'] = Module['stdin_pointer'] + 1;
        return code;
    } else {
        return null;
    }
});
external = function(message) {
    switch (message.data.target) {
        case 'stdin': {
            Module['stdin_content'] = message.data.content;
            removeRunDependency('stdin');
            break;
        }
        default: throw 'wha? ' + message.data.target;
    }
};
[...]
addRunDependency("stdin");
[...]
//Change this in the original onmessage function:
// default: throw 'wha? ' + message.data.target;
//to
default: {external(message);}
Clearly, the A and C parts are quite easy because they can be added at the start (or near the start) of the js file, but B and D (adding your own run dependencies and getting your own message handler into the loop) require you to edit the code inline.
As my project is very large, finding the necessary lines to edit can be very cumbersome, even more so in optimized and minified Emscripten code.
Automatic scripts to do this, as well as the workaround itself, are likely to break on new emscripten releases.
Is there a nicer, more proper way to reach the same behavior?
Thank you!
//EDIT:
The --separate-asm flag is quite helpful, in that the file I must edit is now only a few lines long (in minified form). It greatly reduces the burden, but it is still not a proper way, so I'm reluctant to mark this as resolved.
The only way I know of achieving what you want is to not use the Emscripten-supplied worker API, and roll your own. All the details are probably beyond the scope of a single question, but at a high level you'll need to...
Compile the worker module with your processing code, but not using the BUILD_AS_WORKER flag
At both the UI and worker ends, you'll need to write some JavaScript code to communicate in and out of the C++ world, using one of the techniques at http://kripken.github.io/emscripten-site/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html, and have it call the JavaScript Web Worker API directly: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers
At the worker side of this, you will be able to control the Module object, setting stdin as you see fit (a rough sketch follows below)
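As a rough sketch of what the worker side could look like (the file names and message shapes are my assumptions; noInitialRun and callMain are standard Emscripten Module options, but a real project would also wait for Module.onRuntimeInitialized before starting the program):

// worker.js (illustrative) -- hand-rolled worker; compiled.js was built WITHOUT BUILD_AS_WORKER
var stdinContent = '';
var stdinPointer = 0;

// Emscripten picks up a pre-existing Module object, so define it before importScripts()
var Module = {
    noInitialRun: true,                    // do not run main() until stdin has arrived
    stdin: function() {
        if (stdinPointer < stdinContent.length) {
            return stdinContent.charCodeAt(stdinPointer++);
        }
        return null;                       // EOF
    },
    print: function(text) {
        postMessage({ target: 'stdout', content: text }); // forward stdout to the page
    }
};

onmessage = function(e) {
    if (e.data.target === 'stdin') {
        stdinContent = e.data.content;
        Module.callMain([]);               // start the compiled program now that stdin exists
    }
};

importScripts('compiled.js');              // the Emscripten output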
As a side note, I have found that the Emscripten-supplied C++ wrappers for JavaScript functionality, such as workers, graphics, audio, HTTP requests, etc., are good for getting going at first, but have limitations and don't expose everything that is technically possible. I have often had to roll my own to get the functionality I needed. Although not for the same reasons, I have also had to write my own API for workers.
I'm writing client-side code and would like to write multiple, modular JS files that can interact while preventing global namespace pollution.
index.html
<script src="util.js"></script>
<script src="index.js"></script>
util.js
(function() {
    function helper() {
        // Performs some useful utility operation
    }
})();
index.js
(function () {
    console.log("Loaded index.js script");
    helper();
    console.log("Done with execution.");
})();
This code nicely keeps utility functions in a separate file and does not pollute the global namespace. However, the call to helper will fail, because 'helper' only exists inside the separate anonymous function's scope in util.js.
One alternative approach involves placing all JS code inside one file or using a single variable in the global namespace like so:
var util_ns = {
    helper: function() {
        // Performs some useful utility operation.
    }
    // etc.
}
Both these approaches have cons in terms of modularity and clean namespacing.
I'm used to working (server-side) in Node.js land where I can 'require' one Javascript file inside another, effectively injecting the util.js bindings into the index.js namespace.
I'd like to do something similar here (but client-side) that would allow code to be written in separate modular files, without creating any variables in the global namespace, while still allowing access to other modules (i.e. something like a utility module).
Is this doable in a simple way (without libraries, etc)?
If not, in the realm of making client-side JS behave more like Node and npm, I'm aware of the existence of RequireJS, Browserify, and the AMD and CommonJS standardization attempts. However, I'm not sure of the pros and cons and actual usage of each.
I would strongly recommend you go ahead with RequireJS.
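For completeness, a bare-bones RequireJS sketch (file names are assumptions; RequireJS resolves 'util' relative to its baseUrl):

// util.js - an AMD module
define(function() {
    return {
        hypotenuse: function(a, b) {
            return Math.sqrt(a * a + b * b);
        }
    };
});

// index.js - declares its dependency on util.js
require(['util'], function(util) {
    console.log(util.hypotenuse(3, 4));
});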
The module pattern approach (without requires/dependencies):
// moduleA.js
var MyApplication = (function(app) {
    app.util = app.util || {};
    app.util.hypotenuse = function(a, b) {
        return Math.sqrt(a * a + b * b);
    };
    return app;
})(MyApplication || {});

// ----------
// moduleB.js
var MyApplication = (function(app) {
    app.util = app.util || {};
    app.util.area = function(a, b) {
        return a * b / 2;
    };
    return app;
})(MyApplication || {});

// ----------
// index.js - here you have to include both moduleA and moduleB manually,
// or write some loader
var a = 3,
    b = 4;
console.log('Hypotenuse: ', MyApplication.util.hypotenuse(a, b));
console.log('Area: ', MyApplication.util.area(a, b));
Here you're creating only one global variable (namespace), MyApplication; all other stuff is "nested" inside it.
Fiddle - http://jsfiddle.net/f0t0n/hmbb7/
One more approach that I used earlier in my projects: https://gist.github.com/4133310. But anyway, I threw all that stuff out when I started to use RequireJS.
You should check out browserify, which will process a modular JavaScript project into a single file. You can use require in it as you do in node.
It even provides browser versions of a bunch of the Node.js libs, like url, http and crypto.
ADDITION: In my opinion, the pro of browserify is that it is simple to use and requires no code of your own - you can even use your already-written node.js code with it. There's no boilerplate code or code change necessary at all, and it's as CommonJS-compliant as node.js is. It outputs a single .js file that allows you to use require in your website code, too.
There are two cons to this, IMHO: first, two files that were compiled by browserify can override each other's require functions if they are included in the same page, so you have to be careful there. Another is, of course, that you have to run browserify every time you make a change to the code. And of course, the module system code is always part of your compiled file.
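As a rough illustration of that workflow (file names assumed): you write plain CommonJS modules and let browserify produce the single file the page loads.

// util.js - a normal CommonJS module
module.exports.helper = function() {
    // Performs some useful utility operation
};

// index.js - the entry point; require() works just like in Node
var util = require('./util');
util.helper();

// build step (shell): browserify index.js -o dist/bundle.js
// then the HTML references only dist/bundle.js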
I strongly suggest you try a build tool.
Build tools will allow you to have different files (even in different folders) when developing, and concatenate them at the end for debugging, testing or production. Even better, you won't need to add a library to your project: the build tool resides in different files and is not included in your release version.
I use GruntJS, and basically it works like this. Suppose you have your util.js and index.js (which needs the helper object to be defined), both inside a js directory. You can develop both separately, and then concatenate both to an app.js file in the dist directory that will be loaded by your html. In Grunt you can specify something like:
concat: {
    app: {
        src: ['js/util.js', 'js/index.js'],
        dest: 'dist/app.js'
    }
}
This will automatically create the concatenation of the files. Additionally, you can minify them, lint them, and run any other process you want on them. You can also have them in completely different directories and still end up with one file packaged with your code in the right order. You can even trigger the process every time you save a file, to save time (see the watch sketch below).
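For example, the save-triggered rebuild mentioned above can be done with the grunt-contrib-watch plugin; a minimal sketch (the target name is illustrative):

watch: {
    scripts: {
        files: ['js/**/*.js'],
        tasks: ['concat']
    }
}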
At the end, from HTML, you would only have to reference one file:
<script src="dist/app.js"></script>
Adding a file that resides in a different directory is very easy:
concat: {
    app: {
        src: ['js/util.js', 'js/index.js', 'js/helpers/date/whatever.js'],
        dest: 'dist/app.js'
    }
}
And your html will still only reference one file.
Some other available tools that do the same are Brunch and Yeoman.
-------- EDIT -----------
RequireJS (and some alternatives, such as Head JS) is a very popular AMD (Asynchronous Module Definition) loader which allows you to simply specify dependencies. A build tool (e.g., Grunt), on the other hand, allows managing files and adding more functionality without relying on an external library. On some occasions you can even use both.
I think having the file dependencies / directory issues / build process separated from your code is the way to go. With build tools you have a clear view of your code and a completely separate place where you specify what to do with the files. It also provides a very scalable architecture, because it can work through structure changes or future needs (such as including LESS or CoffeeScript files).
One last point, having a single file in production also means less HTTP overhead. Remember that minimizing the number of calls to the server is important. Having multiple files is very inefficient.
Finally, this is a great article on AMD tools vs build tools, worth a read.
So called "global namespace pollution" is greatly over rated as an issue. I don't know about node.js, but in a typical DOM, there are hundreds of global variables by default. Name duplication is rarely an issue where names are chosen judiciously. Adding a few using script will not make the slightest difference. Using a pattern like:
var mySpecialIdentifier = mySpecialIdentifier || {};
means adding a single variable that can be the root of all your code. You can then add modules to your heart's content, e.g.
mySpecialIdentifier.dom = {
    /* add dom methods */
}

(function(global, undefined) {
    if (!global.mySpecialIdentifier) global.mySpecialIdentifier = {};
    /* add methods that require feature testing */
}(this));
And so on.
You can also use an "extend" function that does the testing and adding of base objects so you don't replicate that code and can add methods to base library objects easily from different files. Your library documentation should tell you if you are replicating names or functionality before it becomes an issue (and testing should tell you too).
Your entire library can use a single global variable and can be easily extended or trimmed as you see fit. Finally, you aren't dependent on any third party code to solve a fairly trivial issue.
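A minimal sketch of such an extend helper (the name and the duplicate check are illustrative, not part of any particular library):

mySpecialIdentifier.extend = function(target, source) {
    for (var key in source) {
        if (source.hasOwnProperty(key)) {
            if (target.hasOwnProperty(key)) {
                // complain about duplicates instead of silently overwriting
                throw new Error('Duplicate member: ' + key);
            }
            target[key] = source[key];
        }
    }
    return target;
};

// in another file:
mySpecialIdentifier.extend(mySpecialIdentifier, {
    string: { /* add string methods */ }
});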
You can do it like this:
-- main.js --
var my_ns = {};
-- util.js --
my_ns.util = {
    map: function () {}
    // .. etc
}
-- index.js --
my_ns.index = {
    // ..
}
This way you occupy only one variable.
One way of solving this is to have your components talk to each other using a "message bus". A message (or event) consists of a category and a payload. Components can subscribe to messages of a certain category and can publish messages. This is quite easy to implement, but there are also some out-of-the-box solutions out there. While this is a neat solution, it also has a great impact on the architecture of your application.
Here is an example implementation: http://pastebin.com/2KE25Par
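A minimal publish/subscribe sketch along those lines (all names are illustrative):

var MessageBus = (function() {
    var subscribers = {}; // category -> array of callbacks
    return {
        subscribe: function(category, callback) {
            (subscribers[category] = subscribers[category] || []).push(callback);
        },
        publish: function(category, payload) {
            (subscribers[category] || []).forEach(function(callback) {
                callback(payload);
            });
        }
    };
})();

// component A subscribes...
MessageBus.subscribe('user:login', function(payload) {
    console.log('User logged in:', payload.name);
});

// ...and component B publishes, without knowing anything about A
MessageBus.publish('user:login', { name: 'alice' });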
http://brunch.io/ should be one of the simplest ways if you want to write node-like modular code in your browser without async AMD hell. With it, you’re also able to require() your templates etc, not just JS files.
There are a lot of skeletons (base applications) which you can use with it and it’s quite mature.
Check the example application https://github.com/paulmillr/ostio to see some structure. As you may notice, it’s written in coffeescript, but if you want to write in js, you can — brunch doesn’t care about langs.
I think what you want is https://github.com/component/component.
It's synchronous CommonJS just like Node.js,
it has much less overhead,
and it's written by visionmedia who wrote connect and express.
I've written a JS lib for my client to send commands directly to an RMI-ish server. It works nicely, but now I have to write a bit of server code to connect to the same server. That's why I would like to try Node.js. However, I have some difficulties grasping the concepts.
I would like my libraries to work in the browser (Chrome, Firefox and IE) but also be usable in Node.js (with small modifications). That way I don't have to maintain two chunks of code that are almost the same.
However, in a browser you just declare a namespace and stuff all functionality in there. In Node.js I use require to load my file and stuff the variables it exports into a variable.
For example, './m4a.structure.js' sets up a global m4a.structure, and './m4a.cmd.js' uses that structure to generate a bunch of methods (that work as proxies to send stuff to the server). Finally, there is a './jquery.m4a.initialize.js' that binds the stuff to jQuery to send the requests out, handle the cache and things like that.
The first two files contain information that is still changing rapidly (week to week), so I would like them to remain usable on both the browser and the Node.js daemon bit. The last file of course isn't that big and is rather browser-specific, so I won't mind (and even expect) that I can't port it.
I hope I have been clear and that you can help me a bit.
The simplest way to build a module which is usable as e.g. a node.js module and in the browser is this:
var Foo = function() {
};
Foo.prototype.bar = function() {
};
if (typeof exports !== 'undefined') {
    // Node.js or similar
    exports.Foo = Foo;
} else {
    // Browser -- this is window
    this.Foo = Foo;
}
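Usage then looks roughly like this (assuming the file is called foo.js):

// Node.js
var Foo = require('./foo').Foo;
var f = new Foo();
f.bar();

// Browser, after <script src="foo.js"></script>:
var f = new Foo();
f.bar();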
I have created the PHP side of a modular AJAX/PHP framework and now I am trying to implement the client side.
From my previous experience with modular web applications I know that sometimes multiple instances of one particular module are needed. For example, a web based two player game with page parts for each user.
On the PHP side I have assigned a unique ID to each constructed instance of the module, and I can pass this UID to the browser, but I have no idea how to implement the Javascript side of this module instance.
Modules can be loaded all in one go or loaded separately through AJAX (I am using jQuery).
Now I am using a modular approach that I found in some article, but I can redesign it in some other way if that would help to solve this issue without sacrificing modularity and private/public code separation. For now let's say I have a js file with the following:
//Self-Executing Anonymous Func
(function( MyModule, $, undefined ) {
    // My Uid
    MyModule.UID = "";

    //Public Method
    MyModule.onLoad = function() {
        alert("Hey, you loaded an instance of MyModule with UID " + MyModule.UID);
    };

    //Private Methods follow
    function somethingPrivate( ) {
    }
}( window.MyModule = window.MyModule || {}, jQuery ));
I am using Smarty for templates. Let's say, I have a simple module template like this:
<div id="{$contents.moduleuid}">
here goes the contents of the module which can be accessed from MyModule Javascript code by using this unique moduleuid
</div>
I have set up the server side so each module automatically appends additional template with Javascript:
<script type="text/javascript">
/*
TODO: here I have access to the {$contents.moduleuid}
But I have no idea what to put here to create a unique instance of MyModule
(also it might need loading js file if it was not loaded yet) and I should also set for
this instance MyModule.UID to {$contents.moduleuid}
and also call MyModule.onLoad for this instance after it has loaded its Javascript.
*/
</script>
I am not experienced with advanced Javascript topics, so it is unclear to me how I can create a separate instance of MyModule for each module that gets constructed server-side. Is it possible at all to create instances of self-executing anonymous functions? If not, then how can I implement and clone Javascript objects with separated private/public code?
My recommendation is to keep the client side and the server side loosely coupled. Try to build your modular client application completely with HTML/JS, without PHP tricks in it. As I understand it, each of your modules (or UI components) needs to be loosely coupled from the others. In that case, there are several other concerns you might need to look at:
How to keep your UI component structure (html), presentation (css) and behavior (JS) self contained (for example in a single folder), so that it can live or die independently
How these self contained components interact with each other
How to manage the configurations/settings of your UI components
Should you be using MVVM or MVC pattern to organize and bind the view to your PHP model
Who decides when to create/show/hide your UI components (for example based on URL for bookmarking)
If your client is a large and complex application, you might need to look for other concerns such as JS optimization, unit testing, documentation, product sub modules, etc.
Have a look at the BoilerplateJS Javascript reference architecture we put forward at http://boilerplatejs.org. It suggests ways to address all concerns I discussed above.
Since you are already using jQuery, you could create a jQuery plugin. The plugin should behave the way you need, and I believe you won't even need a unique ID. Considering each of your module's instance is contained in a div with class module-container, your jQuery code for adding client-side behavior to the divs would be something like this:
$(function(){
    // DOM content is loaded
    $('.module-container').MyPluginName();
});
The minimal plugin code would be (considering it's in a separate .js file):
(function($){
    $.fn.MyPluginName = function() {
        // Return this.each to maintain chainability
        return this.each(function() {
            // Keep a reference to your unique div instance.
            var $this = $(this);
            // Plugin logic here
        });
    };
})(jQuery);
If you are using jQuery UI, I also recommend you look into the "widget factory" (intro, docs), which serves as a base for building powerful, normalized jQuery plugins (a bare-bones sketch follows below).
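A bare-bones widget factory sketch, in case you go that route (the widget name and the uid option are illustrative, not required by the factory):

$.widget('custom.moduleWidget', {
    options: {
        uid: null // optional server-generated id, if you still want to pass one
    },
    _create: function() {
        // this.element is the module's container div
        // this.options holds the per-instance settings
    }
});

// one widget instance per container div, each with its own state:
$('.module-container').moduleWidget();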
I'm working on a project with Node.js, and the server-side code is becoming large enough that I would like to split it into multiple files. It appears this has been done client-side for ages: development is done by inserting a script tag for each file, and only for distribution is something like "make" used to put everything together. I realize there's no point in concatenating all the server-side code, so I'm not asking how to do that. The closest thing I can find is require(); however, it doesn't behave quite like script tags do in the browser, in that require'd files do not share a common namespace.
Looking at some older Node.js projects, like Shooter, it appears this was once not the case, that or I'm missing something really simple in my code. My require'd files cannot access the global calling namespace at compile time or at run time. Is there any simple way around this, or are we forced to make all our require'd JS files completely autonomous from the calling scope?
You do not want a common namespace, because globals are evil. In Node we define modules:
// someThings.js
(function() {
    var someThings = ...;

    ...

    module.exports.getSomeThings = function() {
        return someThings();
    };
}());

// main.js
var things = require("someThings");
...
doSomething(things.getSomeThings());
You define a module and then expose a public API for your module by writing to exports.
The best way to handle this is dependency injection. Your module exposes an init function, and you pass an object hash of dependencies into your module (a small sketch follows below).
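A small sketch of that pattern (module and dependency names are made up):

// logger.js
module.exports.log = function(msg) {
    console.log(msg);
};

// users.js -- exposes an init() and receives its dependencies instead of using globals
var logger; // injected

module.exports.init = function(deps) {
    logger = deps.logger;
};

module.exports.createUser = function(name) {
    logger.log('created ' + name);
};

// main.js -- wires everything together
var users = require('./users');
users.init({ logger: require('./logger') });
users.createUser('alice');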
If you really insist on accessing global scope, then you can do so through global. Every file can write to and read from the global object. Again, you do not want to use globals.
Regarding @Raynos' answer: if the module file is next to the file that includes it, it should be
var things = require("./someThings");
If the module is published on, and installed through, npm, or explicitly put into the ./node_modules/ folder, then the
var things = require("someThings");
is correct.