JavaScript Publish / Subscribe Pattern: Showing Chain of Events? - javascript

Say I am using a pub/sub pattern with highly modularized code. When a function in one module sends out a 'publish', how do I make it clear which functions in other modules are subscribed to that trigger?
For example:
// In module 1
function foo(data) {
    publish("trigger", data);
}

// In module 2
function bar(data) {}
subscribe("trigger", bar);

// In module 3
function baz(data) {}
subscribe("trigger", baz);
After reading module 1 and seeing that a 'publish' is being sent out, how would someone know where to look in my code for the subscribed callbacks?
An obvious solution might be to comment what modules contain functions that subscribe to the trigger, but that seems an impractical solution when dealing with a large number of publishes / subscribers.
I feel like I'm not fully understanding how to use the pub/sub pattern, since to me, the pattern seems to have no transparency whatsoever regarding function chains.
EDIT
My question pertains to making my code clear and easy to understand for someone reading my source code. I understand that during runtime, I could programmatically find the list of stored subscribers by accessing the array of stored callbacks. But that does nothing to make my raw source code more easily understood.
For example, I currently use a pattern like this:
// Main Controller Module
function foo(data) {
    module2.bar(data);
    module3.baz(data);
}

// In module 2
function bar(data) {}

// In module 3
function baz(data) {}
For starters, what is the proper term for this module? I thought it was a 'mediator' pattern, but looking here, it seems a mediator pattern is more like what I thought a pub/sub was?
With this pattern I feel the flow of my code is completely transparent. The reader doesn't need to dig around to find out what functions in other modules foo() might call.
But with the pub/sub pattern, once I send out the publish from foo(), it's like the reader has to somehow find the modules where the subscribed functions are.
But of course the downside of the above pattern is heavy dependency: module 1 needs both module 2 and 3 injected before it can call bar() and baz().
So I want to adopt the loose coupling of the pub/sub pattern, but I also want to keep the function flow transparency that the above pattern gives me. Is this possible? Or is this just the inherent trade-off of a pub/sub pattern?

I thought the whole idea of publish/subscribe or mediator is to loosely couple objects. Object1 doesn't need to know who does what; it is only concerned with doing its own thing and notifying whoever is interested that it's done doing what it does.
I register listeners only in a controller class and not all over the code. When the controller needs to add or remove listeners, break your process up into steps that inform the controller first (create appropriate events for it).
For example:
We fetch data with XHR.
Based on the data we create processors, processors are created with factory.
Processors process data.
Data is displayed.
Process is finished.
In your controller you could have:
var Controller = {
    //fetch information and display it
    fetch : function(paramObj){
        var subscribeIds = [];
        //to have xhr listen to fetch can be done in an init function
        // no need to add and remove every time but make a note here
        // that it's registered in init and part of this process
        subscribeIds.push(Mediator.subscribe(xhr.fetch, "fetch"));
        //xhr will trigger dataFetched
        subscribeIds.push(Mediator.subscribe(Controller.initProcessor, "dataFetched"));
        //Controller will trigger displayFetched
        subscribeIds.push(Mediator.subscribe(dom.displayFetched, "displayFetched"));
        subscribeIds.push(Mediator.subscribe(Controller.displayedFetched, "displayedFetched"));
        paramObj.subscribeIds = subscribeIds;
        Mediator.trigger("fetch", paramObj);
    },
    initProcessor : function(paramObj){
        var processor = Processor.make(paramObj.data.type);
        paramObj.html = processor.process(paramObj.data);
        Mediator.trigger("displayFetched", paramObj);
    },
    displayedFetched : function(paramObj){
        //You can decide the process is done here or take other steps
        // based on paramObj
        //You can unsubscribe listeners or leave them; when you leave them
        // they should not be registered in the fetch function but rather
        // in an init function of Controller, with comments saying
        // "basic fetch procedure"
        Controller.cleanupListeners(paramObj.subscribeIds);
    },
    cleanupListeners : function(listenerIds){
        Mediator.unSubscribe(listenerIds);
    }
};
The code looks more complicated than it needs to be. Someone looking at it may wonder: why not let XHR create a Processor instance and tell it to process? The reason is that the Controller literally controls the flow of the application; if you want some other things to happen in between, you can add them. As your application grows you'll add more and more processes, and sometimes refactor functions to do less specific things so they can be better reused. Instead of possibly having to change your code in several files, you now only redefine the process(es) in the Controller.
So to answer your question as to where to find the listeners and where events are registered: in the controller.
If you have a single Mediator object you can have it dump its listeners at any time; just write a dump method that will console.log the event names and each callback's toString().
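As an illustration, such a dump method might look like the sketch below. The internal storage format (an object mapping event names to callback arrays) is an assumption; adapt it to whatever your Mediator actually uses.

```javascript
// Minimal Mediator sketch with a debugging dump() method.
var Mediator = {
    channels: {},
    subscribe: function (fn, eventName) {
        (this.channels[eventName] = this.channels[eventName] || []).push(fn);
        return { eventName: eventName, fn: fn }; // subscription id
    },
    trigger: function (eventName, data) {
        (this.channels[eventName] || []).forEach(function (fn) { fn(data); });
    },
    // Log every event name along with the signature of each subscribed
    // callback, so the wiring can be inspected at any point at runtime.
    dump: function () {
        var out = [];
        for (var eventName in this.channels) {
            this.channels[eventName].forEach(function (fn) {
                out.push(eventName + ': ' + fn.toString().split('\n')[0]);
            });
        }
        out.forEach(function (line) { console.log(line); });
        return out;
    }
};

Mediator.subscribe(function onFetch(data) {}, 'fetch');
var lines = Mediator.dump(); // logs e.g. "fetch: function onFetch(data) {}"
```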

Related

Backbone.Radio: what's the advantage of commands and requests over events

It seems like Backbone.Radio provides 2 new abstractions - commands and requests. They're pretty much identical to Backbone.Events, except that they can have only 1 subscriber. Is that it, and if so what advantage do they provide over events?
I plan on using Backbone.Events/Radio with React.js, if that helps.
I have not actually used Backbone.Radio but do make extensive use of Backbone.wreqr https://github.com/marionettejs/backbone.wreqr which provides an almost identical command service.
In my usage the difference between events and commands is:
For events to work the sender and receiver of an event must both exist and have a reference to each other and the receiver must be in a position to deal with the event properly. This can often be problematic in a fully asynchronous browser environment where different parts of your application are running at the same time.
Commands allow you to decouple the sender and receiver. One object, let's say View A, can simply send the command 'update_user_details'.
My second Object View B sets up a command handler for 'update_user_details' which will change the user details on the screen.
But what if View B does not yet exist, or is not yet rendered? In the event-listener pattern you would have to make sure View A exists, pass a reference to it into View B, and then attach an event listener in View B.
With commands it is not a problem, View A sends a command, if no-one has set a handler then nothing bad happens, the command just does nothing.
When View B turns up, totally independent of View A, it sets a handler and will respond to all future commands.
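The semantics described above can be sketched without Backbone at all. This illustrative registry (not Backbone.Radio's or Wreqr's actual implementation, though the setHandler/execute names are loosely modeled on Wreqr's) shows why a command sent before any handler exists is harmless:

```javascript
// Illustrative command registry: a command sent with no handler is
// silently dropped; once a handler is set, later commands reach it.
var commands = {
    handlers: {},
    setHandler: function (name, fn) { this.handlers[name] = fn; },
    execute: function (name, arg) {
        var fn = this.handlers[name];
        if (fn) { return fn(arg); } // no handler: do nothing, no error
    }
};

// View A sends a command before View B exists: nothing bad happens.
commands.execute('update_user_details', { name: 'Ada' });

// Later, View B turns up, totally independent of View A, and sets a handler...
var lastUpdate = null;
commands.setHandler('update_user_details', function (details) {
    lastUpdate = details;
});

// ...and from then on it responds to all future commands.
commands.execute('update_user_details', { name: 'Grace' });
```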
Just a final note about intent:
The event pattern can be thought about in this way: I, View A have just done something, anyone that is interested (the event listeners) can do what they like about it, I View A don't care what you do.
In the command pattern: I View A want someone to do something, I don't care who does it, I just want it done right.
Channels. The key difference with Backbone.Radio over plain vanilla Backbone.Events that I have seen is that it allows you to setup channels to which your code can 'tune in' e.g. from the documentation:
var userChannel = Backbone.Radio.channel('user');
This means that logical functions or apps in your code can emit and handle events only on a specific channel - even if you emit events with the same name, if they're on different channels you won't get cross-contamination. This ties in nicely with the principles behind separation of duties in your code.
The other difference, and IMHO it's subtle, more to do with elegance of coding than any real functionality difference, is that if you're telling something to respond to an event then it's really a Command, and Backbone.Radio allows you to separate these kinds of events into that type. Similar logic applies to the Requests type.
For completeness...
The docs also explain that a Channel is an object that has all three types of messages (Events, Commands and Requests) mixed in. You mix it into an object (I use Marionette so I'm mixing into an instance of Marionette.Object) using Underscore/Lo-Dash's .extend():
_.extend(objectToBeExtended, Backbone.Radio.Requests);
And the same for Commands of course. The syntax for events is different as that's baked into Backbone itself so the second parameter is just Backbone.Events.

How can I make this pub/sub code more readable?

I am investigating the pub/sub pattern because I am reading a book that highly advocates event driven architecture, for the sake of loose coupling. But I feel that the loose coupling is only achieved by sacrificing readability/transparency.
I'm having trouble understanding how to write easily-understood pub/sub code. The way I currently write my code results in a lot of one-to-one channels, and I feel like doing so is a bad practice.
I'm using require.js AMD modules, which means that I have many smaller-sized files, so I feel like it would be very difficult for someone to follow the flow of my publishes.
In my example code below, there are three different modules:
The UI / Controller module, handling user clicks
A translator module
A data storage module
The gist is that a user submits text, it gets translated to english, then stored into a database. This flow is split into three modules in their own file.
// Main Controller Module
define(['pubSub'], function(pubSub) {
    submitButton.onclick(function() {
        var userText = textArea.val();
        pubSub.publish("userSubmittedText", userText);
    });
});

// Translator module
define(['pubSub'], function(pubSub) {
    function toEnglish(text) {
        // Do translation
        pubSub.publish("translatedText", translatedText);
    }
    pubSub.subscribe("userSubmittedText", toEnglish);
});

// Database module
define(['pubSub'], function(pubSub) {
    function store(text) {
        // Store to database
    }
    pubSub.subscribe("translatedText", store);
});
For a reader to see the complete flow, he has to switch between the three modules. But how would you make clear where the reader should look after seeing the first pubSub.publish("userSubmittedText", userText);?
I feel like publishes are like a cliff hanger, where the reader wants to know what is triggered next, but he has to go and find the modules with subscribed functions.
I could comment EVERY publish, explaining what modules contain the functions that are listening, but that seems impractical. And I don't think that is what other people are doing.
Furthermore, the above code uses one-to-one channels, which I think is bad style, but I'm not sure. Only the Translator module's toEnglish() function will ever subscribe to the pubSub channel "userSubmittedText", yet I have to create a new channel for what is basically a single function call. While this way my Controller module doesn't need Translator as a dependency, it just doesn't feel like true decoupling.
This lack of function flow transparency is concerning to me, as I have no idea how someone reading such source code would know how to follow along. Clearly I must be missing something important. Maybe I'm not using a helpful convention, or maybe my publish event names are not descriptive enough?
Is the loose coupling of pub/sub only achieved by sacrificing of flow transparency?
The idea of the publish subscribe pattern is that you don't make any assumptions about who has subscribed to a topic or who is publishing. From Wikipedia (http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern):
[...] Instead, published messages are characterized into classes,
without knowledge of what, if any, subscribers there may be.
Similarly, subscribers express interest in one or more classes, and
only receive messages that are of interest, without knowledge of what,
if any, publishers there are.
If your running code doesn't make any assumptions, your comments shouldn't either. If you want a more readable way of module communication, you can use requirejs' dependency injection instead, which you already do with your pubSub module. That would make the code easier to follow (while bringing other disadvantages). It all depends on what you want to achieve...

Global scope for every request in NodeJS Express

I have a basic express server that needs to store some global variables during each request handling.
More in depth, request handling involves many operations whose results need to be stored, for example in a variable like global.transaction[].
Of course if I use the global scope, every connection will share the same transaction information, yet I need a global scope because I need to access the transaction array from many other modules during execution.
Any suggestion on this problem? I feel like is something very trivial but I'm looking for complicated solutions :)
Many thanks!
UPDATE
This is a case scenario, to be more clear.
On every request I have 3 modules (ModuleA, ModuleB, ModuleC) which read the content of 10 random files in one directory. I want to keep track of the list of file names read by every request, and send back with res.write the list.
So ModuleA/B/C need to access a sort of global variable but the lists of request_1, request_2, request_3 etc... don't have to mix up.
Here is my suggestion: avoid global state like fire.
It's the number one maintenance problem in Node servers, in my experience.
It makes your code not composable and harder to reuse.
It creates implicit dependencies in your code - you're never sure which piece depends on which and it's not easy to verify.
You want the parts of code that each piece of an application uses to be as explicit as possible. It's a huge issue.
The issue
We want to synchronize state across multiple requests and act accordingly. This is a very big problem in writing software; some say even the biggest. The importance of the way objects in an application communicate cannot be overstated.
Some solutions
There are several ways to accomplish sharing state across requests or server-wide in a Node server. It depends on what you want to do. Here are the two most common, in my opinion.
I want to observe what the requests do.
I want one request to do things based on what another request did.
1. I want to observe what the requests do
Again, there are many ways to do this. Here are the two I see most.
Using an event emitter
This way requests emit events. The application reads events the requests fire and learns about them accordingly. The application itself could be an event emitter you can observe from the outside.
You can do something like:
request.emit("Client did something silly", theSillyThing);
And then listen to it from the outside if you choose to.
Using an observer pattern
This is like an event emitter but reversed. You keep a list of dependencies on the request and call a handler method on them yourself when something interesting happens on the request.
Personally, I usually prefer an event emitter because I think they usually solve the case better.
2. I want one request to do things based on what another request did.
This is a lot trickier than just listening. Again, there are several approaches here. What they have in common is that we put the sharing in a service.
Instead of having global state - each request gets access to a service - for example when you read a file you notify the service and when you want a list of read files - you ask the service. Everything is explicit in the dependency.
The service itself is not global; it is passed in explicitly as a dependency. For example, it can coordinate resources and data, acting as some form of Repository.
Nice theory! Now what about my use case?
Here are two options for what I would do in your case. It's far from the only solution.
First option:
Each of the modules are an event emitter, whenever they read a file they emit an event.
A service listens to all their events and keeps count.
Requests have access to that service explicitly and can query it for a list of files.
Requests perform writes through the modules themselves and not the added service.
Second option:
Create a service that owns a copy of module1, module2 and module3. (composition)
The service delegates actions to the modules based on what is required from it.
The service keeps the list of files accessed since the requests were made through it.
The request stops using the modules directly - uses the service instead.
Both these approaches have advantages and disadvantages. A more complicated solution might be required (those two are in practice pretty simple to do) where the services are abstracted further but I think this is a good start.
One simple way is storing data on the request object.
Here is an example (using Express):
app.get('/hello.txt', function(req, res){
    req.transaction = req.transaction || [];
    if (req.transaction.length) {
        // something else has already written to this array
    }
});
However, I don't really see why you would need this. When you call moduleA or moduleB, you just have to pass an object as an argument, and that solves your issue. Maybe you're looking for dependency injection?
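A sketch of that argument-passing approach, with hypothetical module functions standing in for moduleA and moduleB:

```javascript
// Hypothetical modules: each receives the per-request transaction
// explicitly instead of reaching for a global.
function moduleA(transaction) {
    transaction.push('moduleA read one.txt');
}

function moduleB(transaction) {
    transaction.push('moduleB read two.txt');
}

// In the request handler: one fresh array per request, no shared state.
function handleRequest(req, res) {
    var transaction = [];
    moduleA(transaction);
    moduleB(transaction);
    return transaction; // or res.write(JSON.stringify(transaction))
}

var result = handleRequest({}, {});
```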
Using Koa, ctx.state (doc) fits this scenario; in Express, I believe this plugin should serve your needs.
In order to keep some data that will be reused by another request on the same server app, I propose using sessions in Express and avoiding any global state or any props drilling from one request to another.
In order to manage session state in Express you could use:
session-file-store: saves the session in a file
express-mongodb-session: saves the session in MongoDB
mssql-session-store: for a relational DB
Of course there are other techniques to manage sessions in NodeJS.

Durandal (knockout) app with multilanguage support

I am building multilingual support for the app I'm working on. After doing some research and reading SO (internationalization best practice) I am trying to integrate that in a 'framework-friendly' way.
What I have done at the moment is following:
Created .resource modules formatted like so:
resources.en-US.js
define(function () {
    return {
        helloWorldLabelText: "Hello world!"
    };
});
On the app.start I get the resource module with requirejs and assign all data to app.resources. Inside of each module, the specific resource is assigned to observables and bound with the text binding to labels and other text-related things. Like so:
define(function (require) {
    var app = require('durandal/app'),
        ko = require('knockout'),
        router = require('durandal/plugins/router');
    return {
        helloWorldLabelText: ko.observable(app.resources.helloWorldLabelText),
        canDeactivate: function () {
        }
    };
});
On the view:
<label for="hello-world" data-bind="text: helloWorldLabelText"></label>
The resources are swapped just by assigning new module to app.resources.
Now the problem is that when the language is changed and some of the views have already been rendered, the values of the previous language are still there. So I ended up reassigning observables inside of the activate method. I also tried wrapping app.resources in an observable, but that didn't work either.
I don't think I ended up with the most clean way and maybe anybody else had some other way that could share. Thanks.
For those who are still confused about best practices, those who feel that something is lacking, or those who are simply curious about how to implement things in a better way with regard to Durandal, Knockout, RequireJS, and client-side web applications in general, here is an attempt at a more useful overview of what's possible.
This is certainly not complete, but hopefully this can expand some minds a little bit.
First, Nov 2014 update
I see this answer keeps being upvoted regularly even a year later. I hesitated to update it multiple times as I further developed our particular solution (integrating i18next to Durandal/AMD/Knockout). However, we eventually dropped the dependent project because of internal difficulties and "concerns" regarding the future of Durandal and other parts of our stack. Hence, this little integration work was canceled as well.
That being said, I hopefully distinguished generally applicable remarks from specific remarks below well enough, so I think they keep offering useful (perhaps even well needed) perspectives on the matters.
If you're still looking to play with Durandal, Knockout, AMD and an arbitrary localization library (there are some new players to evaluate, by the way), I've added a couple of notes from my later experiences at the end.
On the singleton pattern
One problem with the singleton pattern here is that it's hard to configure per-view; indeed there are other parameters to the translations than their locale (counts for plural forms, context, variables, gender) and these may themselves be specific to certain contexts (e.g. views/view models).
By the way it's important that you don't do this yourself and instead rely on a localization library/framework (it can get really complex). There are many questions on SO regarding these projects.
You can still use a singleton, but either way you're only halfway there.
On knockout binding handlers
One solution, explored by zewa666 in another answer, is to create a KO binding handler. One could imagine this handler taking these parameters from the view, then using any localization library as backend. More often than not, you need to change these parameters programmatically in the viewmodel (or elsewhere), which means you still need to expose a JS API.
If you're exposing such an API anyway, then you may use it to populate your view model and skip the binding handlers altogether. However, they're still a nice shortcut for those strings that can be configured from the view directly. Providing both methods is a good thing, but you probably can't do without the JS API.
Current Javascript APIs, document reloading
Most localization libraries and frameworks are pretty old-school, and many of them expect you to reload the entire page whenever the user changes the locale, sometimes even when translation parameters change, for various reasons. Don't do it, it goes against everything a client-side web application stands for. (SPA seems to be the cool term for it these days.)
The main reason is that otherwise you would need to track each DOM element that you need to retranslate every time the locale changes, and which elements to retranslate every time any of their parameters change. This is very tedious to do manually.
Fortunately, that's exactly what data binders like knockout make very easy to do. Indeed, the problem I just stated should remind you of what KO computed observables and KO data-bind attributes attempt to solve.
On the RequireJS i18n plugin
The plugin both uses the singleton pattern and expects you to reload the document. No-go for use with Durandal.
You can, but it's not efficient, and you may or may not uselessly run into problems depending on how complex your application state is.
Integration of knockout in localization libraries
Ideally, localization libraries would support knockout observables so that whenever you pass them an observable string to translate with observable parameters, the library gives you an observable translation back. Intuitively, every time the locale, the string, or the parameters change, the library modifies the observable translation, and should they be bound to a view (or anything else), the view (or whatever else) is dynamically updated without requiring you to do anything explicitly.
If your localization library is extensible enough, you may write a plugin for it, or ask the developers to implement this feature, or wait for more modern libraries to appear.
I don't know of any right now, but my knowledge of the JS ecosystem is pretty limited. Please do contribute to this answer if you can.
Real world solutions for today's software
Most current APIs are pretty straightforward; take i18next for example. Its t (translate) method takes a key for the string and an object containing the parameters. With a tiny bit of cleverness, you can get away with it without extending it, using only glue code.
translate module
define(function (require) {
    var ko = require('knockout');
    var i18next = require('i18next');
    var locale = require('locale');
    return function (key, opts) {
        return ko.computed(function () {
            locale();
            var unwrapped = {};
            if (opts) {
                for (var optName in opts) {
                    if (opts.hasOwnProperty(optName)) {
                        var opt = opts[optName];
                        unwrapped[optName] = ko.isObservable(opt) ? opt() : opt;
                    }
                }
            }
            return i18next.t(key, unwrapped);
        });
    };
});
locale module
define(function (require) { return require('knockout').observable('en'); });
The translate module is a translation function that supports observable arguments and returns an observable (as per our requirements), and essentially wraps the i18next.t call.
The locale module is an observable object containing the current locale used globally throughout the application. We define the default value (English) here, you may of course retrieve it from the browser API, local storage, cookies, the URI, or any other mechanism.
i18next-specific note: AFAIK, the i18next.t API doesn't have the ability to take a specific locale per translation: it always uses the globally configured locale. Because of this, we must change this global setting by other means (see below) and place a dummy read to the locale observable in order to force knockout to add it as a dependency to the computed observable. Without it, the strings wouldn't be retranslated if we change the locale observable.
It would be better to be able to explicitly define dependencies for knockout computed observables by other means, but I don't know that knockout currently provides such an API either; see the relevant documentation. I also tried using an explicit subscription mechanism, but that wasn't satisfactory since I don't think it's currently possible to trigger a computed to re-run explicitly without changing one of its dependencies. If you drop the computed and use only manual subscription, you end up rewriting knockout itself (try it!), so I prefer to compromise with a computed observable and a dummy read. However bizarre that looks, it might just be the most elegant solution here. Don't forget to warn about the dragons in a comment.
The function is somewhat basic in that it only scans the first-level properties of the options object to determine if they are observable and if so unwraps them (no support for nested objects or arrays). Depending on the localization library you're using, it will make sense to unwrap certain options and not others. Hence, doing it properly would require you to mimic the underlying API in your wrapper.
I'm including this as a side note only because I haven't tested it, but you may want to use the knockout mapping plugin and its toJS method to unwrap your object, which looks like it might be a one-liner.
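For illustration only, here is the same first-level unwrapping logic in isolation, modelling an observable as a plain getter function with a marker flag standing in for what ko.isObservable detects (this is not Knockout's real mechanism, just a stand-in so the loop can be seen on its own):

```javascript
// Stand-in for a KO observable: a getter function with a marker flag.
function observable(value) {
    var get = function () { return value; };
    get.isObservable = true;
    return get;
}

// Unwrap one level of an options object, like the translate module does:
// observable properties are called to get their value, others pass through.
function unwrapFirstLevel(opts) {
    var unwrapped = {};
    for (var name in opts) {
        if (opts.hasOwnProperty(name)) {
            var opt = opts[name];
            unwrapped[name] = (typeof opt === 'function' && opt.isObservable)
                ? opt()
                : opt;
        }
    }
    return unwrapped;
}

var flat = unwrapFirstLevel({ count: observable(42), context: 'menu' });
// flat is now a plain object: { count: 42, context: 'menu' }
```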
Here is how you can initialize i18next (most other libraries have a similar setup procedure), for example from your RequireJS data-main script (usually main.js) or your shell view model if you have one:
var ko = require('knockout');
var i18next = require('i18next');
var locale = require('locale');

i18next.init({
    lng: locale(),
    getAsync: false,
    resGetPath: 'app/locale/__ns__-__lng__.json'
});

locale.subscribe(function (value) {
    i18next.setLng(value, function () {});
});
This is where we change the global locale setting of the library when our locale observable changes. Usually, you'll bind the observable to a language selector; see the relevant documentation.
i18next-specific note: If you want to load the resources asynchronously, you will run into a little bit of trouble due to the asynchronous nature of Durandal applications; indeed, I don't see an obvious way to wrap the rest of the view-model setup code in a callback to init, as it's outside of our control. Hence, translations will be requested before initialization is finished. You can fix this by manually tracking whether the library is initialized, for example by setting a variable in the init callback (argument omitted here). I tested this and it works fine. For simplicity here, though, resources are loaded synchronously.
i18next-specific note: The empty callback to setLng is an artifact from its old-school nature; the library expects you to always start retranslating strings after changing the language (most likely by scanning the DOM with jQuery) and hence the argument is required. In our case, everything is updated automatically, we don't have to do anything.
Finally, here's an example of how to use the translate function:
var _ = require('translate');
var n_foo = ko.observable(42);
var greeting = _('greeting');
var foo = _('foo', { count: n_foo });
You can expose these variables in your view models, they are simple knockout computed observables. Now, every time you change the locale or the parameters of a translation, the string will be retranslated. Since it's observable, all observers (e.g. your views) will be notified and updated.
var locale = require('locale');
locale('en_US');
n_foo(1);
...
No document reload necessary. No need to explicitly call the translate function anywhere. It just works.
Integration of localization libraries in knockout
You may attempt to make knockout plugins and extenders to add support for localization libraries (besides custom binding handlers), however I haven't explored the idea, so the value of this design is unknown to me. Again, feel free to contribute to this answer.
On Ecmascript 5 accessors
Since these accessors are carried with the objects properties everywhere they go, I suspect something like the knockout-es5 plugin or the Durandal observable plugin may be used to transparently pass observables to APIs that don't support knockout. However, you'd still need to wrap the call in a computed observable, so I'm not sure how much farther that gets us.
Yet again, this is not something I looked at a lot, contributions welcome.
On Knockout extenders
You can potentially leverage KO extenders to augment normal observables to translate them on the fly. While this sounds good in theory, I don't think it would actually serve any kind of purpose; you would still need to track every option you pass to the extender, most likely by manually subscribing to each of them and updating the target by calling the wrapped translation function.
If anything, that's merely an alternative syntax, not an alternative approach.
Conclusion
It feels like there is still a lot lacking, but with a 21-lines module I was able to add support for an arbitrary localization library to a standard Durandal application. For an initial time investment, I guess it could be worse. The most difficult part is figuring it out, and I hope I've done a decent job at accelerating that process for you.
In fact, while doing it right may sound a little complicated (well, what I believe is the right way anyway), I'm pretty confident that techniques like these make things globally simpler, at least in comparison to all the trouble you'd get from trying to rebuild state consistently after a document reload or to manually tracking all translated strings without Knockout. Also, it is definitely more efficient (UX can't be smoother): only the strings that need to be retranslated are retranslated and only when necessary.
Nov 2014 notes
After writing this post, we merged the i18next initialization code and the code from the translate module into a single AMD module. This module had an interface intended to mimic the rest of the interface of the stock i18next AMD module (though we never got past the translate function), so that the "KO-ification" of the library would be transparent to applications (except for the fact that it now recognized KO observables and took the locale observable singleton in its configuration, of course). We even managed to reuse the same "i18next" AMD module name with some require.js paths trickery.
So, if you still want to do this integration work, you may rest assured that this is possible, and eventually it seemed like the most sensible solution to us. Keeping the locale observable in a singleton module also turned out to be a good decision.
As for the translation function itself, unwrapping observables using the stock ko.toJS function was indeed far easier.
i18next.js (Knockout integration wrapper)
define(function (require) {
    'use strict';
    var ko = require('knockout');
    var i18next = require('i18next-actual');
    var locale = require('locale');
    var namespaces = require('tran-namespaces');
    var Mutex = require('komutex');

    var mutex = new Mutex();
    mutex.lock(function (unlock) {
        i18next.init({
            lng: locale(),
            getAsync: true,
            fallbackLng: 'en',
            resGetPath: 'app/locale/__lng__/__ns__.json',
            ns: {
                namespaces: namespaces,
                defaultNs: namespaces && namespaces[0],
            },
        }, unlock);
    });

    locale.subscribe(function (value) {
        mutex.lock(function (unlock) {
            i18next.setLng(value, unlock);
        });
    });

    var origFn = i18next.t;
    i18next.t = i18next.translate = function (key, opts) {
        return ko.computed(function () {
            return mutex.tryLockAuto(function () {
                locale();
                return origFn(key, opts && ko.toJS(opts));
            });
        });
    };

    return i18next;
});
require.js path trickery (OK, not that tricky)
requirejs.config({
    paths: {
        'i18next-actual': 'path/to/real/i18next.amd-x.y.z',
        'i18next': 'path/to/wrapper/above',
    }
});
The locale module is the same singleton presented above, and the tran-namespaces module is another singleton that contains the list of i18next namespaces. These singletons are extremely handy, not only because they provide a very declarative way of configuring these things, but also because they allow the i18next wrapper (this module) to be entirely self-initialized. In other words, user modules that require it never have to call init.
Now, initialization takes time (it might need to fetch some translation files), and as I already mentioned a year ago, we actually used the async interface (getAsync: true). This means that a user module that calls translate might in fact not get the translation immediately (if it asks for a translation before initialization has finished, or when switching locales). Remember, in our implementation user modules can just start calling i18next.t right away without waiting for a signal from the init callback; they never call init themselves, and thus we don't even provide a wrapper for that function in our module.
How is this possible? Well, to keep track of all this, we use a "Mutex" object that merely holds a boolean observable. Whenever that mutex is "locked", it means we're initializing or changing locales, and translations shouldn't go through. The state of that mutex is automatically tracked in the translate function by the KO computed observable that represents the (future) translation, which will thus be re-executed automatically (thanks to the magic of KO) when the mutex changes to "unlocked", whereupon the real translate function can retry and do its work.
It's probably more difficult to explain than it is to actually understand (as you can see, the code above is not overly long); feel free to ask for clarifications.
Usage is very easy though; just var i18next = require('i18next') in any module of your application, then call i18next.t away at any time. Just like the initial translate function, you may pass observables as arguments (which has the effect of retranslating that particular string automatically every time such an argument changes) and it will return an observable string. In fact, the function doesn't use this, so you may safely assign it to a convenient variable: var _ = i18next.t.
By now you might be looking up komutex on your favorite search engine. Well, unless somebody had the same idea, you won't find anything, and I don't intend to publish that code as it is (I couldn't do that without losing all my credibility ;)). The explanation above should contain all you need to know to implement the same kind of thing without this module, though it clutters the code with concerns I'm personally inclined to extract into dedicated components, as I did here. Toward the end, we weren't even 100% sure that the mutex abstraction was the right one, so even though it might look neat and simple, I advise that you put some thought into how to extract that code (or simply into whether to extract it at all).
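To make the idea concrete, here is a hypothetical sketch of such a mutex. This is not the actual komutex code; the `observable` factory is a minimal stand-in for `ko.observable` so the sketch is self-contained (a real Knockout observable additionally registers a dependency when read inside a computed, which is what makes the automatic retry work).

```javascript
// Minimal stand-in for ko.observable; a real KO observable also
// registers a dependency when read inside a computed.
function observable(initial) {
    var value = initial;
    return function (v) {
        if (arguments.length) { value = v; }
        return value;
    };
}

// Hypothetical "Mutex": it merely wraps a boolean observable.
function Mutex() {
    this.locked = observable(false);
}

// Hold the mutex while an async operation runs; fn receives an
// unlock callback to invoke when the operation completes.
Mutex.prototype.lock = function (fn) {
    var self = this;
    self.locked(true);
    fn(function unlock() {
        self.locked(false);
    });
};

// Run fn only if the mutex is free. Inside a real KO computed,
// reading this.locked() registers a dependency, so the computed
// re-evaluates (and retries fn) once the mutex unlocks.
Mutex.prototype.tryLockAuto = function (fn) {
    if (!this.locked()) {
        return fn();
    }
    return undefined; // locked: translation not available yet
};
```

In the wrapper above, lock guards init and setLng, while tryLockAuto wraps the actual translation call inside the computed.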
More generally, I'd also advise you to seek out other accounts of such integration work, as it's unclear whether these ideas will age well (a year later, I still believe this "reactive" approach to localization/translation is absolutely the right one, but that's just me). Maybe you'll even find more modern libraries that do what you need out of the box.
In any case, it's highly unlikely that I'll revisit this post again. Again, I hope this little(!) update is as useful as the initial post seems to be.
Have fun!
I was quite inspired by the answers on SO regarding this topic, so I came up with my own implementation of an i18n module + binding for Knockout/Durandal.
Take a look at my github repo
The reason for yet another i18n module is that I prefer storing translations in databases (whichever type is required per project) instead of files. With that implementation you simply have a backend which replies with a JSON object containing all your translations in key-value form.
@RainerAtSpirit
The tip about the singleton was very helpful for the module.
You might consider having one i18n module that returns a singleton with all required observables. In addition a init function that takes an i18n object to initialize/update them.
define(function (require) {
    var app = require('durandal/app'),
        i18n = require('i18n'),
        router = require('durandal/plugins/router');

    return {
        i18n: i18n, // expose the singleton so the view can bind to it
        canDeactivate: function () {
        }
    };
});
On the view:
<label for="hello-world" data-bind="text: i18n.helloWorldLabelText"></label>
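The i18n singleton itself might be shaped roughly like this. This is a hypothetical sketch (shown without the AMD wrapper, and with a minimal `observable` factory standing in for `ko.observable` so it is self-contained): one observable per translatable string, plus an init function that takes a plain translations object to initialize or update them.

```javascript
// Minimal stand-in for ko.observable so the sketch is self-contained.
function observable(initial) {
    var value = initial;
    return function (v) {
        if (arguments.length) { value = v; }
        return value;
    };
}

// Hypothetical i18n singleton: one observable per string.
var i18n = {
    helloWorldLabelText: observable(''),

    // (Re)populate the observables from a plain { key: translation }
    // object; views bound to them update automatically.
    init: function (translations) {
        Object.keys(translations).forEach(function (key) {
            if (key !== 'init' && typeof i18n[key] === 'function') {
                i18n[key](translations[key]);
            }
        });
    }
};
```

Calling init again with a new translations object (e.g. after a locale switch) updates every bound label in place.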
Here is an example repo made using i18next, Knockout.Punches, and Knockout 3 with Durandal:
https://github.com/bestguy/knockout3-durandal-i18n
This allows for Handlebars/Angular-style embeds of localized text via an i18n text filter backed by i18next:
<p>
{{ 'home.label' | i18n }}
</p>
It also supports attribute embeds:
<h2 title="{{ 'home.title' | i18n }}">
{{ 'home.label' | i18n }}
</h2>
You can also pass parameters:
<h2>
{{ 'home.welcome' | i18n:name }}
<!-- Passing the 'name' observable, will be embedded in text string -->
</h2>
JSON example:
English (en):
{
    "home": {
        "label": "Home Page",
        "title": "Type your name…",
        "welcome": "Hello {{0}}!"
    }
}
Chinese (zh):
{
    "home": {
        "label": "家",
        "title": "输入你的名字……",
        "welcome": "{{0}}您好!"
    }
}

Is the use of the mediator pattern recommended?

I am currently reading http://addyosmani.com/resources/essentialjsdesignpatterns/book/#mediatorpatternjavascript
I understand the mediator pattern as some sort of object which sets up publish and subscribe functionality.
Usually I set up objects which already provide subscribe() and publish() methods. Concrete objects extend this base object so that subscribe() and publish() are always available as prototype attributes.
As I understand it, the mediator pattern is used to add the publish/subscribe methods to an object.
What is the benefit of this practice? Isn't it better practice to provide a base object with publish and subscribe functions than to let a mediator set them up at construction?
Or have I understood the mediator pattern wrong?
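For reference, the base-object approach described in the question could be sketched like this (hypothetical EventBase and Widget names; subscribe/publish live on the prototype and are inherited by every concrete object):

```javascript
// Hypothetical base object: publish/subscribe on the prototype.
function EventBase() {
    this.channels = {}; // channel name -> array of callbacks
}
EventBase.prototype.subscribe = function (channel, fn) {
    (this.channels[channel] = this.channels[channel] || []).push(fn);
};
EventBase.prototype.publish = function (channel, data) {
    (this.channels[channel] || []).forEach(function (fn) {
        fn(data);
    });
};

// A concrete object extends the base:
function Widget() {
    EventBase.call(this);
}
Widget.prototype = Object.create(EventBase.prototype);
Widget.prototype.constructor = Widget;
```

Note that with this approach each object has its own private channels, whereas a mediator centralizes them; that difference is the crux of the answers below.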
From what I have learned from similar posts some time ago:
The mediator pattern provides a standard API for modules to use.
Let's have an example:
Your app's thousands of modules heavily rely on jQuery's $.post. If suddenly your company had licensing issues and decided to move over to, for example, MooTools or YUI, would you look for all the code that uses $.post and replace it with something like MooTools.post?
The mediator pattern solves this crisis by normalizing the API. What the modules know is that your mediator has a post function that can do AJAX post regardless of what library was used.
//module only sees MyMediator.post and only knows that it does an AJAX post
//How it's implemented and what library is used is not the module's concern
jQuery.post -> MyMediator.post -> module
MooTools.post -> MyMediator.post -> module
YUI.post -> MyMediator.post -> module
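A minimal sketch of such a normalizing mediator might look like this (MyMediator and setTransport are hypothetical names; the point is that modules only ever see MyMediator.post, while the concrete library is wired in once at startup):

```javascript
// Hypothetical mediator: modules depend only on MyMediator.post.
// Swapping jQuery for MooTools or YUI means changing one line of
// startup configuration, not thousands of call sites.
var MyMediator = (function () {
    var transport; // e.g. jQuery.post, or a MooTools/YUI equivalent

    return {
        setTransport: function (fn) {
            transport = fn;
        },
        post: function (url, data, callback) {
            return transport(url, data, callback);
        }
    };
}());

// At startup (assumption: jQuery is the current library):
// MyMediator.setTransport(jQuery.post);
```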
The mediator serves as the "middle-man" for intermodule communication.
One problem in newbie JS development is when modules are interdependent. That is when:
MyClassA.something = MyClassB.method();
MyClassB.something = MyClassA.method();
But what if something is wrong in MyClassB and the developer takes it out of the build? Would you look for and strip out all the code in MyClassA that uses MyClassB so that it does not break due to MyClassB's absence?
The mediator pattern's publish and subscribe pattern solves this by making the module subscribe to an event instead of directly interfacing with the other modules. The mediator acts as a collection of callbacks/subscriptions that are fired when events are published.
This "anonymous" subscribing results in partial loose-coupling. Modules would still need to know which modules to listen to or at least a set of events to listen to, but they are connected in a way that won't result in breakage if any one of them is taken out. All they know is that they subscribed to the event and will execute when that event happens - regardless of who fires it, if it fires at all, or if the trigger exists.
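A stripped-down pub/sub mediator along those lines might look like this (a sketch; names are illustrative). The key property is that publishing to a channel with no subscribers is harmless, which is exactly why removing a module never breaks the others:

```javascript
// Minimal pub/sub mediator: a central map of channel -> callbacks.
var mediator = {
    channels: {},
    subscribe: function (channel, fn) {
        (this.channels[channel] = this.channels[channel] || []).push(fn);
    },
    publish: function (channel, data) {
        // No subscribers? Nothing happens; nothing breaks.
        (this.channels[channel] || []).forEach(function (fn) {
            fn(data);
        });
    }
};

// Hypothetical usage: a menu module reacts to a login event without
// knowing which module publishes it.
mediator.subscribe('user:login', function (user) {
    console.log('menu module greets ' + user);
});
mediator.publish('user:login', 'alice');
```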
You can achieve mediation without using eventing (pub/sub).
In complex/sophisticated flows, it can be challenging to debug or reason about code that is purely event-driven.
For an example on how you can create a mediator without pub/sub, you can take a look at my project jQueryMediator:
https://github.com/jasonmcaffee/jQueryMediator
