I am currently reading http://addyosmani.com/resources/essentialjsdesignpatterns/book/#mediatorpatternjavascript
I understand the mediator pattern as some sort of object which sets up publish and subscribe functionality.
Usually I set up objects which already provide subscribe() and publish() methods. Concrete objects extend this base object so that subscribe() and publish() are always available as prototype attributes.
As I understand it, the mediator pattern is used to add the publish/subscribe methods to an object.
What is the benefit of this practice? Isn't it better practice to provide a base object with publish and subscribe functions than to let a mediator set them up at construction?
Or have I misunderstood the mediator pattern?
From what I have learned in similar posts some time ago:
The mediator pattern provides a standard API for modules to use.
Let's have an example:
Your app's thousands of modules heavily rely on jQuery's $.post. If, suddenly, your company had licensing issues and decided to move over to, for example, MooTools or YUI, would you look for all the code that uses $.post and replace it with something like MooTools.post?
The mediator pattern solves this crisis by normalizing the API. All the modules know is that your mediator has a post function that can do an AJAX post, regardless of which library is used.
//module only sees MyMediator.post and only knows that it does an AJAX post
//How it's implemented and what library is used is not the module's concern
jQuery.post -> MyMediator.post -> module
MooTools.post -> MyMediator.post -> module
YUI.post -> MyMediator.post -> module
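A minimal sketch of such a normalizing mediator (the names MyMediator and setTransport are illustrative, and the stub transport stands in for whichever library you wrap):

```javascript
// The mediator exposes one stable API; the backing library is an internal detail.
var MyMediator = (function () {
  // Internal: the current transport. In a real app this would wrap
  // jQuery.post, MooTools' Request, YUI, or fetch; here it is a stub.
  var transport = function (url, data, callback) {
    callback({ url: url, data: data, status: "stubbed" });
  };

  return {
    // Modules only ever see this function.
    post: function (url, data, callback) {
      transport(url, data, callback);
    },
    // Swapping libraries means changing the transport in ONE place.
    setTransport: function (fn) {
      transport = fn;
    }
  };
})();
```

A module then calls `MyMediator.post("/api/items", { id: 1 }, cb)` and never touches the underlying library directly.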
The mediator serves as the "middle-man" for intermodule communication.
One problem in newbie JS development is interdependent modules. That is, when:
MyClassA.something = MyClassB.method();
MyClassB.something = MyClassA.method();
But what if something is wrong in MyClassB and the developer takes it out of the build? Would you look for and strip out all the code in MyClassA that uses MyClassB so that it does not break due to the absence of MyClassB?
The mediator pattern's publish/subscribe mechanism solves this by making a module subscribe to an event instead of directly interfacing with the other modules. The mediator acts as a collection of callbacks/subscriptions that are fired when events are published.
This "anonymous" subscribing results in partial loose-coupling. Modules would still need to know which modules to listen to or at least a set of events to listen to, but they are connected in a way that won't result in breakage if any one of them is taken out. All they know is that they subscribed to the event and will execute when that event happens - regardless of who fires it, if it fires at all, or if the trigger exists.
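A bare-bones version of such a mediator can be sketched in a few lines (the channel name below is made up for illustration):

```javascript
// A minimal mediator: a registry of callbacks keyed by event name.
var mediator = {
  channels: {},
  subscribe: function (channel, fn) {
    (this.channels[channel] = this.channels[channel] || []).push(fn);
  },
  publish: function (channel, data) {
    (this.channels[channel] || []).forEach(function (fn) { fn(data); });
  }
};

// Module A listens; it does not know (or care) who publishes.
mediator.subscribe("userLoggedIn", function (user) {
  console.log("A greets " + user.name);
});

// Module B publishes; if B were removed, A would simply never fire - no error.
mediator.publish("userLoggedIn", { name: "Ada" });
```

Publishing on a channel nobody subscribed to is a no-op, which is exactly the "won't break if a module is taken out" property described above.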
You can achieve mediation without using eventing (pub/sub).
In complex/sophisticated flows, it can be challenging to debug or reason about code that is purely event driven.
For an example of how you can create a mediator without pub/sub, take a look at my project jQueryMediator:
https://github.com/jasonmcaffee/jQueryMediator
I am working on an AngularJS application that has some self-registering components. Concretely, there are some services that do not directly offer any public interface themselves; they merely register some internal objects in some directory provided by another service.
You can imagine the body of such a service's factory as follows:
var internalThing = {
// members ...
};
thingRegistryService.registerThing(internalThing);
return {};
Thus, I only need to ensure that the service gets loaded at some point.
The dilemma I'm facing is as follows: As the service provides no public functions and just needs to "be there", there is no reason to inject it anywhere. As it does not get injected, it never gets instantiated. As it never gets instantiated, the components within the service never register themselves, though.
I can inject the service the usual way in some service or controller that I know will get loaded - but then I am basically leaving an unused argument in the code (which, if it is the last argument in the list, will even get flagged as an error by the project's JSHint settings).
Alternatively, I can do the self-registration in a method in the service and call that wherever I inject the service. This would make the service injection "useful", but in turn I'd have to deal with multiple calls myself instead of relying on the built-in singleton mechanism of AngularJS's services.
Or ... should I go yet another route, providing a method like registerThing somewhere that takes the service name as a string and internally just invokes $injector.get? Of course, that raises the question again of where the correct place for that kind of call would be.
A little background: This is part of a large project developed in a large team. Build and/or deployment magic somehow handles that any JavaScript code file committed to our VCS by any developer will be available to AngularJS's dependency injection. Thus, any JavaScript that needs to be added has to be provided as some kind of an AngularJS service, controller, etc.
What is the proper way to load such a self-registering service?
The right place to initialize your module is the angular.module('yourModule').run block.
In the case of "self-registering", I think it is better to have an explicit method for this:
angular.module('yourModule').run(['service', function (service) {
    service.init();
}]);
If your build system magically provides all JS code, is an import needed?
A self-registering architecture like the one your build system suggests should not require imports. The JSHint errors about unused imports are the cost of this magic.
I've used similar behavior with a namespace-like design. It can be used for self-registering techniques, although imports get tricky and it does not work well with ES6 modules.
This is close to my point: http://blog.assaf.co/automating-component-registration-in-angularjs/
Let's imagine, in an OO world, I want to build a Torrent object which listens to the network and lets me interact with it. It would inherit from an EventEmitter and look something like this:
var torrent = new Torrent(opts)
torrent.on('ready', cb) // add torrent to the UI
torrent.on('metadata', cb) // update data in the UI
and I can also make it do things:
torrent.stop()
torrent.resume()
Then of course, if I want to delete the torrent from memory, I can call torrent.destroy().
The cool thing about this OO approach is that I can easily package this functionality in its own npm module, test the hell out of it, and give users a nice clean reusable API.
My question is, how do I achieve this with Cycle.js apps?
If I create a driver it's unclear how I would go about creating many torrents and having their own independent listeners. Also consider I'd like to package functionality in a way that others get to easily reuse it in other Cycle.js apps.
It seems to me that you are trying to solve a problem thinking about it as you would write "imperative code".
I think creating Torrent instances with their own listeners is not something you should be using in cycle components.
I would go about it differently - creating Torrent module and figuring out what would be its sources and sinks. If this module should be reusable and published, you can create it as a function that would receive streams as arguments. Maybe something similar to TodoMVC Task component (which is then used in its parent component).
Since this module can be created as a pure function, testing it should be at least just as easy.
This implementation of course depends on your requirements, but communication with the module would then be done only with streams, and since it would be declarative there would be no need for methods like stop() and destroy() which you would call from elsewhere.
How do I test it?
In Cycle.js you'd write a component with intent, model, and view functions.
You'd test that intent(), for given input streams, produces the streams of actions that you want. For the model, you'd test that given HTTP and action streams, you get the state you want, and for the view, you'd test that given a state you get the VDOM you want.
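As a rough illustration (the state shape and action names here are hypothetical, and in a real Cycle.js app this function would be mapped over an action stream), the model can be a plain pure function that you test with ordinary values, no stream machinery required:

```javascript
// Hypothetical pure "model" for a torrent component: given the previous
// state and an action, return the next state without mutating the input.
function model(state, action) {
  switch (action.type) {
    case "METADATA":
      return Object.assign({}, state, { meta: action.meta });
    case "STOP":
      return Object.assign({}, state, { running: false });
    case "RESUME":
      return Object.assign({}, state, { running: true });
    default:
      return state;
  }
}
```

A test then just calls it: `model({ running: true }, { type: "STOP" })` should return a state with `running: false`, leaving the original state untouched.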
One tricky bit with Cycle.js is that since it passes functions around, normal JavaScript objects that use the 'this' keyword are not worth the trouble due to 'this' context problems. If you are working with Cycle.js and you think you might write a JS class for use with Isolate, Onionify, or Collections, most likely you are going in the wrong direction. See the MDN docs on 'this'.
how I would go about creating many torrents
The Cycle.js people have several ways to deal with groups of things like this.
This ticket describes some things that might work for that:
Wrap subapp in Web Component
Stanga and similars.
Cycle Collections
Cycle Onionify
So I have done a lot of research, and for some reason I can't find an implementation of the Event Aggregator pattern in JavaScript. In fact, the only language that's ever used is C#, and there are always generics involved. It's a very useful pattern, so I fail to see why it only seems to be 'meant' for .NET. I was hoping that someone could provide an implementation in JavaScript, or at the very least Java, and NOT C# (I've seen enough of that). Thank you!
How to:
Get one of the many general-purpose publish/subscribe libraries that are implemented and ready to use, e.g. https://github.com/mroderick/PubSubJS (or roll your own - it ain't that hard).
Instantiate your event source objects, implement publishing of events.
Instantiate your aggregator, make it subscribe to your source objects, and offer publishing of received events.
Instantiate your target objects, make them subscribe to your aggregator.
In Javascript the Event Aggregator pattern does not need its own implementation. It is just an object that subscribes to multiple publishers and also publishes to multiple subscribers.
Since there is no type checking, interfaces, or anything of that sort, you don't need the pattern implemented before you use it; it is just a trivial exercise in pub/sub, which is probably why you can't find it anywhere as an "abstract" implementation.
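As a sketch of that idea (all names here are illustrative; in practice you might use a library such as PubSubJS for the emitters):

```javascript
// A tiny emitter so the example is self-contained.
function Emitter() {
  this.handlers = {};
}
Emitter.prototype.on = function (evt, fn) {
  (this.handlers[evt] = this.handlers[evt] || []).push(fn);
};
Emitter.prototype.emit = function (evt, data) {
  (this.handlers[evt] || []).forEach(function (fn) { fn(data); });
};

// The "aggregator" is just another emitter that re-publishes events from
// many sources, so consumers subscribe in one place.
function aggregate(aggregator, source, events) {
  events.forEach(function (evt) {
    source.on(evt, function (data) { aggregator.emit(evt, data); });
  });
}

var orders = new Emitter(), payments = new Emitter(), hub = new Emitter();
aggregate(hub, orders, ["created"]);
aggregate(hub, payments, ["received"]);
hub.on("created", function (o) { /* one subscription covers all sources */ });
```

The whole "pattern" is the aggregate step: subscribers hold a reference only to the hub, never to the individual sources.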
Look into redux if you want to see something reusable that solves problems in the same domain as the event aggregator pattern, but offers a lot more.
I'm studying how the Node.js module system works.
I've found so far this literature:
https://nodejs.org/api/modules.html
http://fredkschott.com/post/2014/06/require-and-the-module-system/
http://www.bennadel.com/blog/2169-where-does-node-js-and-require-look-for-modules.htm
They helped me understand a few points; however, I still have these questions:
If I have an expensive resource (let's say a database connection pool) inside a module, how can I make sure that requiring the module again does not re-evaluate the resource?
How can I dynamically unload a disposable resource once I have already called require() on it?
This is important because some scenarios demand that I have a single instance of the database pool. Because of this, I'm exporting modules which are able to receive parameters instead of just requiring the expensive resource.
Any guidance is much appreciated.
Alon Salant has written an EXCELLENT guide to understanding exports in NodeJS (which is what you're accessing when you call require()) here:
http://bites.goodeggs.com/posts/export-this/#singleton
If you study the list of options for what a module can export, you'll see that the answer to your question depends on how the module is written. When you call require(), NodeJS will look for the module in its cache and return it if it was already loaded elsewhere.
That means if you choose to export a Singleton pattern, are monkey-patching, or creating global objects (I recommend only the first one in your case), only ONE shared object will be created / will exist. Singleton patterns are good for things like database connections that you want to be shared by many modules. Although some argue that "injecting" these dependencies via a parent/caller is better, this is a philosophical view not shared by all, and singletons are widely used by software developers for shared-service tasks like this.
If you export a function, typically a constructor, require() will STILL return just one shared reference to that. However, in this case, the reference is to a function, not something the function returns. require() doesn't actually CALL the function for you, it just gives you a reference to it. To do any real work here, you must now call the function, and that means each module that requires this thing will have its own instance of whatever class the module provides. This pattern is the more traditional one where a class instance is desired / is the goal. Most NPM modules fit this category, although that doesn't mean a singleton is a bad idea in your case.
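The singleton style can be sketched like this (the pool object is a stand-in; a real module would open actual connections). Because Node caches a module by its resolved filename, the top-level code below runs only once per process, and every require() of the file shares the same exports, and therefore the same pool:

```javascript
// db.js (sketch) - a module-level singleton pool.
var pool = null; // hypothetical pool; a real one would hold connections

function getPool(config) {
  // The first caller creates the pool; later callers share it,
  // and their config argument is deliberately ignored.
  if (!pool) {
    pool = { config: config, queries: [] };
  }
  return pool;
}

module.exports = { getPool: getPool };
```

Every consumer then does `require("./db").getPool(...)` and gets the one shared instance, which is the behavior you want for a database pool.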
Background
We have a fairly complex Silverlight client which we are rewriting in HTML/JavaScript/CSS, built on top of the same web services. Actually, we have two different Silverlight clients which we are porting, and they share some common functionality.
I read the article on http://addyosmani.com/largescalejavascript/ and am planning to use the proposed architecture, and in particular the mediator pattern.
A 10000 feet overview of the pattern described by Addy:
code is divided into small Modules
Modules are only aware of a mediator object; Modules cannot communicate directly with other modules
the mediator has a simple interface for publishing and subscribing to messages
Modules can subscribe to messages (through the mediator API), giving a callback function
Modules can publish messages to the mediator, with a parameter object, and the mediator calls the callback method of any modules subscribed to the message, passing the parameter object
One of the main goals here is to achieve loose coupling between modules. So we can reuse the modules in the two clients. And test the modules in isolation. And the mediator should be the only global object we need, which has got to be good.
But although I like the idea, I have the feeling it is overly complicated in some cases, and that some of my team members will not be convinced. Let me explain by example:
Assume we have a helper function which performs a calculation - let's say it formats a string - and assume this function should be available to any module. This function could belong in a 'tools' or 'helper' module which is then reusable and testable.
To call this function from an arbitrary module, I have to publish a message - something like formatString - with my input string as a parameter, which the helper function has subscribed to. But before I publish the formatString message, I first have to subscribe to a message like formatStringResult with a callback function that can receive the result. And then, once I get the result back, I unsubscribe from the formatStringResult message.
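That round trip can be sketched like this (the mediator API and the message names are illustrative):

```javascript
// Minimal mediator, for illustration only.
var mediator = {
  channels: {},
  subscribe: function (ch, fn) {
    (this.channels[ch] = this.channels[ch] || []).push(fn);
  },
  unsubscribe: function (ch, fn) {
    this.channels[ch] = (this.channels[ch] || []).filter(function (f) {
      return f !== fn;
    });
  },
  publish: function (ch, data) {
    (this.channels[ch] || []).forEach(function (fn) { fn(data); });
  }
};

// Helper module: does the work and publishes the result.
mediator.subscribe("formatString", function (input) {
  mediator.publish("formatStringResult", input.toUpperCase());
});

// Calling module: three steps for what a direct call does in one.
var result;
function onResult(formatted) {
  result = formatted;
  mediator.unsubscribe("formatStringResult", onResult); // step 3
}
mediator.subscribe("formatStringResult", onResult);     // step 1
mediator.publish("formatString", "hello");              // step 2
```

Three mediator calls plus a named callback, versus `result = helper.formatString("hello")` with a direct reference, which is exactly the overhead the question is about.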
Question(s)
Should the mediator rather offer this type of helper functionality directly in its own interface?
Or should I extend the publish interface to allow an optional result parameter, where helper methods can directly write a result?
Is the tradeoff of having an extra mediator layer really worth the benefit of achieving loose coupling?
I'd really appreciate advice from developers with experience of achieving loose-coupling in 'complex' JavaScript applications.
You actually perfectly described the BarFoos application Framework:
https://github.com/jAndreas/BarFoos
I don't think that mediator is the pattern what you are looking for, at least not for what you described.
Just think of two objects triggering formatString at the same time. What would each get back in its formatStringResult?
Mediator is for broadcasting events to everyone who is listening. Publishers don't want to broadcast requests (e.g. formatString); rather, they want to notify others about a change in their own state. Note how the source and the consumer of the information are different. Having a mediator means those parties don't have to have a reference to each other to communicate, thereby lowering the coupling.