How can I make this pub/sub code more readable? - javascript

I am investigating the pub/sub pattern because I am reading a book that strongly advocates event-driven architecture for the sake of loose coupling. But I feel that the loose coupling is only achieved by sacrificing readability/transparency.
I'm having trouble understanding how to write easily-understood pub/sub code. The way I currently write my code results in a lot of one-to-one channels, and I feel like doing so is a bad practice.
I'm using require.js AMD modules, which means that I have many smaller-sized files, so I feel like it would be very difficult for someone to follow the flow of my publishes.
In my example code below, there are three different modules:
The UI / Controller module, handling user clicks
A translator module
A data storage module
The gist is that a user submits text, it gets translated to English, and then stored in a database. This flow is split across three modules, each in its own file.
// Main Controller Module
define(['pubSub'], function(pubSub) {
    submitButton.on('click', function() {
        var userText = textArea.val();
        pubSub.publish("userSubmittedText", userText);
    });
});

// Translator module
define(['pubSub'], function(pubSub) {
    function toEnglish(text) {
        // Do translation, producing translatedText
        pubSub.publish("translatedText", translatedText);
    }
    pubSub.subscribe("userSubmittedText", toEnglish);
});

// Database module
define(['pubSub'], function(pubSub) {
    function store(text) {
        // Store to database
    }
    pubSub.subscribe("translatedText", store);
});
For a reader to see the complete flow, he has to switch between the three modules. But how would you make clear where the reader should look after seeing the first pubSub.publish("userSubmittedText", userText)?
I feel like publishes are like a cliffhanger, where the reader wants to know what is triggered next, but he has to go and find the modules with the subscribed functions.
I could comment EVERY publish, explaining what modules contain the functions that are listening, but that seems impractical. And I don't think that is what other people are doing.
Furthermore, the above code uses one-to-one channels, which I think is bad style, but I'm not sure. Only the Translator module's toEnglish() function will ever subscribe to the "userSubmittedText" channel, yet I have to create a new channel for what is basically a single function call. While this way my Controller module doesn't need Translator as a dependency, it just doesn't feel like true decoupling.
This lack of function flow transparency is concerning to me, as I have no idea how someone reading such source code would know how to follow along. Clearly I must be missing something important. Maybe I'm not using a helpful convention, or maybe my publish event names are not descriptive enough?
Is the loose coupling of pub/sub only achieved by sacrificing flow transparency?

The idea of the publish subscribe pattern is that you don't make any assumptions about who has subscribed to a topic or who is publishing. From Wikipedia (http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern):
[...] Instead, published messages are characterized into classes,
without knowledge of what, if any, subscribers there may be.
Similarly, subscribers express interest in one or more classes, and
only receive messages that are of interest, without knowledge of what,
if any, publishers there are.
If your running code doesn't make any assumptions, your comments shouldn't either. If you want a more readable form of module communication, you can use require.js' dependency injection instead, which you already use for your pubSub module. That would make the code easier to follow, though it brings other disadvantages. It all depends on what you want to achieve...
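As a hedged sketch of that dependency-injection alternative (the factory names and the "[en] " stand-in translation are illustrative, not from the post): the controller receives the translator directly, so the flow is visible at the call site instead of hidden behind a channel name.

```javascript
function createTranslator(store) {
  return {
    toEnglish: function (text) {
      var translated = "[en] " + text; // stand-in for a real translation step
      store(translated);               // hand the result straight to storage
    }
  };
}

function createController(translator) {
  return {
    onSubmit: function (userText) {
      // The flow is visible right here: controller -> translator -> store
      translator.toEnglish(userText);
    }
  };
}

// Wire the modules together in one place, e.g. the application's main file
var saved = [];
var translator = createTranslator(function (text) { saved.push(text); });
var controller = createController(translator);
controller.onSubmit("hola");
```

The tradeoff is exactly the one the question describes: the wiring module now knows about all three parts, in exchange for a flow a reader can follow without searching for subscribers.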

Related

What are "use-cases" in the Clean Architecture?

I am trying to implement the Clean Architecture structure in an app that I am developing and I am having a hard time figuring out exactly what is what.
For example, if I am right, the entities of my application are Employee, Department, and EmployeeSkill; the entities also include all of the "validation" logic to ensure that these entities are valid.
And the use-cases are the various actions that I can do with these entities?
For example, use-cases about the Employee:
add-employee.js
remove-employee-by-id.js
update-employee-department.js
update-employee-phone-number.js
...and-more-employee-updates.js
Are these all actually use-cases?
Now the add and remove cases I don't think have much to discuss, but what about the updates? Should they be granulated like this?
Also, with such an architecture, if I want to update both the employee's department and phone number at the same time, won't I have to make two separate calls to the database for something that could be done with one, since the database adapter is injected into each use-case and every use-case starts with "finding" the entity in the database?
Defer thinking about the entities for now. Often you get stuck trying to abstract the code after your mental model of the world, and that is not as helpful as we are led to believe.
Instead, couple code that changes together for one reason into use-cases. A good start can be one use-case for each CRUD operation in the GUI. What gets affected by a new method, a new parameter, or a new class etc. is not part of the CA pattern; that is part of the normal tradeoffs you face when you write code.
I can't see any entities in your example. In my code base, ContactCard (I work on a yellow-pages-in-2021 kind of app) and UserContext (security) are the only entities; these two things are used all over the place.
Other things are just data holders and not really entities. I have many duplicates of the data holders so that things that are not coupled stay uncoupled.
Your repository should probably implement the bridge pattern. That means the business logic defines a bridge (an interface) that some repository implements. The use-case is not aware of database tables, so it does not have granular requirements (think of it as ordering food at McDonald's: you don't say "from the grill I want xxx, and from the fryer I want yyy").
The use-case is very demanding in the bridge definition. So much so that many repositories end up having API layers that implement the bridge and then adapt to the internal logic.
This is the difference between API layers in business apps and most B2C APIs. An enterprise API for a use-case is just what the use-case needs.
If you have already constrained yourself with a made-up model of the world and decided to split repositories along it, instead of splitting them per use-case, then you end up with poor alignment. Having the same SQL query, or parts of it, in more than one repository is not an issue. Over time the queries often come to look different even if they start out very similar.
I would call your example use-case UpdatePhoneNumberEverywhere. Then the UpdatePhoneNumberEverywhereRepository implementation can do whatever it wants; that is a detail the use-case does not care about.
Another one I might do is UpdatePhoneNumber, where the use-case accepts a strategy: Strategy.CASCADE or Strategy.LEAF, etc.
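A rough sketch of the bridge idea under these assumptions (the makeUpdatePhoneNumber factory, the updatePhone method name, and the in-memory repository are all illustrative, not from the answer): the use-case defines the repository interface it needs, and any implementation can be plugged in behind it.

```javascript
// The use-case defines the repository interface it needs (the "bridge")
// and performs the business rule; persistence details stay behind it.
function makeUpdatePhoneNumber(repository) {
  // repository must provide updatePhone(employeeId, phone)
  return function updatePhoneNumber(employeeId, phone) {
    if (!/^\+?[0-9 -]+$/.test(phone)) {
      throw new Error("invalid phone number");
    }
    return repository.updatePhone(employeeId, phone);
  };
}

// An in-memory implementation of the bridge; a SQL-backed one could be
// swapped in without the use-case noticing.
var db = { 1: { phone: null } };
var repo = {
  updatePhone: function (id, phone) {
    db[id].phone = phone;
    return db[id];
  }
};

var updatePhoneNumber = makeUpdatePhoneNumber(repo);
updatePhoneNumber(1, "+46 123 456");
```

Whether the repository does one SQL statement or several is invisible to the use-case, which is the point being made above.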
In terms of your table design, even though it's a detail, Contacts probably deserves to be broken out.
Not every use-case starts with finding something in the database. Commands and queries (or whatever you call them) are passed in, and the use-case does something useful with them.
The most practical way to write a use-case is to implement exactly what you need for the business requirement and to write all the tests against the use-case's public API. Just pass in plain data as a start; a dictionary is often fine.
Entities are often found later, when you cannot stand having so many versions of something: you need that something to be stable and the same all over, and your end users expect as much too. Then, just refactor.

Creating Cycle.js reusable modules

Let's imagine, in an OO world, I want to build a Torrent object which listens to the network and lets me interact with it. It would inherit from EventEmitter and would look something like this:
var torrent = new Torrent(opts)
torrent.on('ready', cb) // add torrent to the UI
torrent.on('metadata', cb) // update data in the UI
and I can also make it do things:
torrent.stop()
torrent.resume()
Then of course if I want to delete the torrent from memory I can call torrent.destroy().
The cool thing about this OO approach is that I can easily package this functionality in its own npm module, test the hell out of it, and give users a nice clean reusable API.
My question is, how do I achieve this with Cycle.js apps?
If I create a driver it's unclear how I would go about creating many torrents and having their own independent listeners. Also consider I'd like to package functionality in a way that others get to easily reuse it in other Cycle.js apps.
It seems to me that you are trying to solve the problem by thinking about it the way you would write imperative code.
I think creating Torrent instances with their own listeners is not something you should be doing in Cycle components.
I would go about it differently: create a Torrent module and figure out what its sources and sinks would be. If this module should be reusable and published, you can create it as a function that receives streams as arguments. Maybe something similar to the TodoMVC Task component (which is then used in its parent component).
Since this module can be created as a pure function, testing it should be at least just as easy.
This implementation of course depends on your requirements, but communication with the module would then be done only through streams, and since it would be declarative there would be no need for methods like stop() and destroy() that you would call from elsewhere.
How do I test it?
In Cycle.js you'd write a component with intent, model and view functions.
You'd test that intent(), for given input streams, produces the streams of actions you want. For model, you'd test that given HTTP and action streams you get the state you want, and for view, you'd test that given a state you get the VDOM you want.
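A minimal sketch of testing those pieces as pure functions. Real Cycle.js code works with streams (xstream); here plain values stand in for individual stream events so the idea runs without dependencies, and all names (INCREMENT, count, the event shape) are illustrative.

```javascript
// intent: map a raw DOM event to an action
function intent(domEvent) {
  if (domEvent.type === "click") return { type: "INCREMENT" };
  return { type: "NOOP" };
}

// model: compute the next state from the previous state and an action
function model(state, action) {
  if (action.type === "INCREMENT") return { count: state.count + 1 };
  return state;
}

// view: describe the DOM for a given state instead of touching it
function view(state) {
  return { tag: "div", text: "Count: " + state.count };
}

// Each piece is testable in isolation because it is a pure function
var action = intent({ type: "click" });
var state = model({ count: 0 }, action);
var vdom = view(state);
```

Because each function is pure, the tests need no DOM and no torrent/network machinery, which is the testing story the answer describes.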
One tricky bit with Cycle.js is that since it passes functions around, normal JavaScript objects that use the 'this' keyword are more trouble than they are worth due to 'this' context problems. If you are working with Cycle.js and think you might write a JS class for use with Isolate, Onionify, or Collections, you are most likely going in the wrong direction. See the MDN docs about 'this'.
how I would go about creating many torrents
The Cycle.js people have several ways to deal with groups of things like this.
This ticket describes some things that might work for that:
Wrap subapp in Web Component
Stanga and similar libraries.
Cycle Collections
Cycle Onionify

In JavaScript, is it a good practice to depend on object references to "listen" for changes?

I have to implement a repository pattern-like object that will maintain a list of items. The repo would look something like this:
var repository = {
    data: [],
    getAll: function() {
        return this.data;
    },
    update: function() { ... }
}
The end consumer of the repo's data would be some component. I am thinking of exploiting the reference to the repo's data array in order to update the DOM whenever it changes:
function ItemList() {
    this.data = repository.getAll();

    when (this.data is changed) {
        update the view
    }

    this.userInput = function() {
        repository.update();
    }
}
While it feels neat and supposedly uses a legit functionality, is it really a good idea? Should I use observer/notifications in the repository instead?
var repository = {
    ...
    onDataChange: function(callback) { ... }
}
An example (using Angular) you can find here: http://jsfiddle.net/xen8m148/
It depends on how you implement that absolutely-not-built-in "is changed" =) Generally, if you want to keep spurious processing down, a publish/subscribe model is better. If you don't care about wasted CPU cycles, you can use Object.observe to watch for object changes.
From a software engineering point of view, though, it looks like you're sharing your data between two owners, and that, rather than how you're listening for changes, is a potentially much bigger problem in the future.
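To make the observer variant from the question concrete, here is a minimal sketch of a repository with onDataChange; the listeners array and the notify-after-update style are illustrative, not a prescribed API.

```javascript
var repository = {
  data: [],
  listeners: [],
  getAll: function () {
    return this.data;
  },
  onDataChange: function (callback) {
    // register a subscriber interested in data changes
    this.listeners.push(callback);
  },
  update: function (item) {
    this.data.push(item);
    // notify every subscriber after the change has been applied
    this.listeners.forEach(function (cb) { cb(this.data); }, this);
  }
};

var seen = null;
repository.onDataChange(function (data) { seen = data.slice(); });
repository.update("first item");
```

This keeps the repository as the single owner of the data: consumers are told about changes rather than watching a shared array.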
I generally think a pub/sub pattern is unnecessary if you are communicating with a repository. Assuming your application uses a repository to abstract CRUD operations on an underlying persistence mechanism for all data in the system, then this repository is actually an important piece of the architecture.
The pub/sub pattern is useful when the communicating modules do not know about one another's existence. If a module/object is trying to communicate with an architectural object like a repository, it already knows about the repository by convention. If the repository already serves as an interface, an abstraction of an underlying complexity, why would you want your modules not to know about this interface?
However, if you are unsure about a repository being part of your architecture, or if you simply want the data and repository to be totally decoupled to the maximum degree, then yes a pub/sub would work.
That said, I believe for a front-end interpreted language like JS, the more straightforward approach is usually the better approach. I use pub/sub for component communication all the time, but I just don't see the need for it between architectural layers, where interfaces suffice. In fact, if you take the pub/sub pattern too far, you may end up with a namespace problem in your pub/sub messages.

JavaScript Publish / Subscribe Pattern: Showing Chain of Events?

Say I am using a pub/sub pattern with highly modularized code. When a function in one module sends out a 'publish', how do I make clear which functions in other modules are subscribing to that trigger?
For example:
// In module 1
function foo(data) {
    publish("trigger", data);
}

// In module 2
function bar(data) {}
subscribe("trigger", bar);

// In module 3
function baz(data) {}
subscribe("trigger", baz);
After reading module 1 and seeing that a 'publish' is being sent out, how would someone know where to look in my code for the subscribed callbacks?
An obvious solution might be to comment what modules contain functions that subscribe to the trigger, but that seems an impractical solution when dealing with a large number of publishes / subscribers.
I feel like I'm not fully understanding how to use the pub/sub pattern, since to me, the pattern seems to have no transparency whatsoever regarding function chains.
EDIT
My question pertains to making my code clear and easy to understand for someone reading my source code. I understand that at runtime I could programmatically find the list of stored subscribers by accessing the array of stored callbacks. But that does nothing to make my raw source code more easily understood.
For example, I currently use a pattern like this:
// Main Controller Module
function foo(data) {
    module2.bar(data);
    module3.baz(data);
}

// In module 2
function bar(data) {}

// In module 3
function baz(data) {}
For starters, what is the proper term for this pattern? I thought it was the 'mediator' pattern, but looking here, it seems a mediator is more like what I thought a pub/sub was?
With this pattern I feel the flow of my code is completely transparent. The reader doesn't need to dig around to find out which functions in other modules foo() might call.
But with the pub/sub pattern, once I send out the publish from foo(), it's like the reader has to somehow find the modules where the subscribed functions are.
But of course the downside of the above pattern is heavy dependency: module 1 needs both module 2 and module 3 injected before it can call bar() and baz().
So I want to adopt the loose coupling of the pub/sub pattern, but I also want to keep the function flow transparency that the above pattern gives me. Is this possible? Or is this just the inherent trade-off of a pub/sub pattern?
I thought the whole idea of publish/subscribe or mediator is to loosely couple objects. Object1 doesn't need to know who does what; it is only concerned with doing its own thing and notifying whoever is interested that it's done doing what it does.
I register listeners only in a controller class and not all over the code. When the controller needs to add or remove listeners, break your process up into steps that inform the controller first (create appropriate events for it).
For example:
We fetch data with XHR.
Based on the data we create processors, processors are created with factory.
Processors process data.
Data is displayed.
Process is finished.
In your controller you could have:
var Controller = {
    // fetch information and display it
    fetch : function(paramObj){
        var subscribeIds = [];
        // Having xhr listen to "fetch" can be done in an init function
        // (no need to add and remove every time), but make a note here
        // that it's registered in init and is part of this process
        subscribeIds.push(Mediator.subscribe(xhr.fetch, "fetch"));
        // xhr will trigger dataFetched
        subscribeIds.push(Mediator.subscribe(Controller.initProcessor, "dataFetched"));
        // Controller will trigger displayFetched
        subscribeIds.push(Mediator.subscribe(dom.displayFetched, "displayFetched"));
        subscribeIds.push(Mediator.subscribe(Controller.displayedFetched, "displayedFetched"));
        paramObj.subscribeIds = subscribeIds;
        Mediator.trigger("fetch", paramObj);
    },
    initProcessor : function(paramObj){
        var processor = Processor.make(paramObj.data.type);
        paramObj.html = processor.process(paramObj.data);
        Mediator.trigger("displayFetched", paramObj);
    },
    displayedFetched : function(paramObj){
        // You can decide the process is done here, or take other steps
        // based on paramObj.
        // You can unsubscribe listeners or leave them; when you leave them,
        // they should not be registered in the fetch function but rather
        // in an init function of Controller, with a comment saying
        // "basic fetch procedure"
        Controller.cleanupListeners(paramObj.subscribeIds);
    },
    cleanupListeners : function(listenerIds){
        Mediator.unSubscribe(listenerIds);
    }
}
The code looks more complicated than it needs to be. Someone looking at it may wonder why not let XHR make a Processor instance and tell it to process. The reason is that the Controller literally controls the flow of the application; if you want other things to happen in between, you can add them. As your application grows you'll add more and more processes, and sometimes refactor functions to do less specific things so they can be better reused. Instead of possibly having to change your code in several files, you now only redefine the process(es) in the Controller.
So to answer your question as to where to find the listeners and where events are registered: in the controller.
If you have a single Mediator object you can have it dump its listeners at any time; just write a dump method that will console.log the event names and each callback's toString().
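A minimal sketch of such a Mediator, with the dump helper described above. The subscribe(callback, topic) argument order mirrors the Controller example; the id-based unsubscribe and the internal topics structure are assumptions, not a known library API.

```javascript
var Mediator = {
  topics: {},
  nextId: 0,
  subscribe: function (callback, topic) {
    var id = ++this.nextId;
    (this.topics[topic] = this.topics[topic] || []).push({ id: id, callback: callback });
    return id; // the Controller example collects these ids for cleanup
  },
  trigger: function (topic, payload) {
    (this.topics[topic] || []).forEach(function (sub) { sub.callback(payload); });
  },
  unSubscribe: function (ids) {
    for (var topic in this.topics) {
      this.topics[topic] = this.topics[topic].filter(function (sub) {
        return ids.indexOf(sub.id) === -1;
      });
    }
  },
  dump: function () {
    // log every topic and the source of each listener, for debugging
    for (var topic in this.topics) {
      this.topics[topic].forEach(function (sub) {
        console.log(topic, sub.callback.toString());
      });
    }
  }
};
```

With a single Mediator like this, dump() answers "who is listening to what" at any moment, which is the debugging aid the answer suggests.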

How to implement the Javascript mediator (publish-subscribe) pattern

Background
We have a fairly complex Silverlight client which we are rewriting in HTML/JavaScript/CSS, built on top of the same web services. Actually, we have two different Silverlight clients which we are porting, and they share some common functionality.
I read the article on http://addyosmani.com/largescalejavascript/ and am planning to use the proposed architecture, and in particular the mediator pattern.
A 10000 feet overview of the pattern described by Addy:
code is divided into small Modules
Modules are only aware of a mediator object; Modules cannot communicate directly with other modules
the mediator has a simple interface for publishing and subscribing to messages
Modules can subscribe to messages (through the mediator API), giving a callback function
Modules can publish messages to the mediator, with a parameter object, and the mediator calls the callback method of any modules subscribed to the message, passing the parameter object
One of the main goals here is to achieve loose coupling between modules. So we can reuse the modules in the two clients. And test the modules in isolation. And the mediator should be the only global object we need, which has got to be good.
But although I like the idea, I have the feeling it is overly complicated in some cases, and that some of my team members will not be convinced. Let me explain by example:
Assume we have a helper function which performs a calculation - lets say it formats a string - and assume this function should be available to any module. This function could belong in a 'tools' or 'helper' module which is then reusable and testable.
To call this function from an arbitrary module I have to post a message, something like formatString, with my input string as a parameter, and the helper function has subscribed to the formatString message. But before I post the formatString message, I first have to subscribe to a message like formatStringResult with a callback function that can receive the result. And then, once I get the result back, I unsubscribe from the formatStringResult message.
Question(s)
Should the mediator rather offer this type of helper functionality directly in its own interface?
Or should I extend the publish interface to allow an optional result parameter, where helper methods can directly write a result?
Is the tradeoff of having an extra mediator layer really worth the benefit of achieving loose coupling?
I'd really appreciate advice from developers with experience of achieving loose-coupling in 'complex' JavaScript applications.
You have actually perfectly described the BarFoos application framework:
https://github.com/jAndreas/BarFoos
I don't think the mediator is the pattern you are looking for, at least not for what you described.
Just think of two objects triggering formatString at the same time. What would each get back in its formatStringResult?
A mediator is for broadcasting events to everyone who is listening. Publishers don't want to broadcast requests (e.g. formatString); rather, they want to notify others about a change in their own state. Note how the source and the consumer of the information are different. Having a mediator means those parties don't have to hold references to each other to communicate, thereby lowering the coupling.
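To illustrate the distinction, here is a small sketch (the mediator implementation, event names, and handlers are all illustrative): a module broadcasts a change in its own state, and any number of consumers react without the publisher knowing about them; nothing is requested and no reply is awaited.

```javascript
var mediator = {
  channels: {},
  subscribe: function (topic, callback) {
    (this.channels[topic] = this.channels[topic] || []).push(callback);
  },
  publish: function (topic, payload) {
    (this.channels[topic] || []).forEach(function (cb) { cb(payload); });
  }
};

// The publishing module announces its own state change...
function onUserSavedProfile(profile) {
  mediator.publish("profileSaved", profile);
}

// ...and any number of consumers react, unknown to the publisher.
var log = [];
mediator.subscribe("profileSaved", function (profile) {
  log.push("cache updated for " + profile.name);
});
mediator.subscribe("profileSaved", function (profile) {
  log.push("analytics recorded for " + profile.name);
});

onUserSavedProfile({ name: "Ada" });
```

Contrast this with formatString: a string-formatting helper is a synchronous request/response, so a plain injected dependency fits it better than a broadcast.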
