On using mixins in a CoffeeScript game engine - javascript

I am working on a CoffeeScript game engine for the HTML5 canvas. I came up with the "cool" idea of using mixins after I saw a very neat CoffeeScript implementation. I thought it might be a good way to flatten the deep object hierarchies that game objects usually end up with, by developing a set of mixin-based components, each with a very specific piece of functionality. Then, when developing an actual game, one could build unique game objects on the fly by starting from one component and mixing in a bunch of others. This reduces the hierarchies and allows for frequent changes.
Then I thought about the collisions that might come up, for example when a few components define a method with the same signature. Now I am not as excited as before.
What should I do? Is this a good approach? I still like it, especially because JS's underlying prototype mechanism makes it so easy to combine things on the fly.

You're talking about an entity component system. There are a couple written in JS; the most popular is Crafty, which is big but worth looking at. I recently wrote one in CoffeeScript (just for funsies; will probably never release it).
A few notes about collisions:
So first, the problem may be worse than you think: collisions will happen whenever two methods have the same name, because JS doesn't differentiate function signatures. It also might not be so bad: why not adopt a namespacing convention, where each behavior (meaning each method) is named after the component it belongs to, like burnable_burn?
To take a step back though, mixins aren't the only way to build this - behaviors (i.e. things a component can do) don't have to be methods at all. The motivating question I ask is, how do you trigger a behavior? For example, you might do:
if entity.hasComponent "burnable" # hasComponent provided by your framework
  entity.burn()
But that doesn't sound right to me; it creates a weird coupling between what's happening in your game and what components you have, and it's awkward to check if your entities implement the relevant component. Instead, I'd like behaviors to be listeners on events:
entity.send("applySeriousHeat") #triggers whatever behaviors are there
And then have your component do whatever it needs to do. So when you add a component to an entity, it registers listeners to events. Maybe it looks like (just sketching):
register: (entity) -> # called when you add a component to an entity
  entity.listen "applySeriousHeat", -> # thing I do when this event is sent to me
    # do burnination here
To bring that point home, if you do that, you don't care about collisions, because your behaviors don't have names. In fact, you want "collisions"; you want the ability to have more than one component respond to the same event. Maybe it burns and melts at the same time?
In practice, I used both setups together. I made entity.addComponent mix in the component's functions, since it's occasionally convenient to just call a behavior as a method. But mostly, the components declare listeners that call those methods, which helped with decoupling and reduced the awkwardness of having to use scoped names, since I don't call them directly in most cases.
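In plain JavaScript, that hybrid setup might look roughly like the sketch below; Entity, addComponent, listen and send are hypothetical names, not part of any particular framework.

// A minimal sketch of the hybrid approach: addComponent mixes the component's
// methods onto the entity AND lets the component register event listeners.
function Entity() {
  this.listeners = {};
}
Entity.prototype.listen = function (event, handler) {
  (this.listeners[event] = this.listeners[event] || []).push(handler);
};
Entity.prototype.send = function (event, payload) {
  (this.listeners[event] || []).forEach(function (handler) {
    handler.call(this, payload);
  }, this);
};
Entity.prototype.addComponent = function (component) {
  for (var key in component.methods) {
    this[key] = component.methods[key]; // mix the behaviors in as methods
  }
  if (component.register) component.register(this); // let it wire up listeners
  return this;
};

// A hypothetical "burnable" component.
var burnable = {
  methods: {
    burnable_burn: function () { console.log("burninating"); }
  },
  register: function (entity) {
    entity.listen("applySeriousHeat", function () { entity.burnable_burn(); });
  }
};

var e = new Entity().addComponent(burnable);
e.send("applySeriousHeat"); // -> "burninating"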

Related

Creating Cycle.js reusable modules

Let's imagine, in an OO world, that I want to build a Torrent object which listens to the network and lets me interact with it. It would inherit from EventEmitter and look something like this:
var torrent = new Torrent(opts)
torrent.on('ready', cb) // add torrent to the UI
torrent.on('metadata', cb) // update data in the UI
and I can also make it do things:
torrent.stop()
torrent.resume()
Then of course, if I want to delete the torrent from memory, I can call torrent.destroy().
The cool thing about this OO approach is that I can easily package this functionality in its own npm module, test the hell out of it, and give users a nice clean reusable API.
My question is, how do I achieve this with Cycle.js apps?
If I create a driver, it's unclear how I would go about creating many torrents, each with its own independent listeners. Also consider that I'd like to package the functionality in a way that others can easily reuse in other Cycle.js apps.
It seems to me that you are trying to solve the problem the way you would in imperative code.
I think creating Torrent instances with their own listeners is not something you should be doing in Cycle.js components.
I would go about it differently: create a Torrent module and figure out what its sources and sinks would be. If this module should be reusable and published, you can create it as a function that receives streams as arguments. Maybe something similar to the TodoMVC Task component (which is then used in its parent component).
Since this module can be created as a pure function, testing it should be at least as easy.
This implementation of course depends on your requirements, but communication with the module would then happen only through streams, and since it would be declarative there would be no need for methods like stop() and destroy() that you would call from elsewhere.
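As a rough sketch (not an established API; the torrentClient source/sink name and the event shapes here are assumptions), such a module could be a plain function from sources to sinks:

const xs = require('xstream').default;

// Torrent component: a pure function of streams. "torrentClient" is a
// hypothetical driver; the real source/sink names depend on your setup.
function Torrent(sources) {
  // intent: turn DOM events into command streams
  const stop$ = sources.DOM.select('.stop').events('click').mapTo({ type: 'stop' });
  const resume$ = sources.DOM.select('.resume').events('click').mapTo({ type: 'resume' });

  // model: fold network events from the driver into torrent state
  const state$ = sources.torrentClient
    .fold((state, event) => Object.assign({}, state, event), { status: 'idle' });

  // view: a placeholder here; normally this would map state to virtual DOM nodes
  const vdom$ = state$.map(state => ({ tagName: 'div', text: state.status }));

  return {
    DOM: vdom$,
    torrentClient: xs.merge(stop$, resume$) // commands flow out as a sink, no methods needed
  };
}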
How do I test it?
In Cycle.js you'd write a component with intent, model and view functions.
You'd test that intent(), for given input streams, produces the streams of actions that you want. For model, you'd test that given HTTP and action streams, you get the state you want, and for view, you'd test that given a state you get the virtual DOM you want.
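For instance, a model() function can be tested in isolation by feeding it synchronous streams and collecting what comes out; the stream shapes below are just assumptions for illustration:

const xs = require('xstream').default;

// model: folds "stop" and "resume" actions into a status stream.
function model(actions) {
  return xs.merge(
    actions.stop$.mapTo('stopped'),
    actions.resume$.mapTo('downloading')
  ).startWith('idle');
}

const seen = [];
model({ stop$: xs.of(null), resume$: xs.empty() }).addListener({
  next: state => seen.push(state),
  error: err => console.error(err),
  complete: () => console.log(seen) // expect ['idle', 'stopped']
});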
One tricky bit with Cycle.js is that since it passes functions around, normal JavaScript objects that use the 'this' keyword are more trouble than they're worth, due to 'this' context problems. If you are working with Cycle.js and you think you might write a JS class for use with Isolate, Onionify, or Collections, you are most likely going in the wrong direction. See the MDN docs about 'this'.
"how I would go about creating many torrents"
The Cycle.js people have several ways to deal with groups of things like this.
This ticket describes some things that might work for that:
Wrap subapp in Web Component
Stanga and similar libraries.
Cycle Collections
Cycle Onionify

React Flux: Store dependencies

So I have recently been playing with React and the Flux architecture.
Let's say there are two stores, A and B. A has a dependency on B, because it needs some value from B. So every time the dispatcher dispatches an action, B.MethodOfB gets executed first, then A.MethodOfA.
I am wondering: what are the advantages of this architecture over registering A as a listener of B and just executing A.MethodOfA every time B emits a change event?
Btw: think of a Flux implementation without the "switch case" of Facebook's example dispatcher!
The problem with an evented approach is that you don't have a guarantee as to which handlers will handle a given event first. So in a very large, complex app, this can turn into a tangled web where you're not really sure what is happening when, which makes dependency management between stores very difficult.
The benefits of the callback-based dispatcher are twofold: the order in which stores update themselves is declared in the stores that need this ordering, and it is also guaranteed to work exactly as intended. And this is one of the primary purposes of Flux -- getting the state of an app to be predictable, consistent and stable.
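With Facebook's Dispatcher, that ordering is declared via waitFor(), roughly like this (the action type and store contents are made up for illustration):

const { Dispatcher } = require('flux');
const dispatcher = new Dispatcher();

let valueFromB = null;

// Store B registers its callback and keeps its token.
const storeBToken = dispatcher.register(payload => {
  if (payload.type === 'SOME_ACTION') {
    valueFromB = payload.data; // B updates itself
  }
});

// Store A declares, inside its own callback, that it must run after B.
dispatcher.register(payload => {
  if (payload.type === 'SOME_ACTION') {
    dispatcher.waitFor([storeBToken]); // guarantees B has already handled the action
    console.log('A can now safely read', valueFromB);
  }
});

dispatcher.dispatch({ type: 'SOME_ACTION', data: 42 });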
In a very small app that is guaranteed to not grow or change over time, I can't argue with what you are suggesting. But small apps have a tendency to grow into large ones, eventually. This often happens before anyone realizes it's happening.
There are certainly other approaches to Flux. Quite a few different implementations have been created, and they take different approaches to this problem. However, I'm not sure which of these experiments scale well. On the other hand, the dispatcher in Facebook's Flux repo and the approach described in the documentation have scaled to truly gigantic applications and are quite battle tested.
In my opinion, this dispatcher is something of an anti-pattern.
In distributed architectures based on event sourcing or CQRS, autonomous components do not have to depend on each other, because they share the same event log.
Being on the same host (the browser or a mobile device) doesn't mean you can't apply these concepts. However, having autonomous stores (no store dependencies) means that two stores in the same browser will probably hold duplicate data, since the same data may be needed by two different stores. That is a cost to pay, but I think it has long-term benefits because it removes the store dependencies: you can refactor one store entirely without any impact on components that do not use that store.
In my case, I use such a pattern and create some kind of autonomous widgets. An autonomous widget is:
A store that listens to an event stream
A component
A LESS file (maybe useless now because of React Styles?)
The advantage of this is that if there is a bug in a given widget, the bug almost never involves any file other than the three mentioned above ;)
The drawback is that stores that host the same data must also maintain it. On some events, many stores may have to do the same action on their local data.
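As a small sketch of what that looks like (the event bus and store names here are made up), two autonomous stores can each maintain their own copy of the same data by listening to the same event:

const { EventEmitter } = require('events');
const eventLog = new EventEmitter();

// Widget A's store keeps the user name for its own rendering needs.
const userBadgeStore = { userName: null };
eventLog.on('USER_RENAMED', e => { userBadgeStore.userName = e.name; });

// Widget B's store keeps the same user name, duplicated on purpose.
const settingsStore = { userName: null };
eventLog.on('USER_RENAMED', e => { settingsStore.userName = e.name; });

eventLog.emit('USER_RENAMED', { name: 'Alice' });
// Both stores updated themselves independently; neither knows about the other.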
I think this approach scales better to larger projects.
See my insights here: Om but in javascript

Best practices for when to use Pub/Sub vs Dependency Injection

In most modern JS frameworks, the best practice for loosely coupling your UI components is with some implementation of Pub / Sub.
My question is: doesn't this make debugging and maintaining your app more difficult, when dependency injection could achieve the same result (loose coupling)?
For example, say my component should open a dialog when clicked. This needs to happen, or else the UI will appear broken. To me it seems more readable to have the component make an explicit call to some dialog service. In the wild, though, I see this problem solved more often with pub/sub, so maybe I am missing something.
When using both methods together, where is a good place to draw the line between firing an event and fulfilling the action with an injected service?
Pub-sub is great for application-wide events where the number of potential subscribers can vary or is unknown at the moment of raising an event.
Injection always sets up a relation between two objects; sure, you can create decorators/composites and inject compound objects made of simpler ones, but it gets messy the moment you start doing that.
Take a tree and a list for example. When you click a node, the list should be rebuilt. Sounds like injection.
But then you realize that some nodes trigger other actions, headings are updated, processes are triggered in the background etc. Raising an event and handling it in many independent subscribers is just easier.
I would say that injection works great across layers, for example when you compose a view and a controller, or a view and its backing storage.
Pub-sub works great between objects in the same layer, for example when different independent controllers exchange messages and react accordingly.
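A tiny sketch of that split, with hypothetical names throughout: the dialog service is injected (the component depends on it explicitly), while same-layer reactions hang off an event bus:

const { EventEmitter } = require('events');

// Dependency injection across layers: the controller explicitly receives
// the dialog service it needs, so the dependency is visible and testable.
class NodeController {
  constructor(dialogService, bus) {
    this.dialogService = dialogService;
    this.bus = bus;
  }
  onNodeClicked(node) {
    this.dialogService.open('Details for ' + node.name); // direct, injected call
    this.bus.emit('node:selected', node); // broadcast for whoever cares
  }
}

// Pub/sub within the same layer: independent subscribers react to the event
// without the tree controller knowing they exist.
const bus = new EventEmitter();
bus.on('node:selected', node => console.log('list rebuilds for', node.name));
bus.on('node:selected', node => console.log('heading updates to', node.name));

const dialogService = { open: text => console.log('dialog:', text) };
new NodeController(dialogService, bus).onNodeClicked({ name: 'Reports' });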

Backbone.js - How to handle multiple models with the same id and type? Or how to avoid that situation

I've been running into a few situations recently where I have multiple views on the page that need to be backed by a single model. That way, just by modifying the model, all of the relevant views automatically update their appearance. The problem is that, in order to achieve this at the moment, I would need to write code that actively seeks out these models. I will occasionally get lucky and have a model provided to me through an event handler, but usually this isn't the case. I'm looking for a good way to have a single instance of a model on the page at a time, and then just references to that model everywhere else. How do people commonly handle this need? Backbone-relational handles it by keeping a central store using lawnchair.js, and I have heard of people using a single global collection as a registry of all of the models.
Short Answer:
One model shared by multiple views (which you could call "duplicate models") is actually a perfectly valid (and useful) Backbone pattern; it's not a problem at all. From what I can tell, the real problem here is that it's difficult to get that model into the multiple views, so I'd suggest solving that instead.
Longer Answer:
The problem with passing a model to multiple views is that it inherently couples views together (in order to provide that shared model to each view, the parent view needs to also have that model, as do any "in between" views). Now, as programmers we've been taught to keep things as encapsulated as possible, so coupling views might seem "wrong" at first. However, coupling views is actually desirable: the key is to properly limit which views are coupled.
@Shauna suggested an excellent rule of thumb for this: "a view only cares about itself and creating its children." If you follow this rule, the coupling between your views won't become problematic, and you'll be able to create flexible, maintainable code (far more maintainable than if you were to use global variables, as then you'd really lose encapsulation).
In light of all that, let's examine a quick example. Let's say you have views A, B, and C that all use model X, but you're building A, B and C in totally different places in the code (making it difficult to share X between them).
1) First, I'd look at why A, B, and C are being built in such different spots, and see if I can't move them closer together.
2) If they have to be built so far apart, I'd then look at whether they have something in common I can exploit; for instance, do all those spots all share some related object? If so, then maybe we can put X on that object to share it with all our views.
3) If there's no connection either in code placement or in common variables between A, B, and C, then I'd have to ask "should I really be sharing a model between them at all?"
Boring Story About Author:
When I first started using Backbone, I would often find myself arguing with my co-workers because I wanted to make global variables, and they were firmly against them. I tried to explain to them that if I couldn't use globals I'd have to pass models from view A to view B to view C to view D to view E, when only E would actually need that model. Such a waste!
Except (as I've since learned), I was wrong. Globals are a horrible idea, at least from a maintainability standpoint. Once you start using them, you quickly have no idea what code is affecting what, because you lose the encapsulation you'd normally have between the views that use that global variable. And if you're working on a meaningfully large project (i.e. one that is worth using Backbone for), encapsulation is one of the only ways to keep your code sane.
Having used Backbone a lot in the time since, I now firmly believe that the right answer is to pass your model to your view as you create it. This might mean passing models through intermediary views that don't directly use them, but doing so will give you much better, more maintainable code than passing models around via globals. As awkward as it may feel at first, passing the models into the views when you create them is proper practice.
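In code, that just means handing the same model instance to each view's constructor; a minimal sketch (view names made up, assuming Backbone and jQuery are loaded on the page):

var user = new Backbone.Model({ name: 'Alice' });

var BadgeView = Backbone.View.extend({
  initialize: function () {
    // re-render whenever the shared model changes
    this.listenTo(this.model, 'change', this.render);
  },
  render: function () {
    this.$el.text(this.model.get('name'));
    return this;
  }
});

var ProfileView = Backbone.View.extend({
  initialize: function () {
    this.listenTo(this.model, 'change', this.render);
  },
  render: function () {
    this.$el.html('<h1>' + this.model.get('name') + '</h1>');
    return this;
  }
});

// Both views receive a reference to the same model instance.
var badge = new BadgeView({ model: user });
var profile = new ProfileView({ model: user });

user.set('name', 'Bob'); // both views re-render from the single shared model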
Once you get more familiar with Backbone, you will likely find that you only rarely need to pass a model through more than one intermediary view. In fact, you'll likely notice that whenever you find yourself having to pass a model through more than one intermediary view, you've detected a "code smell", and the real issue is that your code needs refactoring.
But as with everything else on Stack Overflow, your mileage may vary ;-)

I want to stop using OOP in javascript and use delegation instead

After dabbling with JavaScript for a while, I have become progressively convinced that OOP is not the right way to go, or at least not extensively. Having two or three levels of inheritance is fine, but going full OOP like one would in Java just doesn't seem to fit.
The language supports compositing and delegation natively. I want to use just that. However, I am having trouble replicating certain benefits from OOP.
Namely:
How would I check if an object implements a certain behavior? I have thought of the following methods:
Check if the object has a particular method. But this would mean standardizing method names, and in a big project it can quickly become cumbersome and lead to the Java problem (object.hasMethod('emailRegexValidatorSimpleSuperLongNotConflictingMethodName')...). It would just move the OOP problem around, not fix it. Furthermore, I could not find information on the performance of checking whether a method exists.
Store each composited object in an array and check whether the object contains the compositor, something like object.hasComposite(compositorClass)... But that's not really elegant either, and is once again OOP, just not in the standard way.
Have each object carry an "implements" array property, and leave it to the object to say whether it implements a certain behavior, whether through composition or natively. Flexible and simple, but it requires remembering a number of conventions. It is my preferred method so far, but I am still looking.
How would I initialize an object without repeating all the set-up for composited objects? For example, if I have a "textInput" class that uses a certain number of validators, which have to be initialized with variables, and an "emailInput" class which uses the exact same validators, it is cumbersome to repeat the code. And if the interface of the validators changes, the code has to change in every class that uses them. How would I go about setting that up easily? The API I am thinking of should be as simple as object.compositors('emailValidator', 'lengthValidator', '...').
Is there any performance loss associated with having most of the functions that run in the app go through an apply()? Since I am going to be using delegation extensively, basic objects will most probably have almost no methods. All methods will be provided by the composited objects.
Any good resources? I have read countless posts about OOP vs delegation, and about the benefits of delegation, etc., but I can't find anything that discusses "JavaScript delegation done right" in the scope of a large framework.
edit
Further explanations:
I don't have code yet; I have been working on a framework in pure OOP, and I got stuck needing multiple inheritance. Thus, I decided to drop classes entirely. So I am now at a purely theoretical level, trying to make sense of this.
"Compositing" might be the wrong word; I am referring to the composite pattern, which is very useful for tree-like structures. It's true that tree structures are rare on the front end (well, except for the DOM of course), but I am developing for node.js.
What I mean by "switching from OOP" is that I am going to move away from defining classes, using the "new" operator, and so on; I intend to use anonymous objects and extend them with delegators. Example:
var a = {};
compositor.addDelegates(a,["validator", "accessManager", "databaseObject"]);
So a "class" would be a function with predefined delegators:
function getInputObject(type, validator) {
  var input = {};
  compositor.addDelegates(input, [compositor, renderable("input" + type), "ajaxed"]);
  if (validator) { input.addDelegate(validator); }
  return input;
}
Does that make sense?
1) How would I check if an object implements a certain behavior?
Most people don't bother testing for method existence like this.
If you want to test for methods in order to branch and do different things depending on whether they're found, you are probably doing something evil (this kind of instanceof-style check is usually a code smell in OO code).
If you are just checking whether an object implements an interface for error-checking purposes, it is not much better than not testing at all and letting an exception be thrown when the method is not found. I don't know anyone who routinely does this kind of checking, but I am sure someone out there is doing it...
2) How would I initialize an object without repeating all the set-up for composited objects?
If you wrap the inner object construction code in a function or class then I think you can avoid most of the repetition and coupling.
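For example (a sketch with made-up validator and input names), a factory that builds the shared validators keeps the repetition and the coupling in one place:

// Shared construction of the validators lives in one factory.
function makeValidators(options) {
  return [
    { name: 'required', validate: function (v) { return v.trim().length > 0; } },
    { name: 'length', validate: function (v) { return v.length <= options.maxLength; } }
  ];
}

function makeTextInput(options) {
  return { type: 'text', validators: makeValidators(options) };
}

function makeEmailInput(options) {
  var input = makeTextInput(options); // reuse the exact same setup
  input.type = 'email';
  input.validators.push({ name: 'email', validate: function (v) { return v.indexOf('@') !== -1; } });
  return input;
}

var email = makeEmailInput({ maxLength: 100 });
// If a validator's interface changes, only makeValidators needs updating.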
3) Is there any performance loss associated with having most of the functions that run in the app go through an apply()?
In my experience, I prefer to avoid dealing with 'this' unless strictly necessary. 'this' is fiddly, it breaks inside callbacks (which I use extensively for iteration and async stuff), and it is very easy to forget to set it correctly. I try to use more traditional approaches to composition. For example:
Having each owned object be completely independent, without needing to look at its siblings or owner. This allows me to just call its methods directly and let it be its own 'this'.
Giving the owned objects a reference to their owner, either as a property or as a parameter passed to their methods. This allows the composition units to access the owner without depending on 'this' being set correctly (see the sketch after this list).
Using mixins, flattening the separate composition units into a single level. This has big name-clash issues but allows everyone to see each other and share the same 'this'. Mixins also decouple the code from changes in the composition structure, since different composition divisions will still flatten to the same mixed object.
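A minimal sketch of that owner-reference approach (the entity/health names are made up): the owned object is handed its owner at construction time, so nothing relies on apply() setting 'this':

function makeHealth(owner, max) {
  return {
    current: max,
    damage: function (amount) {
      this.current -= amount;
      if (this.current <= 0) owner.emit('died'); // reach the owner explicitly
    }
  };
}

function makeEntity() {
  var listeners = {};
  var entity = {
    on: function (event, fn) { (listeners[event] = listeners[event] || []).push(fn); },
    emit: function (event) { (listeners[event] || []).forEach(function (fn) { fn(); }); }
  };
  entity.health = makeHealth(entity, 10); // owner passed in at construction
  return entity;
}

var e = makeEntity();
e.on('died', function () { console.log('game over'); });
e.health.damage(12); // -> "game over"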
4) Any good resources?
I don't know, so tell me if you find one :)
