Backbone.Wreqr vs plain JavaScript object

What are the main benefits Backbone.Wreqr has over a plain JS object, given that both have access to Marionette's event aggregator?
Wouldn't assigning and calling methods on an object work the same way as Commands / RequestResponse? As far as I can see, the only gain is a +1 to semantics/readability.
https://github.com/marionettejs/backbone.wreqr
Can someone please enlighten me? This is my first Backbone (and modular) application.

The benefits are:
event and command handling is optional, and you don't have to check for undefined handlers yourself
optionally multiple handlers for each event
lazy execution of commands (fire the event first, register the handler later, and it will be executed immediately)
you can define the execution context without additional helpers like $.proxy (sketched below)
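To make that concrete, here is a minimal sketch of the Commands and RequestResponse objects, based on the Wreqr README linked above (the handler names and the browser-global setup are illustrative assumptions, not code from the question):
// assumes Backbone and Backbone.Wreqr are already loaded as globals
var commands = new Backbone.Wreqr.Commands();
var reqres = new Backbone.Wreqr.RequestResponse();

// lazy execution: the command fired here is stored...
commands.execute('clearCache');
commands.setHandler('clearCache', function () {
    console.log('cache cleared');   // ...and runs as soon as the handler is registered
});

// request/response: the caller neither knows nor cares who answers
reqres.setHandler('currentUser', function () {
    return { name: 'anna' };        // hypothetical handler
});
var user = reqres.request('currentUser');

// a third argument to setHandler sets the execution context, which is the
// "no $.proxy needed" point from the list above:
// commands.setHandler('clearCache', someView.clearCache, someView);
With a plain object you would have to implement the undefined-handler checks, the storage of not-yet-handled commands, and the context binding yourself.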

It provides implementations of several common messaging patterns, including the Event Aggregator Pattern, Command Pattern, and Observer Pattern.
These patterns facilitate decoupling of implementations to reduce object dependencies. Consider a simple "Combat" style game consisting of a tank and several targets. Without messaging patterns, the tank needs to have explicit knowledge about the targets and how they work, and in fact cannot exist without the target definition:
var Tank = function(targets) { this.targets = targets; };
Tank.prototype.fire = function() {
    var self = this,
        HpLoss = -500;
    _.each(this.targets, function(target) {
        // a target dies only if it cannot withstand the hit
        if (self.isNear(target.coordinates) && !target.canWithstand(HpLoss)) {
            target.die();
        }
    });
};

var target1 = new Target(coordinatesA, armorA);
var target2 = new Target(coordinatesB, armorB);
var tank = new Tank([target1, target2]);
Using messaging patterns such as Observer, the tank in the code above doesn't need any knowledge of its targets; rather, the targets can decide for themselves whether they should die:
// listenTo / trigger here assume Backbone.Events is mixed into both prototypes,
// e.g. _.extend(Tank.prototype, Backbone.Events); _.extend(Target.prototype, Backbone.Events);
var Target = function() {};
Target.prototype.calculateDamage = function(data) {
    if (this.isNear(data.coordinates) && !this.canWithstand(data.damage)) {
        this.die();
    }
};

var Tank = function() {};
Tank.prototype.fire = function() {
    this.trigger('fire', { damage: 400, coordinates: this.location });
};

// Now Tank is entirely self-contained, and some external mediator can
// make things happen at will:
function main() {
    var target1 = new Target(coordinatesA, armorA);
    var target2 = new Target(coordinatesB, armorB);
    var tank = new Tank();
    target1.listenTo(tank, 'fire', target1.calculateDamage);
    target2.listenTo(tank, 'fire', target2.calculateDamage);
    tank.fire();
    // targets added later can subscribe without the tank changing at all
    var target3 = new Target(coordinatesB, armorB);
    target3.listenTo(tank, 'fire', target3.calculateDamage);
}
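To tie this back to the question: with Wreqr, the external mediator role is usually played by a shared event aggregator, so the tank and the targets never hold references to each other at all. A rough sketch (the channel name and wiring are illustrative, not part of the answer above):
var vent = new Backbone.Wreqr.EventAggregator();

Tank.prototype.fire = function () {
    vent.trigger('tank:fire', { damage: 400, coordinates: this.location });
};

// the third argument to on() sets the callback context, so no $.proxy is needed
vent.on('tank:fire', target1.calculateDamage, target1);
vent.on('tank:fire', target2.calculateDamage, target2);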


Node Function Scope

I jumped into the deep end recently and have been slowly learning to swim. I'm working on a CLI for building out a simple text game world. That code is becoming a convoluted mess, so I have tried to recreate the error I am getting in a simpler form below.
Try as I might, I can't seem to understand the best way to structure all of my functions. In my project I have a parser function that breaks input up and searches for a 'verb' to invoke via a try/catch block. When a verb, e.g. 'look', runs, it accesses my database module and sends a query based on several parameters to return the description of a room or thing. Because this is all asynchronous, virtually everything is wrapped in a promise, but I am leaving that out of this example. The following is not the actual project, just a simple recreation of the way I have my objects set up.
APP:
// ***********************
const player = require('./scope_test_player');
player.look();
player.water();
Module1:
// ***********************
const apple_tree = require('./scope_test_apple_tree');
module.exports = {
    look: function() {
        console.log(
            'The apple tree is ' + apple_tree.height + 'ft tall and has '
            + apple_tree.apples + ' apples growing on it'
        );
    },
    water: function() {
        apple_tree.grow();
    }
};
Module2:
// ***********************
const player = require('./scope_test_player');
module.exports = {
    height: 10,
    nutrition: 0.3,
    apples: [],
    fertilize: function(number) {
        this.nutrition += number;
    },
    grow: function() {
        this.height += this.nutrition;
    }
};
In the above code I get 'TypeError: apple_tree.grow is not a function' from water, or undefined from look. This is the bane of my existence, and I have been getting it seemingly at random in my main project, which leads me to believe I don't understand scope. I know I can require the module within the function and it will work, but that is hideous and would add hundreds of lines of code by the end. How do I cleanly access the functions of objects from within other objects?
Your problem is that you have a cyclic dependency in your project and that you overwrite the exports property of the module. Because of that, and the way Node caches required modules, you will get the original module.exports object in the scope_test_player file and not the one you have overwritten. To solve that you need to write it this way:
// ***********************
const apple_tree = require('./scope_test_apple_tree');
module.exports.look = function() {
    console.log(
        'The apple tree is ' + apple_tree.height + 'ft tall and has ' + apple_tree.apples + ' apples growing on it'
    );
};
module.exports.water = function() {
    apple_tree.grow();
};
And
// ***********************
const player = require('./scope_test_player');
module.exports.height = 10;
module.exports.nutrition = 0.3;
module.exports.apples = [];
module.exports.fertilize = function(number) {
    this.nutrition += number;
};
module.exports.grow = function() {
    this.height += this.nutrition;
};
But this is a really bad design in general and you should find another way to solve it. You should always avoid loops/circles in your dependency tree.
UPDATE
In Node, each file is wrapped in a loader function like this:
function moduleLoaderFunction(module, exports /* some other parameters that are not relevant here */)
{
    // the original code of your file
}
node.js internally does something like this for a require:
var loadedModules = {};
function require(moduleOrFile) {
    var resolvedPath = getResolvedPath(moduleOrFile);
    if (!loadedModules[resolvedPath]) {
        // if the file was not loaded already, create an entry in the loaded-modules object
        loadedModules[resolvedPath] = {
            exports: {}
        };
        // call the loader function with the initial values
        moduleLoaderFunction(loadedModules[resolvedPath], loadedModules[resolvedPath].exports);
    }
    return loadedModules[resolvedPath].exports;
}
Because of the cyclic require, the require function returns the original loadedModules[resolvedPath].exports, the one that was initially set before you assigned your own object to it.
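A minimal illustration of that gotcha, using hypothetical files a.js and b.js (not from the question):
// a.js
const b = require('./b');
module.exports = { hello: function () { return 'hi from a'; } };

// b.js
const a = require('./a');   // cycle: a.js is still loading, so this is its *original* exports object ({})
module.exports = {
    callA: function () {
        return a.hello();   // TypeError: a.hello is not a function, even after a.js has finished,
    }                       // because a.js replaced module.exports with a brand-new object
};

// main.js
const a = require('./a');
const b = require('./b');
console.log(b.callA());
If a.js instead attached its functions to the existing object (module.exports.hello = ...), b.js would be holding the same object that later gains the property, and callA() would work once loading has finished.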
Is Module1 = scope_test_player and Module2 = scope_test_apple_tree?
Maybe you have a cyclic reference here?
APP requires scope_test_player
// loop
scope_test_player requires scope_test_apple_tree
scope_test_apple_tree requires scope_test_player
// loop
As far as I can see, scope_test_apple_tree doesn't use player.
Can you try to remove:
const player = require('./scope_test_player');
from Module2 ?
There are a few issues to address.
Remove the player require in Module 2(scope_test_apple_tree.js):
const player = require('./scope_test_player')
It doesn't do any damage keeping it there but it's just unnecessary.
Also, replace =+ with += in fertilize and grow, which is what I think you are going for.
I was able to run the code naturally with those fixes.
If you want to refactor, I'd probably flatten out the require files and do it in the main file controlling the player actions, and explicitly name the functions with what is needed to run them (in this case, the tree).
Keeping mostly your coding conventions, my slight refactor would look something like:
index.js
const player = require('./scope_test_player');
const apple_tree = require('./scope_test_apple_tree');
player.lookAtTree(apple_tree);
player.waterTree(apple_tree);
scope_test_player.js
module.exports = {
    lookAtTree: function(tree) {
        console.log(
            'The apple tree is ' + tree.height + 'ft tall and has '
            + tree.apples.length + ' apples growing on it'
        );
    },
    waterTree: function(tree) {
        tree.grow();
        console.log('The apple tree grew to', tree.height, 'in height');
    }
};
scope_test_apple_tree.js
module.exports = {
    height: 10,
    nutrition: 0.3,
    apples: [],
    fertilize: function(number) {
        this.nutrition += number;
    },
    grow: function() {
        this.height += this.nutrition;
    }
};
Yes, I had circular dependencies in my code because I was unaware of the danger they posed. When I removed them from the main project, sure enough, it started working again. It now seems I'm going to be forced into redesigning the project, as having two modules randomly referencing each other is going to cause more problems.

Custom browser actions in Protractor

The problem:
In one of our tests we have a "long click"/"click and hold" functionality that we solve by using:
browser.actions().mouseDown(element).perform();
browser.sleep(5000);
browser.actions().mouseUp(element).perform();
Which we would like to ideally solve in one line by having sleep() a part of the action chain:
browser.actions().mouseDown(element).sleep(5000).mouseUp(element).perform();
Clearly, this would not work since there is no "sleep" action.
Another practical example could be "human-like typing". For instance:
browser.actions().mouseMove(element).click()
.sendKeys("t").sleep(50) // we should randomize the delays, strictly speaking
.sendKeys("e").sleep(10)
.sendKeys("s").sleep(20)
.sendKeys("t")
.perform();
Note that these are just examples, the question is meant to be generic.
The Question:
Is it possible to extend browser.actions() action sequences and introduce custom actions?
Yes, you can extend the actions framework. But, strictly speaking, getting something like:
browser.actions().mouseDown(element).sleep(5000).mouseUp(element).perform();
means messing with Selenium's guts. So, YMMV.
Note that the Protractor documentation refers to webdriver.WebDriver.prototype.actions when explaining actions, which I take to mean that it does not modify or add to what Selenium provides.
The class of object returned by webdriver.WebDriver.prototype.actions is webdriver.ActionSequence. The method that actually causes the sequence to do anything is webdriver.ActionSequence.prototype.perform. In the default implementation, this function takes the commands that were recorded when you called .sendKeys() or .mouseDown() and has the driver with which the ActionSequence is associated schedule them in order. So adding a .sleep method CANNOT be done this way:
webdriver.ActionSequence.prototype.sleep = function (delay) {
    var driver = this.driver_;
    driver.sleep(delay);
    return this;
};
If you did it that way, the sleep would be scheduled out of order, before the rest of the sequence. What you have to do is record the effect you want so that it is executed later.
Now, the other thing to consider is that the default .perform() only expects to execute webdriver.Command objects, which are commands to be sent to the browser. Sleeping is not one such command, so .perform() has to be modified to handle what we are going to record with .sleep(). In the code below I've opted to have .sleep() record a function, and modified .perform() to handle functions in addition to webdriver.Command.
Here is what the whole thing looks like, once put together. I've first given an example using stock Selenium and then added the patches and an example using the modified code.
var webdriver = require('selenium-webdriver');
var By = webdriver.By;
var until = webdriver.until;
var chrome = require('selenium-webdriver/chrome');
// Do it using what Selenium inherently provides.
var browser = new chrome.Driver();
browser.get("http://www.google.com");
browser.findElement(By.name("q")).click();
browser.actions().sendKeys("foo").perform();
browser.sleep(2000);
browser.actions().sendKeys("bar").perform();
browser.sleep(2000);
// Do it with an extended ActionSequence.
webdriver.ActionSequence.prototype.sleep = function (delay) {
    var driver = this.driver_;
    // This just records the action in an array. this.schedule_ is part of
    // the "stock" code.
    this.schedule_("sleep", function () { driver.sleep(delay); });
    return this;
};
webdriver.ActionSequence.prototype.perform = function () {
    var actions = this.actions_.slice();
    var driver = this.driver_;
    return driver.controlFlow().execute(function () {
        actions.forEach(function (action) {
            var command = action.command;
            // This is a new test to distinguish functions, which
            // require handling one way, from the usual commands, which
            // require different handling.
            if (typeof command === "function")
                // This puts the command in its proper place within
                // the control flow that was created above
                // (driver.controlFlow()).
                driver.flow_.execute(command);
            else
                driver.schedule(command, action.description);
        });
    }, 'ActionSequence.perform');
};
browser.get("http://www.google.com");
browser.findElement(By.name("q")).click();
browser.actions().sendKeys("foo")
.sleep(2000)
.sendKeys("bar")
.sleep(2000)
.perform();
browser.quit();
In my implementation of .perform() I've replaced the goog... functions that Selenium's code uses with stock JavaScript.
Here is what I did (based on @Louis's excellent answer).
Put the following into onPrepare() in the protractor config:
// extending action sequences
protractor.ActionSequence.prototype.sleep = function (delay) {
    var driver = this.driver_;
    this.schedule_("sleep", function () { driver.sleep(delay); });
    return this;
};
protractor.ActionSequence.prototype.perform = function () {
    var actions = this.actions_.slice();
    var driver = this.driver_;
    return driver.controlFlow().execute(function () {
        actions.forEach(function (action) {
            var command = action.command;
            if (typeof command === "function")
                driver.flow_.execute(command);
            else
                driver.schedule(command, action.description);
        });
    }, 'ActionSequence.perform');
};
protractor.ActionSequence.prototype.clickAndHold = function (elm) {
    return this.mouseDown(elm).sleep(3000).mouseUp(elm);
};
Now you'll have sleep() and clickAndHold() browser actions available. Example usage:
browser.actions().clickAndHold(element).perform();
I think it is possible to extend the browser.actions() function, but that is currently above my skill level, so I'll lay out the route I would take to solve this issue. I would recommend setting up a "HelperFunctions.js" page object that contains all of these global helper functions. In that file you can list your browser functions and reference them in multiple tests, with all of the code in one location.
This is the code for the "HelperFunctions.js" file that I would recommend setting up:
var HelperFunctions = function() {
    this.longClick = function(targetElement) {
        browser.actions().mouseDown(targetElement).perform();
        browser.sleep(5000);
        browser.actions().mouseUp(targetElement).perform();
    };
};
module.exports = new HelperFunctions();
Then in your Test you can reference the Helper file like this:
var HelperFunctions = require('../File_Path_To/HelperFunctions.js');
describe('Example Test', function() {
    beforeEach(function() {
        this.helperFunctions = HelperFunctions;
        browser.get('http://www.example.com/');
    });
    it('Should test something.', function() {
        var Element = element(by.className('targetedClassName'));
        this.helperFunctions.longClick(Element);
    });
});
In my test suite I have a few helper files set up, and they are referenced throughout all of my tests.
I have very little knowledge of selenium or protractor, but I'll give it a shot.
This assumes that
browser.actions().mouseDown(element).mouseUp(element).perform();
is valid syntax for your issue; if so, then something like this might do the trick:
browser.actions().sleep = function() {
    browser.sleep.apply(this, arguments);
    return browser.actions();
};

javascript mediator vs observer

First I want to say that I have googled "javascript mediator vs observer" and read almost ten links.
I also searched Stack Overflow and found Mediator Vs Observer Object-Oriented Design Patterns and
mediator-vs-observer.
However, I still don't have a clear understanding of the difference between them.
So I wonder if someone can explain it more clearly?
Maybe a live example. :)
Thanks.
I tried to create an example; is this the mediator pattern?
code:
var EventMediator = {
    publish: function (target, message) {
        var args = Array.prototype.slice.call(arguments, 2);
        var msgs = target.messages || [];
        for (var i = 0; i < msgs.length; i++) {
            var msg = msgs[i];
            // only fire callbacks registered for this particular message
            if (msg.message === message) {
                msg.callback.apply(msg.context, args);
            }
        }
    },
    register: function (target, message, fn) {
        target.messages = target.messages || [];
        target.messages.push({
            message: message,
            context: target,
            callback: fn
        });
    }
};
var t1 = {name: 'kk'};
var t2 = {name: 'gg'};
EventMediator.register(t1, "nameChanged", function () {
    console.info("t1 name changed");
});
EventMediator.publish(t1, "nameChanged");
Here I want to know: should the mediator know about the existence of the object that triggers the message?
Observer pattern: the observed object manages its own list of observers (aka listeners) which must be notified when a certain event happens.
Mediator pattern: the observed object is not aware of the list of its observers, there is an external entity that makes the mapping between observed objects and observers.
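To make the contrast concrete, here is a minimal sketch of both in plain JavaScript (all names are purely illustrative):
// Observer: the subject keeps its own list of listeners.
function Subject() { this.listeners = []; }
Subject.prototype.subscribe = function (fn) { this.listeners.push(fn); };
Subject.prototype.notify = function (data) {
    this.listeners.forEach(function (fn) { fn(data); });
};

var user = new Subject();
user.subscribe(function (name) { console.log('observer saw:', name); });
user.notify('kk');

// Mediator: neither side knows about the other; a central hub does the mapping.
var mediator = {
    channels: {},
    subscribe: function (channel, fn) {
        (this.channels[channel] = this.channels[channel] || []).push(fn);
    },
    publish: function (channel, data) {
        (this.channels[channel] || []).forEach(function (fn) { fn(data); });
    }
};

mediator.subscribe('nameChanged', function (name) { console.log('mediator routed:', name); });
mediator.publish('nameChanged', 'gg');
In your EventMediator example there is a central hub, but register and publish still take the target object itself, so publisher and subscriber are not really hidden from each other; a mediator would normally key handlers on a message/channel name only.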

Javascript closures and memory leak risks

Recently I was looking for memory leaks in my JavaScript code. After finding some major leaks I started to look for minor ones and found something that could be a potential leak - the "hoverIntent.js" plugin. I would like to ask if this is really a leak, or am I being a bit too overzealous?
General schema of the code (full code here http://cherne.net/brian/resources/jquery.hoverIntent.js):
(function($) {
    $.fn.hoverIntent = function(f, g) {
        //...
        var track = function(ev) {
            cX = ev.pageX;
            cY = ev.pageY;
        };
        var compare = function(ev, ob) {
            //... function body
        };
        var delay = function(ev, ob) {
            //... function body
        };
        var handleHover = function(e) {
            //... function body
        };
        return this.bind('mouseenter', handleHover).bind('mouseleave', handleHover);
    };
})(jQuery);
I know that many JS plugins are written that way, but... If I understand correctly, every time I invoke hoverIntent on my object, 3 new functions (closures) are created? Isn't that a possible memory leak (or at least a performance issue)?
Wouldn't it be better to write it this way:
(function($) {
    // create the functions only once, on module init
    var track = function(ev) {
        cX = ev.pageX;
        cY = ev.pageY;
    };
    var compare = function(ev, ob) {
        //... function body
    };
    var delay = function(ev, ob) {
        //... function body
    };
    var handleHover = function(e) {
        //... function body
    };
    $.fn.hoverIntent = function(f, g) {
        // no closures here
        return this.bind('mouseenter', handleHover).bind('mouseleave', handleHover);
    };
})(jQuery);
You are correct: your second example would use less memory because fewer closures are created. But as soon as the event handlers are no longer callable (element removed, handlers unbound, etc.) the closures can be garbage-collected again, so it is not a "leak" - the memory isn't lost forever.
Also, many plugins use the closure deliberately, keeping the current state for an element in closed-over variables instead of storing it on the element itself.
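A rough sketch of that per-element-state idiom (a hypothetical plugin, not hoverIntent itself):
(function ($) {
    $.fn.highlightOnHover = function () {
        return this.each(function () {
            // per-element state lives in this closure rather than on the DOM node;
            // it becomes collectable once the handlers are unbound or the element is removed
            var originalColor = $(this).css('background-color');
            $(this).on('mouseenter', function () { $(this).css('background-color', 'yellow'); })
                   .on('mouseleave', function () { $(this).css('background-color', originalColor); });
        });
    };
})(jQuery);
That is the trade-off such plugins make: a handful of small closures per element in exchange for not touching the element itself or a global registry.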

How should I implement OOP patterns to interactive web applications (with the aide of jQuery)?

Sometimes using jQuery induces you to abuse its power (at least for me, because of its selector-matching capability). Event handlers here and there. Utility functions here and everywhere. Code coherence can almost seem nonexistent. I want to alleviate that problem by applying OOP patterns, but since I have a C++ and Python background, implementing them in JavaScript is weirding me out a little bit.
The code below uses OOP patterns, but I'm not entirely sure my implementations are good practice. The reason I'm doubting them is the 3rd comment on my last Stack Overflow question. I know it's only one detail of my code that was commented on, but it also makes me wonder about the other patterns I'm using.
I would really appreciate it if you could point out the flaws and pitfalls in my patterns and/or if you have any suggestions. Many thanks in advance.
(this code is a simplification of something I'm developing, but the idea is similar)
Live Example
$(function() {
    var stream = new Stream();
});
/* Stream Class
------------------------------------------*/
function Stream() {
    // Disables multiple Stream objects
    if (this.singleton)
        return;
    else
        this.__proto__.singleton = true;

    this.elements = jQueryMapping(this.selectors); // Converts a map of selectors to a map of jQuery objects
    this.initEvents();
}
Stream.prototype.singleton = false;
Stream.prototype.selectors = {
    stream: '#stream',
    post_content: '#post-content',
    add_post: '#add-post',
    // ... more action selectors
};
Stream.prototype.initEvents = function() {
    this.elements.add_post.click(this, this.addPost);
    // ... more action event-listeners
};
Stream.prototype.addPost = function(e) {
    var self = e.data;
    var post_content = self.elements.post_content.val();
    if (post_content)
        self.elements.stream.append(new Post(post_content));
};
/* Post Class
------------------------------------------*/
function Post(post_content) {
    this.$element = $('<li>')
        .text(post_content)
        .append('<button class="delete-post">Delete</button>');
    this.elements = jQueryMapping(this.selectors, this.$element);
    this.initEvents();
    return this.$element;
}
Post.prototype.selectors = {
    delete_post: 'button.delete-post',
    // ... more action selectors
};
Post.prototype.initEvents = function() {
    this.elements.delete_post.click(this.deletePost);
    // ... more action event-listeners
};
Post.prototype.deletePost = function() {
    $(this).parent().slideUp();
};
/* Utils
------------------------------------------*/
function jQueryMapping(map, context) {
    // Converts a map of selectors to a map of jQuery objects
    var $map = {};
    $.each(map, function(key, value) {
        $map[key] = (context) ? $(value, context) : $(value);
    });
    return $map;
}
I believe your code is over-engineered. I've refactored and simplified it, as can be seen here. If you really want a heavy OOP setup I recommend you use a client-side MVC framework (Backbone, Knockout, etc.) to do it properly, or keep it light instead.
I'll proceed with general feedback on your code.
/* Stream Class
------------------------------------------*/
function Stream() {
    // Disables multiple Stream objects
    if (this.singleton)
        return;
    else
        this.__proto__.singleton = true;

    this.elements = jQueryMapping(this.selectors); // Converts a map of selectors to a map of jQuery objects
    this.initEvents();
}
There is no reason to use a singleton like this. It is also very bad practice to use .__proto__.
I would recommend pattern like this instead.
var Stream = (function() {
    var Stream = function() { ... };
    // prototype stuff
    var stream = new Stream();
    return function() {
        return stream;
    };
})();
Storing a hash of data like that on the prototype is unnecessary.
Stream.prototype.selectors = {
    stream: '#stream',
    post_content: '#post-content',
    add_post: '#add-post',
    // ... more action selectors
};
You can include this as a defaults hash instead.
(function() {
    var defaults = {
        stream: '#stream',
        post_content: '#post-content',
        add_post: '#add-post',
        // ... more action selectors
    };
    function Stream() {
        ...
        this.elements = jQueryMapping(defaults);
    }
}());
Your utility function could be optimised slightly.
$map[key] = (context) ? $(value, context) : $(value);
This could be rewritten as
$map[key] = $(value, context);
Since if context is undefined you just pass in an undefined parameter, which is the same as passing in no parameter.
The title of this reads "for beginners", but I've found this section on design patterns, and this section on design patterns using jQuery useful.
