Making a custom group of defined chaining methods in js - javascript

The question is related to general js programming, but I'll use nightwatch.js as an example to elaborate my query.
Nightwatch.js provides various chaining methods for its browser components, like:
browser
.setValue('input[name="email"]', 'example@mail.com')
.setValue('input[name="password"]', '123456')
.click('#submitButton')
But if I'm writing a method to select an option from a dropdown, it requires multiple steps, and if there are multiple dropdowns in a form, it gets really confusing, like:
browser
.click(`#country`)
.waitForElementVisible(`#india`)
.click(`#india`)
.click(`#state`)
.waitForElementVisible(`#delhi`)
.click(`#delhi`)
Is it possible to create a custom chaining method to group these already defined methods? For example something like:
/* custom method */
const dropdownSelector = function (id, value) {
  return this
    .click(`#${id}`)
    .waitForElementVisible(`#${value}`)
    .click(`#${value}`);
};
/* So it can be used as a chaining method */
browser
.dropdownSelector('country', 'india')
.dropdownSelector('state', 'delhi')
Or is there any other way I can solve my problem of increasing reusability and readability of my code?

I'm somewhat new to JS so I couldn't give you an ideal code solution, and I'll admit I don't know what a proxy is in this context. But in the world of Nightwatch and test automation I'd normally wrap multiple steps I plan on reusing into a page object: create a new file in a pageObjects folder and fill it with the method you want to reuse.
So your test...
browser
.click(`#country`)
.waitForElementVisible(`#india`)
.click(`#india`)
.click(`#state`)
.waitForElementVisible(`#delhi`)
.click(`#delhi`)
becomes a page object method in another file called 'myObject' like...
module.exports = {
  selectLocation(browser, country, state, city) {
    browser
      .click(`#country`) // <== assume this never changes?
      .waitForElementVisible(country)
      .click(country)
      .click(state)
      .waitForElementVisible(city)
      .click(city);
  }
};
and then each of your tests imports the method and defines those values itself, however you choose to manage that...
const myObject = require('<path to the new pageObject file>');
module.exports = {
  'someTest': function (browser) {
    const country = 'something';
    const state = 'something';
    const city = 'something';
    myObject.selectLocation(browser, country, state, city);
  }
};
You can also set your country / state / city as variables in a globals file and keep them the same for everything, but I don't know how granular you want to be.
Hope that made some sense :)
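If you'd rather keep the chaining syntax from your question, Nightwatch also supports custom commands; here is a rough sketch (it assumes a custom_commands_path entry in your Nightwatch config, so check the docs for your version):
// dropdownSelector.js, placed in the folder referenced by custom_commands_path;
// the file name becomes the command name
exports.command = function (id, value) {
  this
    .click(`#${id}`)
    .waitForElementVisible(`#${value}`)
    .click(`#${value}`);
  return this; // returning the api object keeps the chain going
};
// usage in a test:
// browser.dropdownSelector('country', 'india').dropdownSelector('state', 'delhi')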

This is a great place to use a Proxy. Given some class:
function Apple ()
{
this.eat = function ()
{
console.log("I was eaten!");
return this;
}
this.nomnom = function ()
{
console.log("Nom nom!");
return this;
}
}
And a set of "extension methods":
const appleExtensions =
{
eatAndNomnom ()
{
this.eat().nomnom();
return this;
}
}
We can create a function which returns a Proxy to select which properties are retrieved from the extension object and which are retrieved from the originating object:
function makeExtendedTarget(target, extensions)
{
return new Proxy(target,
{
get (obj, prop)
{
if (prop in extensions)
{
return extensions[prop];
}
return obj[prop];
}
});
}
And we can use it like so:
let apple = makeExtendedTarget(new Apple(), appleExtensions);
apple
.eatAndNomnom()
.eat();
// => "I was eaten!"
// "Nom nom!"
// "I was eaten!"
Of course, this requires you to call makeExtendedTarget whenever you want to create a new Apple. However, I would consider this a plus, as it makes it abundantly clear that you are creating an extended object and should expect to be able to call methods not normally available on the class API.
Of course, whether or not you should be doing this is an entirely different discussion!
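Applied to the Nightwatch example from the original question, the same pattern might look roughly like this (a sketch; the selectors come from the question, and everything else just reuses makeExtendedTarget from above):
const browserExtensions = {
  dropdownSelector (id, value) {
    this
      .click(`#${id}`)
      .waitForElementVisible(`#${value}`)
      .click(`#${value}`);
    return this; // return the proxy so later chained calls still see the extensions
  }
};
const extendedBrowser = makeExtendedTarget(browser, browserExtensions);
extendedBrowser
  .dropdownSelector('country', 'india')
  .dropdownSelector('state', 'delhi')
  .click('#submitButton');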

Related

Stuck converting ngResource angular service to Vanilla JS

We are migrating our site from old angularjs to using Vuejs.
Step one is to modify our services used throughout the site which all rely heavily on ngResource and convert them into vanilla js code that can be called by Vue.
The challenge is that, in addition to making API calls using ngResource, they also extend the returned object via its prototype.
Using the module pattern in regular JavaScript I can mimic the API behaviour of the ngResource service, but I am not clear how to set this up so that it also supports the prototype extensions that are applied to the returned object (whether a single object or an array).
For example one of our current services might look like this
"use strict";
angular.module("myApp")
.factory("PortfolioService",
[
"$resource", "$rootScope", "$http",
function($resource,
$rootScope,
$http) {
var Portfolio = $resource("services/portfolios/:Uid",
{
'_': function() { return Date.now() }
}, {
'query': {
method: "GET",
url: "services/portfolios/",
transformResponse: $http.defaults.transformResponse.concat([
function (data) { return data.Data; }
])
}
});
Portfolio.prototype.getPicUrl= function() {
return this.ImgBasePath + this.ImgUrl;
};
return Portfolio;
}
]);
Note that it makes a service call named query but also extends the returned object with a new function called getPicUrl.
I have created a JS equivalent that looks like this
const vPortfolioService = (() => {
var baseapipath = "http://localhost:8080/services/";
var Portfolio = {
query: function() {
return axios.get(baseapipath + "portfolios/");
}
};
Portfolio.prototype.getPicUrl= function () {
return this.ImgBasePath + this.ImgUrl;
}
return Portfolio;
})();
The service part works fine, but I don't know how to do what ngResource seems to do, which is to return a resource from the API that includes the prototype extensions.
Any advice would be appreciated.
Thanks
As I mentioned in my replies to @Igor Moraru, depending on how much of your code base you're replacing, and how much of that existing code base made use of the full capabilities of ngResource, this is not a trivial thing to do. But just focusing on the specific example in your question, you need to understand some vanilla JS a bit more first.
Why does the Portfolio object have a prototype property when it's returned from $resource(), but not when it's created via your object literal? Easy: the value returned by $resource() is a function, which means it can be used as a constructor (effectively a class), and functions have a prototype property automatically.
In JavaScript, regular functions and classes are the same thing. The only difference is intent. In this case, the function returned by $resource() is intended to be used as a class, and it's easy to replicate certain aspects of that class such as the static query method and the non-static (i.e., on the prototype) getPicUrl method:
const vPortfolioService = (() => {
var baseapipath = "http://localhost:8080/services/";
class Portfolio {
constructor(params) {
Object.assign(this, params);
}
static query() {
return axios.get(baseapipath + "portfolios/").then(res => {
// this converts the response objects into an array of Portfolio instances
// you probably want to check if the response is valid before doing this...
return res.data.map(e => new this(e));
});
}
getPicUrl() {
return this.ImgBasePath + this.ImgUrl;
}
}
return Portfolio;
})();
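Consumed from your Vue code, the refactored service might then be used like this (a sketch, assuming the endpoint returns an array of plain objects carrying ImgBasePath and ImgUrl):
vPortfolioService.query().then(portfolios => {
  // each element is a Portfolio instance, so the prototype method is available
  console.log(portfolios[0].getPicUrl());
});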
But the problem is, this probably isn't enough. If you're migrating/refactoring an entire application, then you have to be certain of every instance in which your application uses ngResource, and based on your question, I'm fairly certain you've used it more than this class would allow.
For example, every class created by $resource also has static methods such as get, save, and query, as well as corresponding instance methods such as $save and $delete. In addition, the constructor I've provided for the class is just a very lazy stop-gap to allow you to create an instance with arbitrary properties, such as the properties referenced by the getPicUrl method.
So I think you have three options:
1. Continue playing with the above class to fit something closer to what you need, and then edit every place in your application where your code relies on the Portfolio service so that it now reflects this new, more limited class. This is probably your best option, especially if your application isn't that big and you don't have to worry about someone else's code not working.
2. Analyze the source code for ngResource, rip it out, and modify it so it doesn't need AngularJS to work. Perhaps someone has already done this and made it available as a library? Kind of a long shot I'd guess, but it may work.
3. Keep AngularJS in your application, alongside Vue, but only use it to grab the essentials like $http and $resource. An example implementation is below, but this is probably the worst option. There's additional overhead in bootstrapping pieces of Angular, and tbh it probably needs to bootstrap other pieces I haven't thought of... but it's neat I guess lol:
const vPortfolioService = (() => {
var inj = angular.injector(["ng", "ngResource"]);
var $http = inj.get("$http"), $resource = inj.get("$resource");
var Portfolio = $resource("services/portfolios/:Uid",
{
'_': function () { return Date.now() }
}, {
'query': {
method: "GET",
url: "services/portfolios/",
transformResponse: $http.defaults.transformResponse.concat([
function (data) { return data.Data; }
])
}
});
Portfolio.prototype.getPicUrl = function () {
return this.ImgBasePath + this.ImgUrl;
};
return Portfolio;
})();
Object instances do not expose a prototype property. Instead you can access the prototype with:
object.__proto__ // works, but not recommended; better:
Object.getPrototypeOf(object)
Object.getPrototypeOf() returns the object's prototype object, which you can use to assign new properties.
Object.getPrototypeOf(Portfolio).getPicUrl= function () {
return this.ImgBasePath + this.ImgUrl;
}
Note: you can still access the prototype of a function directly, e.g. Function.prototype.
UPDATE: Your Portfolio should be created with Object.create, so that its prototype is a fresh object rather than the global Object.prototype, to avoid the issue that @user3781737 has mentioned.
var Portfolio = Object.create({
query: function() {
return axios.get(baseapipath + "portfolios/");
}
});
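With Portfolio created this way, the earlier getPrototypeOf assignment lands on that fresh prototype object instead of on Object.prototype (a short sketch of the combination):
Object.getPrototypeOf(Portfolio).getPicUrl = function () {
  return this.ImgBasePath + this.ImgUrl;
};
// Portfolio.query() is still reachable through the prototype chain,
// and getPicUrl no longer leaks onto every plain object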

How to design a JS object that has private state and may be instantiated multiple times?

Just trying to wrap my head around prototype-based design
Problem: implement a data structure, say a priority queue, with a known API, and instantiate multiple instances of the PQ.
So I used the revealing module pattern as follows
module.exports = (function () {
// ... assume the following methods are revealed. Other private methods/fields are hidden
let priorityQueue = {
insert,
removeMax,
isEmpty,
toString
};
return {
priorityQueue,
newObj: (comparer, swapper) => {
let instance = Object.create(priorityQueue);
instance.array = [];
instance.size = 0;
instance.less = comparer;
instance.swap = swapper;
return instance;
}
}
})();
Created a newObj factory method to create valid instances. priorityQueue is the API/prototype.
So methods belong in the prototype.
Instance fields cannot reside there; they would be shared across instances.
However in this case, the internal fields of the PQ are not encapsulated.
const pQ = require('./priorityQueue').newObj(less, swap);
pQ.array = undefined; // NOOOOOOO!!!!
Update: To clarify my question, the methods in the prototype object need to operate on the instance fields array & size. However these fields cannot be shared across instances. How would the methods in the prototype close over instance fields in the object?
Don't assign array (or whatever you want to encapsulate) to the new object.
module.exports = (function () {
// ... assume the following methods are revealed. Other private methods/fields are hidden
let priorityQueue = {
insert,
removeMax,
isEmpty,
toString
};
return {
priorityQueue,
newObj: function(comparer, swapper){
let array = [];
let instance = Object.create(priorityQueue);
instance.size = 0;
instance.less = comparer;
instance.swap = swapper;
return instance;
}
}
})();
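For the methods that use array to actually see it, they have to be created inside newObj as well, so they close over it; a rough sketch of that variant (the insert/isEmpty bodies are illustrative, not from the original):
newObj: (comparer, swapper) => {
  let array = []; // private: only the closures below can reach it
  let instance = Object.create(priorityQueue);
  instance.size = 0;
  instance.less = comparer;
  instance.swap = swapper;
  // per-instance methods shadow the prototype ones and close over array
  instance.insert = item => { array.push(item); instance.size++; };
  instance.isEmpty = () => instance.size === 0;
  return instance;
}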
The reason class syntax was added directly to JS was precisely to remove the need to ask this question. If you really want to go that deep, you should just read the book I mention below my answer.
To give you an example of intentionally using closures to provide private data, I'm going to create a little code example just for this occasion.
Keep in mind it's just an example of a concept and it's not feature-complete at all. I encourage you to see it only as an example. You still have to manage instances because the garbage collector will not clean them up.
// this will be the "class"
const Thing = (function(){
// everything here will be module scope.
// only Thing itself and it's instances can access data in here.
const instances = [];
// private is a reserved word btw.
const priv = [];
// let's create some prototype stuffz for Thing.
const proto = {};
// this function will access something from the module scope.
// does not matter if it's a function or a lambda.
proto.instanceCount = _=> instances.length;
// you need to use functions if you want proper "this" references to the instance of something.
proto.foo = function foo() {return priv[instances.indexOf(this)].bar};
const Thing = function Thing(arg) {
// totally will cause a memory leak
// unless you clean up the contents through a destructor.
// since "priv" and "instances" are not accessible from the outside
// the following is similar to actual private scoping
instances.push(this);
priv.push({
bar: arg
});
};
// let's assign the prototype:
Thing.prototype = proto;
// now let us return the constructor.
return Thing;
})();
// now let us use this thing..
const x = new Thing('bla');
const y = new Thing('nom');
console.log(x.foo());
console.log(x.instanceCount());
console.log(y.foo());
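As a side note on the garbage-collection caveat above: a WeakMap keyed by the instance gives the same kind of privacy without keeping instances alive (a sketch, not part of the original example):
const Thing2 = (function () {
  const priv = new WeakMap(); // entries vanish once instances are collected
  function Thing2(arg) {
    priv.set(this, { bar: arg });
  }
  Thing2.prototype.foo = function () {
    return priv.get(this).bar;
  };
  return Thing2;
})();
console.log(new Thing2('bla').foo()); // 'bla'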
There is a great book called "Pro JavaScript Design Patterns" by Dustin Diaz and Ross Harmes; it's freely available these days: https://github.com/Apress/pro-javascript-design-patterns
It explains in depth the design patterns that aimed to solve exactly this problem long before we got classes etc. in JavaScript.
But honestly, if you want to go further and add something like "extend" or calling functions of the super class, seriously, just use classes in JS.
Yes, it's all possible in plain vanilla JS, but you don't want to go through all the hassle of writing glue code.

Calling object functions with variables

I'm building a simple node.js websocket server and I want to be able to send a request from a client to the server and have it just take care of things (nothing that could cause harm). Ideally the client will pass the server an object with 2 variables, one of them for the object and the other for the specific function in that object to call. Something like this:
var callObject = {
'obj': 'testObject',
'func':'testFunc'
}
var testObject = {
func: function(){
alert('it worked');
}
}
// I would expect to be able to call it with something like:
console.log( window[callObject.obj] );
console.log( window[callObject.obj][callObject.func] );
I tried calling it with global (since Node.js uses that instead of a browser's window) but it won't work; it always tells me that it can't read callObject.func of undefined. If I call console.log on callObject.obj it shows the object's name as a string, as expected. If I run console.log on the object itself I get the object back.
I'm guessing this is something rather simple, but my Google-fu has failed me.
My recommendation is to resist that pattern and not have client code pick any function to call. If you are not careful you have built yourself a nice large security hole. Especially if you are considering using eval.
Instead have a more explicit mapping between data sent by the client and server code (similar to what routes in Express give you).
You might have something like this
const commands = { doSomething() { ... } };
// Then you should be able to say:
let clientCommand = 'doSomething'; // from client
commands[clientCommand](param);
This should be pretty close to what you want to achieve.
Just make sure doSomething validates any parameters passed in.
For two levels of indirection:
const commandMap = { room: { join() { ...} }, chat: { add() { ... } }};
// note this is ES6 syntax
let clientCmd = 'room';
let clientFn = 'join';
commandMap[clientCmd][clientFn]();
I think you might just have to find the right place to put the command map. Show your web socket handler code.
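To keep that lookup from turning into the security hole mentioned at the top, it's worth whitelisting explicitly before dispatching; a small sketch (the dispatch helper is illustrative, not an existing API):
function dispatch(map, cmd, fn, ...args) {
  const has = (obj, key) => Object.prototype.hasOwnProperty.call(obj, key);
  if (!has(map, cmd) || !has(map[cmd], fn) || typeof map[cmd][fn] !== 'function') {
    throw new Error(`Unknown command: ${cmd}.${fn}`);
  }
  return map[cmd][fn](...args);
}
dispatch(commandMap, 'room', 'join');        // ok
// dispatch(commandMap, 'constructor', 'x'); // rejected, not an own property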

Is there a pattern in JavaScript for loosely coupled objects?

I'm relatively new to JavaScript so apologies if this type of question is an obvious one.
We have an app which uses etcd as its way to store data. What I'm trying to do is implement a way of swapping or alternating between different backend data stores (I'm wanting to use dynamodb).
I come from a C# background, so if I were to implement this behaviour in an ASP.NET app I would use interfaces and dependency injection.
The best solution I can think of is to have a factory which returns a data store object based upon some configuration setting. I know that TypeScript has interfaces but would prefer to stick to vanilla js if possible.
Any help would be appreciated. Thanks.
Interfaces are "merely" a static typing measure to implement polymorphism. Since Javascript doesn't have any static type system, it also doesn't have interfaces. But, it's a highly polymorphic language in itself. So what you want to do is trivial; simply don't write any interfaces as part of the process:
function StorageBackend1() { }
StorageBackend1.prototype.store = function (data) {
// here be dragons
};
function StorageBackend2() { }
StorageBackend2.prototype.store = function (data) {
// here be other dragons
};
function SomeModel(storage) {
this.storage = storage;
this.data = {};
}
SomeModel.prototype.saveData = function () {
this.storage.store(this.data);
};
var m1 = new SomeModel(new StorageBackend1),
m2 = new SomeModel(new StorageBackend2);
m1.saveData();
m2.saveData();
Using TypeScript and actual interfaces gives you the sanity of a statically type checked language with fewer possible surprises at runtime, but you don't need it for polymorphism.
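The factory the question mentions sits naturally on top of this; a minimal sketch, assuming a config value such as 'etcd' or 'dynamo' (the backend classes are the ones above, the config shape is made up):
function createStorageBackend(config) {
  switch (config.store) {
    case 'etcd':   return new StorageBackend1();
    case 'dynamo': return new StorageBackend2();
    default: throw new Error('Unknown data store: ' + config.store);
  }
}
var model = new SomeModel(createStorageBackend({ store: 'etcd' }));
model.saveData();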
I come from Delphi / C# etc., and interfaces are just a pain in the butt.
JavaScript is so much nicer.
With JavaScript, interfaces are not needed; just add the method.
e.g.
function MyBackend1() {
this.ver = 'myBackEnd1';
}
function MyBackend2() {
this.ver = 'myBackEnd2';
this.somefunc = function () { console.log('something'); }
}
function run(backend) {
console.log(backend.ver);
// below is the kind of check an interface would give you
if (backend.somefunc) backend.somefunc();
}
run(new MyBackend2());
//lets now use backend1
run(new MyBackend1());

JavaScript: Is the nesting of constructor instances inside a constructed 'wrapper' problematic?

Hopefully this question won't be flagged as too subjective, but I'm newish to OOP and struggling a bit when it comes to sharing data between parts of my code that I think should be separated to some extent.
I'm building a (non-geo) map thing (using leaflet.js, which is superduper) which has a map (duh) and a sidebar that basically contains a UI (toggling markers both individually and en masse, searching said marker toggles, as well as other standard UI behaviour). I'm slightly confused about organisation too (how modular is too modular?), but I can stumble through that myself I guess. I am using a simple JSON file for my settings for the time being.
I started with static methods stored in objects, which is essentially unusable, or rather un-reusable, so I went for nested constructors (kinda) so I could pass the parent scope around for easier access to my settings and states properties:
function MainThing(settings) {
this.settings = settings;
this.states = {};
}
function SubthingMaker(parent) {
this.parent = parent;
}
SubthingMaker.prototype.method = function() {
var data = this.parent.settings.optionOne;
console.log(data);
this.parent.states.isVisible = true;
};
MainThing.prototype.init = function() {
this.subthing = new SubthingMaker(this);
// and some other fun stuff
};
And then I could just create an instance of MainThing and run MainThing.init() and it should all work lovely. Like so:
var options = {
"optionOne": "Hello",
"optionTwo": "Goodbye"
}
var test = new MainThing(options);
test.init();
test.subthing.method();
Should I really be nesting in this manner or will it cause me problems in some way? If this is indeed okay, should I keep going deeper if needed (maybe the search part of my ui wants its own section, maybe the map controls should be separate from DOM manipulation, I dunno) or should I stay at this depth? Should I just have separate constructors and store them in an object when I create an instance of them? Will that make it difficult to share/reference data stored elsewhere?
As regards my data storage, is this an okay way to handle it, or should I be creating a controller for my data and sending requests and submissions to it when necessary, even if that data is then tucked away in simple JSON format? this.parent really starts to get annoying after a while; I suppose I should be binding if I want to change my scope, but it just doesn't seem an elegant way to access the overall state of the application, especially since the UI needs to check the state for almost everything it does.
Hope you can help and I hope I don't come across as a complete idiot, thanks!
P.S. I think the code I posted works but if it doesn't, it's the general idea I was hoping to capture, not this specific example. I created a much simpler version of my actual code because I don't want to incur the wrath of the SO gods with my first post. (Yes, I did just use a postscript.)
An object may contain as many other objects as are appropriate for doing its job. For example, an object may contain an Array as part of its instance data. Or, it may contain some other custom object. This is normal and common.
You can create/initialize these other objects that are part of your instance data in either your constructor or in some other method such as a .init() method whichever is more appropriate for your usage and design.
For example, you might have a Queue object:
function Queue() {
this.q = [];
}
Queue.prototype.add = function(item) {
this.q.push(item);
return this;
}
Queue.prototype.next = function() {
return this.q.shift();
}
var q = new Queue();
q.add(1);
q.add(2);
console.log(q.next()); // 1
This creates an Array object as part of its constructor and then uses that Array object in the performance of its function. There is no difference here whether this creates a built-in Array object or it calls new on some custom constructor. It's just another Javascript object that is being used by the host object to perform its function. This is normal and common.
One note is that what you are doing with your MainThing and SubthingMaker violates OOP principles, because they are too tightly coupled and have too wide access to each other's internals:
SubthingMaker.prototype.method = function() {
// it reads something from parent's settings
var data = this.parent.settings.optionOne;
console.log(data);
// it changes parent state directly
this.parent.states.isVisible = true;
};
A better idea would be to make them less dependent on each other.
It is probably OK for the MainThing to have several "subthings" as your main thing looks like a top-level object which will coordinate smaller things.
But it would be better to isolate these smaller things; ideally they should work even if there is no MainThing, or if you have some different main thing:
function SubthingMaker(options) {
// no 'parent' here, it just receives own options
this.options = options;
}
SubthingMaker.prototype.method = function() {
  // use own options, instead of reading them through the MainThing
  var data = this.options.optionOne;
  console.log(data);
  // return the data from the method instead of
  // directly modifying something in MainThing
  // (i.e. no more this.parent.states.isVisible = true; here)
  return true;
};
MainThing.prototype.doSomething = function() {
  // MainThing calls the subthing and modifies its own data
  this.states.isVisible = this.subthing.method();
  // and some other fun stuff
};
Also, to avoid confusion, it is better not to use parent / child terms in this case. What you have here is aggregation or composition of objects, while parent / child is usually used to describe inheritance.
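Wiring the decoupled version together might then look like this (a sketch built from the snippets above; init is assumed to pass only the options the subthing needs):
MainThing.prototype.init = function () {
  // the subthing receives plain options rather than the whole parent
  this.subthing = new SubthingMaker(this.settings);
};
var test = new MainThing(options);
test.init();
test.doSomething();
console.log(test.states.isVisible); // true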
