Should I pass an object property into an object method? - javascript

I'm trying to learn object-oriented JavaScript. Working with a simple method, I want to do this:
var users = function(url){
    this.url = url;
    this.log = function(){
        console.log(this.url);
    }
}
var apiPoint = "https://www.zenconomy.se/api/admin/tracking?format=json"
var liveUsers = new users(apiPoint)
liveUsers.log()
However, I've learned that when working with normal functions it's often a good idea to pass variables in as arguments. With objects, however, this seems a bit clunky:
var users = function(url){
    this.url = url;
    this.log = function(url){
        console.log(url);
    }
}
var apiPoint = "here is my url"
var liveUsers = new users(apiPoint)
liveUsers.log(liveUsers.url)
Both methods work. What are the pros and cons of the different approaches, assuming that users.log only ever needs properties from inside the users class?

You mentioned you are trying to learn OOP in JavaScript, but consider the log function in your users object: if there is no users instance, there is no log method either. That's not the same concept as OO in C++ or C#. In my opinion, the prototype best describes OOP here, so do the following:
var users = function(url){
    this.url = url;
}
users.prototype.log = function(){
    console.log(this.url);
}
This way, log does not live on any instance of users; it exists on the prototype, which every instance reaches through its internal __proto__ reference. That means when you create instances, they all share the same functions, just as in C++ or C#. Finally, you should never use the second sample in your post; that is not the OO way.
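To make the sharing concrete, here is a small usage sketch against the prototype version above (the second URL is made up purely for illustration):
var liveUsers = new users("https://www.zenconomy.se/api/admin/tracking?format=json");
var stagingUsers = new users("https://example.com/api?format=json"); // illustrative URL
console.log(liveUsers.log === stagingUsers.log); // true - one shared function on the prototype
liveUsers.log();    // logs the live URL
stagingUsers.log(); // logs the staging URL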

If you want log to always print the object's URL, then of course you would not pass in the object's URL as a parameter, since log can get it itself.
If you want to log various properties of the object, I'd suggest making separate routines such as logUrl and logBlah for the individual cases.
If you want log to print some arbitrary value, then it goes without saying that you need to pass in the value.
If there is nothing about logging that relates to the object, then you can just have a logging routine independent of the object that logs whatever you pass it.
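For that last case, a minimal sketch of an object-independent logging routine (the logValue name is just illustrative, and users/apiPoint are from the question above):
// a logging routine that knows nothing about users
function logValue(value){
    console.log(value);
}
var liveUsers = new users(apiPoint);
logValue(liveUsers.url); // pass in whatever should be logged
logValue("any other value works too");
liveUsers.log();         // the method still logs its own url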

Related

How to design a JS object that has private state and may be instantiated multiple times?

Just trying to wrap my head around prototype-based design
Problem: implement a data structure, say a priority queue, with a known API. Instantiate multiple instances of the PQ.
So I used the revealing module pattern as follows
module.exports = (function () {
    // ... assume the following methods are revealed. Other private methods/fields are hidden
    let priorityQueue = {
        insert,
        removeMax,
        isEmpty,
        toString
    };
    return {
        priorityQueue,
        newObj: (comparer, swapper) => {
            let instance = Object.create(priorityQueue);
            instance.array = [];
            instance.size = 0;
            instance.less = comparer;
            instance.swap = swapper;
            return instance;
        }
    }
})();
Created a newObj factory method to create valid instances. priorityQueue is the API/prototype.
So methods belong in the prototype.
Instance fields cannot reside there; they would be shared across instances.
However in this case, the internal fields of the PQ are not encapsulated.
const pQ = require('./priorityQueue').newObj(less, swap);
pQ.array = undefined; // NOOOOOOO!!!!
Update: To clarify my question, the methods in the prototype object need to operate on the instance fields array & size. However these fields cannot be shared across instances. How would the methods in the prototype close over instance fields in the object?
Don't assign array (or whatever else you want to encapsulate) to the new object.
module.exports = (function () {
    // ... assume the following methods are revealed. Other private methods/fields are hidden
    let priorityQueue = {
        insert,
        removeMax,
        isEmpty,
        toString
    };
    return {
        priorityQueue,
        newObj: function(comparer, swapper){
            let array = [];
            let instance = Object.create(priorityQueue);
            instance.size = 0;
            instance.less = comparer;
            instance.swap = swapper;
            return instance;
        }
    }
})();
The reason class syntax was added directly to JS was precisely to remove the need to ask this question. If you really want to go that deep, you should read the book I mention at the end of my answer.
To give you an example of intentionally using closures to provide private data, here is a little code example written just for this occasion.
Keep in mind it's just an example of a concept and not feature complete at all; treat it purely as an illustration. You still have to manage instances yourself, because the garbage collector will not clean them up.
// this will be the "class"
const Thing = (function(){
    // everything here will be module scope.
    // only Thing itself and its instances can access data in here.
    const instances = [];
    // "private" is a reserved word btw.
    const priv = [];
    // let's create some prototype stuffz for Thing.
    const proto = {};
    // this function will access something from the module scope.
    // does not matter if it's a function or a lambda.
    proto.instanceCount = _ => instances.length;
    // you need to use functions if you want proper "this" references to the instance of something.
    proto.foo = function foo() { return priv[instances.indexOf(this)].bar };
    const Thing = function Thing(arg) {
        // totally will cause a memory leak
        // unless you clean up the contents through a deconstructor.
        // since "priv" and "instances" are not accessible from the outside
        // the following is similar to actual private scoping
        instances.push(this);
        priv.push({
            bar: arg
        });
    };
    // let's assign the prototype:
    Thing.prototype = proto;
    // now let us return the constructor.
    return Thing;
})();
// now let us use this thing..
const x = new Thing('bla');
const y = new Thing('nom');
console.log(x.foo());
console.log(x.instanceCount());
console.log(y.foo());
There is a great book called "Pro JavaScript Design Patterns" by Dustin Diaz and Ross Harmes. It's freely available these days: https://github.com/Apress/pro-javascript-design-patterns
It explains in depth the design patterns that aimed to solve exactly this problem long before we got classes and the like in JavaScript.
But honestly, if you want to go further and add something like "extend" or calling functions of the super class, seriously: just use classes in JS.
Yes, it's all possible in plain vanilla JavaScript, but you don't want to go through all the hassle of writing the glue code.
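For reference, a minimal sketch of the same Thing example using class syntax with # private fields (ES2022+), where no closure bookkeeping or manual instance tracking is needed:
class Thing {
    #bar;                  // per-instance private field
    static #instances = 0; // private, shared across the class
    constructor(arg) {
        this.#bar = arg;
        Thing.#instances++;
    }
    foo() { return this.#bar; }
    instanceCount() { return Thing.#instances; }
}
const x = new Thing('bla');
const y = new Thing('nom');
console.log(x.foo());           // 'bla'
console.log(y.foo());           // 'nom'
console.log(x.instanceCount()); // 2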

JavaScript: Is the nesting of constructor instances inside a constructed 'wrapper' problematic?

Hopefully this question won't be flagged as too subjective, but I'm newish to OOP and struggling a bit when it comes to sharing data between parts of my code that I think should be separated to some extent.
I'm building a (non-geo) map thing (using leaflet.js, which is super-duper) which has a map (duh) and a sidebar that basically contains a UI (toggling markers both individually and en masse, searching said marker toggles, as well as other standard UI behaviour). I'm slightly confused about organisation too (how modular is too modular?), but I can stumble through that myself I guess. I am using a simple JSON file for my settings for the time being.
I started with static methods stored in objects, which is essentially unusable, or rather un-reusable, so I went for nested constructors (kinda) so I could pass the parent scope around for easier access to my settings and states properties:
function MainThing(settings) {
    this.settings = settings;
    this.states = {};
}
function SubthingMaker(parent) {
    this.parent = parent;
}
SubthingMaker.prototype.method = function() {
    var data = this.parent.settings.optionOne;
    console.log(data);
    this.parent.states.isVisible = true;
};
MainThing.prototype.init = function() {
    this.subthing = new SubthingMaker(this);
    // and some other fun stuff
};
And then I could just create an instance of MainThing and run its init() method, and it should all work lovely. Like so:
var options = {
    "optionOne": "Hello",
    "optionTwo": "Goodbye"
}
var test = new MainThing(options);
test.init();
test.subthing.method();
Should I really be nesting in this manner or will it cause me problems in some way? If this is indeed okay, should I keep going deeper if needed (maybe the search part of my ui wants its own section, maybe the map controls should be separate from DOM manipulation, I dunno) or should I stay at this depth? Should I just have separate constructors and store them in an object when I create an instance of them? Will that make it difficult to share/reference data stored elsewhere?
As regards my data storage, is this an okay way to handle it, or should I be creating a controller for my data and sending requests and submissions to it when necessary, even if that data is then tucked away in a simple JSON format? this.parent does really start to get annoying after a while; I suppose I should really be binding if I want to change my scope, but it just doesn't seem an elegant way to access the overall state data of the application, especially since the UI needs to check the state for almost everything it does.
Hope you can help and I hope I don't come across as a complete idiot, thanks!
P.S. I think the code I posted works but if it doesn't, its the general idea I was hoping to capture not this specific example. I created a much simpler version of my actual code because I don't want incur the wrath of the SO gods with my first post. (Yes, I did just use a postscript.)
An object may contain as many other objects as are appropriate for doing its job. For example, an object may contain an Array as part of its instance data. Or it may contain some other custom object. This is normal and common.
You can create/initialize these other objects that are part of your instance data in either your constructor or in some other method such as a .init() method whichever is more appropriate for your usage and design.
For example, you might have a Queue object:
function Queue() {
    this.q = [];
}
Queue.prototype.add = function(item) {
    this.q.push(item);
    return this;
}
Queue.prototype.next = function() {
    return this.q.shift();
}
var q = new Queue();
q.add(1);
q.add(2);
console.log(q.next()); // 1
This creates an Array object as part of its constructor and then uses that Array object in the performance of its function. There is no difference here whether this creates a built-in Array object or it calls new on some custom constructor. It's just another Javascript object that is being used by the host object to perform its function. This is normal and common.
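The same pattern works when the contained object comes from a custom constructor; here is a small sketch reusing the Queue above (the Printer name and its methods are just illustrative):
function Printer() {
    this.queue = new Queue(); // a custom object created in the constructor
}
Printer.prototype.submit = function(doc) {
    this.queue.add(doc);
    return this;
};
Printer.prototype.printNext = function() {
    console.log("printing", this.queue.next());
};
var p = new Printer();
p.submit("report.pdf").printNext(); // printing report.pdf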
One note is that what you are doing with your MainThing and SubthingMaker violates OOP principles, because they are too tightly coupled and have too wide access to each other's internals:
SubthingMaker.prototype.method = function() {
    // it reads something from parent's settings
    var data = this.parent.settings.optionOne;
    console.log(data);
    // it changes parent state directly
    this.parent.states.isVisible = true;
};
A better idea would be to make them less dependent on each other.
It is probably OK for the MainThing to have several "subthings", as your main thing looks like a top-level object which will coordinate smaller things.
But it would be better to isolate these smaller things; ideally they should work even if there is no MainThing, or if you have some different main thing:
function SubthingMaker(options) {
    // no 'parent' here, it just receives its own options
    this.options = options;
}
SubthingMaker.prototype.method = function() {
    // use own options, instead of reading them through the MainThing
    var data = this.options.optionOne;
    console.log(data);
    // return the data from the method instead of
    // directly modifying something in MainThing
    return true;
};
MainThing.prototype.doSomething = function() {
    // MainThing calls the subthing and modifies its own data
    this.states.isVisible = this.subthing.method();
    // and some other fun stuff
};
Also, to avoid confusion, it is better not to use the parent / child terms in this case. What you have here is aggregation or composition of objects, while parent / child is usually used to describe inheritance.
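To round this out, a rough sketch of how the wiring might then look from the MainThing side, using the decoupled SubthingMaker above (exactly which options get passed down is an assumption for illustration):
function MainThing(settings) {
    this.settings = settings;
    this.states = {};
}
MainThing.prototype.init = function() {
    // the subthing only receives the options it actually needs
    this.subthing = new SubthingMaker({ optionOne: this.settings.optionOne });
};
MainThing.prototype.doSomething = function() {
    // MainThing owns its own state; the subthing just reports a result
    this.states.isVisible = this.subthing.method();
};
var app = new MainThing({ "optionOne": "Hello", "optionTwo": "Goodbye" });
app.init();
app.doSomething();                 // logs "Hello"
console.log(app.states.isVisible); // true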

Encapsulation in JavaScript with prototypes

Probably many of you have tried to achieve encapsulation in JavaScript. The two methods known to me are:
The first is a bit more common, I guess:
function myClass(){
    var prv; // and all private stuff here
    // and we don't use prototype, everything is created inside the scope
    return {publicFunc: sth};
}
and the second one:
function myClass2(){
    var prv = {/* private stuff here */};
    Object.defineProperty(this, 'prv', {value: prv});
    return {publicFunc: this.someFunc.bind(this)};
}
myClass2.prototype = {
    get prv(){ throw 'class must be created using new keyword'; },
    someFunc: function(){
        console.log(this.prv);
    }
};
Object.freeze(myClass2);
Object.freeze(myClass2.prototype);
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case? I know it allows external code to access the arguments of someFunc via
myClass.prototype.someFunc.arguments
but only in the case of sloppily executed callbacks (invoked synchronously inside the caller chain). Calling them with setTimeout(cb, 0) breaks the chain and prevents getting at the arguments (as well as returning a value synchronously). At least as far as I know.
Did I miss anything? It's kind of important, as the code will be used by external, untrusted, user-provided code.
I like to wrap my prototypes in a module which returns the object; this way you can use the module's scope for any private variables, protecting consumers of your object from accidentally messing with your private properties.
var MyObject = (function (dependency) {
    // private (static) variables
    var priv1, priv2;
    // constructor
    var module = function () {
        // ...
    };
    // public interfaces
    module.prototype.publicInterface1 = function () {
    };
    module.prototype.publicInterface2 = function () {
    };
    // return the object definition
    return module;
})(dependency);
Then in some other file you can use it like normal:
obj = new MyObject();
Any more 'protecting' of your object is a little overkill for JavaScript imo. If someone wants to extend your object then they probably know what they're doing and you should let them!
As redbmk points out, if you need private instance variables you could use a map with some unique identifier of the object as the key.
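One way to realize that idea is to key the map by the instance itself using a WeakMap, so entries can be collected along with the object; a minimal sketch (the Counter name is just illustrative):
var Counter = (function () {
    // per-instance private data, keyed by the instance itself
    var priv = new WeakMap();
    function Counter(start) {
        priv.set(this, { count: start || 0 });
    }
    Counter.prototype.increment = function () {
        priv.get(this).count += 1;
        return this;
    };
    Counter.prototype.value = function () {
        return priv.get(this).count;
    };
    return Counter;
})();
var c = new Counter(5);
console.log(c.increment().value()); // 6
console.log(c.count);               // undefined - the data isn't reachable from outside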
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case?
Hm, it doesn't really use the prototype. There's no reason to "encapsulate" anything here, as the prototype methods will only be able to use public properties - just like your untrusted code can access them. A simple
function myClass2(){
    var prv = {/* private stuff here */};
    Object.defineProperty(this, 'prv', {value: prv});
    // optionally bind the publicFunc if you need to
}
myClass2.prototype.publicFunc = function(){
    console.log(this.prv);
};
should suffice. Or you use the factory pattern, without any prototypes:
function myClass2(){
    var prv = {/* private stuff here */};
    return {
        prv: prv,
        publicFunc: function(){
            console.log(this.prv); // or even just `prv`?
        }
    };
}
I know it allows external code to access the arguments of someFunc via
myClass.prototype.someFunc.arguments
Simply use strict mode, this "feature" is disallowed there.
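A quick sketch of the effect: with strict mode in force, reading someFunc.arguments throws a TypeError instead of exposing the call's arguments:
'use strict';
function myClass2() { /* ... */ }
myClass2.prototype.someFunc = function () {
    console.log(this.prv);
};
try {
    myClass2.prototype.someFunc.arguments; // throws for strict mode functions
} catch (e) {
    console.log(e instanceof TypeError); // true
}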
It's kind of important, as the code will be used by external, untrusted, user-provided code.
They will always get your secrets if the code is running in the same environment. Always. You might want to try WebWorkers instead, but notice that they're still CORS-privileged.
On enforcing encapsulation in a language that doesn't properly support private, protected, and public class members, I say "Meh."
I like the cleanliness of the Foo.prototype = { ... }; syntax. Making methods public also allows you to unit test all the methods in your "class". On top of that, I just simply don't trust JavaScript from a security standpoint. Always have security measures on the server protecting your system.
Go for "ease of programming and testing" and "cleanliness of code." Make it easy to write and maintain, so whichever you feel is easier to write and maintain is the answer.

Javascript Module pattern - how to reveal all methods?

I have module pattern done like this:
var A = (function(x) {
    var methodA = function() { ... }
    var methodB = function() { ... }
    var methodC = function() { ... }
    ...
    ...
    return {
        methA: methodA,
        methB: methodB
    }
})(window)
This code lets me call only methA() and methB() on A, which is what I want and what I like. Now the problem I have: I want to unit test it with no pain, or at least with minimal effort.
First I thought I could simply return this, but I was wrong. It returns the window object. (Can someone explain why?)
Second, I found a solution somewhere online: include this method inside my return block:
__exec: function() {
    var re = /(\(\))$/,
        args = [].slice.call(arguments),
        name = args.shift(),
        is_method = re.test(name),
        name = name.replace(re, ''),
        target = eval(name);
    return is_method ? target.apply(this, args) : target;
}
This method lets me call the methods like this: A.__exec('methA', arguments);
It is almost what I want, but quite ugly. I would prefer A.test.methA(), where test would never be used in production code - just for revealing private methods.
EDIT
I see people telling me to test the big thing instead of the small parts. Let me explain. In my opinion, an API should reveal only the needed methods, not a bunch of internal functions. The internals, because of their small size and limited functionality, are much easier to test than the whole thing, where you have to guess which part went wrong.
While I may be wrong, I would still like to see how I could return references to all the methods from the object itself :).
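For what it's worth, one way to sketch the A.test.methA() idea is to expose the internals under a separate key only when a flag is set by the test environment (TEST_BUILD and the test key are purely illustrative names):
var A = (function(global) {
    var methodA = function() { /* ... */ };
    var methodB = function() { /* ... */ };
    var methodC = function() { /* ... */ };
    var api = {
        methA: methodA,
        methB: methodB
    };
    // expose the internals only when a test flag is present
    if (global.TEST_BUILD) {
        api.test = {
            methodA: methodA,
            methodB: methodB,
            methodC: methodC
        };
    }
    return api;
})(window);
// in a test environment that sets window.TEST_BUILD = true:
// A.test.methodC();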
Answer to your first question (you return this, but it returns window, not the object you wanted): in JavaScript, this inside a function refers to the global object unless the function is called as a method of an object.
Consider the following examples:
1) this points to the global object:
function(){
return this;
}
2) this points to the object:
var obj = {
    value: "foo",
    getThisObject: function(){
        return this;
    }
}
Your case is example #1, because you have a function that returns an object. This function is not a method of any object.
The best answer to your second question is to test only public methods, but if
that is so important to you, I can propose the following:
create your modules dynamically on the server side.
How it works:
create separate scripts for the functionality you want;
create tests for these separate scripts;
create a method that will combine the scripts into one however you want;
to load the script, reference the combining method.
Hopefully, it can solve your problem. Good luck!
Why not use namespaces to add your modules and public methods to the JS environment? Like this:
window['MyApp']['MODULE1'] = { "METHOD1" : {}, "METHOD2" : {}};
I write modules like this: Sample module in JavaScript.
And test them like this: Simple unit testing in JavaScript.
The use of eval() is generally not a good idea.

Pass a JavaScript function through JSON

I have a server side Python script that returns a JSON string containing parameters for a client side JavaScript.
# Python
import simplejson as json
def server_script():
    params = {'formatting_function': 'foobarfun'}
    return json.dumps(params)
This foobarfun should refer to a JavaScript function. Here is my main client side script
// JavaScript
function client_script() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true); // third argument: async
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4) {
            options = JSON.parse(xhr.responseText);
            options.formatting_function();
        }
    };
    xhr.send(null);
}
function foobarfun() {
    //do_something_funny_here...
}
Of course, options.formatting_function() will complain that "a string is not callable" or something to that effect.
Upon using Chrome's Inspect Element, under the Resources tab, and navigating the left sidebar for XHR > query, I find that client_script interprets options as below. foobarfun is seen as a string.
// JavaScript
options = {"formatting_function": "foobarfun"}
I would have liked client_script to see options as
// JavaScript
options = {"formatting function": foobarfun}
Of course, doing the following within Python will have it complaining that it doesn't know anything about foobarfun
# Python
params = {'formatting_function': foobarfun}
QUESTION:
How should I prepare my JSON string from the server side so that the client script can interpret it correctly? In this case, I want foobarfun to be interpreted as a function object, not as a string.
Or maybe it's something I should do on the client side?
There's nothing you can do in the JSON to get the result you want because JSON has no concept of functions, it's purely a data notation. But there are things you can do client-side.
If your foobarfun function is a global function (which I would recommend against, we'll come to that), then you can call it like this:
window[options.formatting_function]();
That works because global functions are properties of the window object, and you can access properties either by using dotted notation and literals (window.foobarfun), or by using bracketed notation and strings (window["foobarfun"]). In the latter case, of course, the string doesn't have to be a string literal, it can be a string from a property -- your options.formatting_function property, for instance.
But I don't recommend using global functions, the window object is already very crowded. Instead, I keep all of my functions (or as many as possible, in some edge cases) within a master scoping function so I don't add anything to the global namespace:
(function() {
    function foobarfun() {
    }
})();
Now, if you do that, you can't access foobarfun on window because the whole point of doing it is to avoid having it be on window. Instead, you can create your own object and make it a property of that:
(function() {
    var myStuff = {};
    myStuff.foobarfun = foobarfun;
    function foobarfun() {
    }
    function client_script() {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url, true); // third argument: async
        xhr.onreadystatechange = function() {
            if (xhr.readyState == 4) {
                options = JSON.parse(xhr.responseText);
                myStuff[options.formatting_function](); // <== using it
            }
        };
        xhr.send(null);
    }
})();
Frequently, rather than this:
myStuff.foobarfun = foobarfun;
function foobarfun() {
}
you'll see people write:
myStuff.foobarfun = function() {
};
I don't recommend that, either, because then your function is anonymous (the property on myStuff that refers to the function has a name, but the function doesn't). Giving your functions names is a good thing, it helps your tools help you (showing you the names in call stacks, error messages, etc.).
You might also see:
myStuff.foobarfun = function foobarfun() {
};
and that should be valid, it's correct JavaScript. But unfortunately, various JavaScript implementations have various bugs around that (which is called a named function expression), most especially Internet Explorer prior to IE9, which will create two completely different functions at two different times.
All of that said, passing the names of functions around between the client and server usually suggests that you want to step back and look at the design again. Data should drive logic, but not in quite such a literal way. That said, though, there are definitely valid use cases for doing this, you may well have one in your situation.
This question seems like it may be helpful for you:
How to execute a JavaScript function when I have its name as a string
I think what I would do is store references to these methods in an object literal, and then access them through properties.
For example, if I wanted to call foobarfun, among other functions
var my_functions = {
foobarfun: function(){
},
...
};
...
var my_fn = my_functions[options.formatting_function];
my_fn();
Maybe you can treat the returned string as JavaScript rather than JSON, by setting the MIME type to text/javascript.
There is a trick you can do. Analyzing the code of the json module in Python, I noticed that it is possible to make the serializer believe it is serializing an int; that way the __str__ method will be executed.
import json
class RawJS(int):
    def __init__(self):
        self.code = "null"
    def __repr__(self):
        return self.code
    def __str__(self):
        return self.__repr__()
    @classmethod
    def create(cls, code):
        o = RawJS()
        o.code = code
        return o
js = RawJS.create("""function() {
alert("hello world!");
}""")
x = {
    "func": js,
    "other": 10
}
print json.dumps(x)
the output is:
{"other": 10, "func": function() {
alert("hello world!");
}}
The disadvantage of this method is that the output is not valid JSON, so it can't be deserialized in Python, but it is valid JavaScript.
I don't know whether it's possible to trick Python's JSON serializer into doing what you want, but you can use the eval() function to execute any JavaScript code stored in a string variable.
Well, I am not a Python expert, but I know some JavaScript. From the server you can pass information to the client only as a string. If you want to pass a JavaScript function, you can pass its name as a string and evaluate that string on the client side.
Ex. If you pass "xyz" as a string from the server, then on the client side you call:
var funcName = "xyz"; // the value passed from the server would end up here somehow
eval(funcName + "()"); // this line makes a call to the JavaScript function xyz
Hint: You can think of JavaScript's eval utility as being like a reflection utility in Java.
Thanks
shaILU
I'm in the same situation (python backend, needing to pass a JS function through JSON to frontend).
And while the answers here are honestly mind-blowing, I'm asking myself whether passing a JavaScript function that needs to be designed in Python and snuck into JSON is the right answer, from a design perspective.
My specific scenario: I need to tell JS how to format a string (is it a percentage, or does it need a unit breakdown such as thousands, millions, etc.)?
And I'm finding it's cleaner to just do:
python:
chart = {
    "y_axis": {
        "formatter": "percent"
    }
}
JS:
format_type = chart["y_axis"]["formatter"]
switch(format_type) {
    case "percent":
        format_func = (value) => value.toFixed(0) + '%';
        break;
}
chart["y_axis"]["formatter"] = format_func
I find this overall cleaner than attempting to define your JS functions in Python. It's also more decoupled than passing a specific function name from Python to JS.
Which I guess is quite similar to #jkeesh's solution.
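If the set of formatters grows, the switch could also be replaced with a lookup table keyed by the name coming from Python; a small sketch along the same lines, reusing the chart object above (the thousands entry is made up for illustration):
const formatters = {
    percent: (value) => value.toFixed(0) + '%',
    thousands: (value) => (value / 1000).toFixed(1) + 'k' // illustrative extra entry
};
const format_type = chart["y_axis"]["formatter"];
chart["y_axis"]["formatter"] = formatters[format_type];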
