JavaScript: How to compensate for the lack of interfaces in the delegate pattern?

Using RequireJS, I've built a small script. Depending on what is passed to a function, a different file gets required, so it's probably something like a really simple factory. Imagine it like this:
function MyThing(a) {
    if (a > 2) {
        this.script = require('lib1');
    } else {
        this.script = require('lib2');
    }
}
In other languages (I am coming from PHP), I can implement the same interface in both classes and be sure that lib1 and lib2 both share the methods defined in the interface.
If it worked like that in JavaScript, I could later call this.script.getId() or something like that and it would just work, no matter whether lib1 or lib2 was used.
Unfortunately, there are no interfaces in JavaScript (this answer explains very well why they wouldn't make sense), so I am a bit stuck on how I should deal with this.
Of course I could create something like an interface that maps to both libs, but I have the feeling this would be the wrong way to deal with it.
What is the right approach towards the problem I am encountering?

You can just use duck typing, which exploits the fact that JavaScript coerces any value to a boolean in an if test. If a property exists and holds a function, it coerces to true, so:
var lib = doMagicLoading();
// does the lib support the function we need?
if (lib.functionINeed) {
    // it does, so we can simply move on along our intended code path.
    lib.functionINeed();
} else {
    // if it does not, that might be a problem, or it might not be. Warn us:
    console.warn("library does not implement the function I Need...");
}
This is also how JavaScript code makes sure it does "the right thing" on different browsers that support different versions of JS, for instance. Step 1: test whether the function exists. Step 2: call it if it does, or do something else if it doesn't.
Of course, in your case, as the programmer responsible for shipping working code, you can actually guarantee well-behaved code: you wrote tests for your scripts to make sure both code paths do the right thing, and you bundled your code using r.js before deploying. If not, it's time to start writing tests. Don't just trust developers to write working code -- that includes yourself =)
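If you want something closer to an explicit interface than ad-hoc feature checks, you can also assert up front that a loaded lib has every method you intend to call. A minimal sketch (assertImplements and the getId method name are illustrative, not from any library):

// Hypothetical helper: fail fast at load time instead of at call time.
function assertImplements(obj, methodNames) {
    methodNames.forEach(function (name) {
        if (typeof obj[name] !== 'function') {
            throw new Error('lib does not implement ' + name + '()');
        }
    });
    return obj;
}

function MyThing(a) {
    var lib = a > 2 ? require('lib1') : require('lib2');
    this.script = assertImplements(lib, ['getId']);
}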

How to test small snippets of javascript code without a web browser?

I am trying to teach beginners javascript. Here is what I am trying to do:
I will give them a snippet of code such as a function with something incorrect:
const square = function(x) {
    return x + x; // Wrong! This doubles the number instead of squaring it.
};
They will submit the code back to me.
I would like to write some test cases which would run on their code and evaluate if they are correct.
Does it make sense for them to submit their code as js files, and then I have a node.js project which reads their files and passes values to their functions and tests the response ?
Or does it make more sense to use a unit testing framework like mocha and chai etc ?
Both options would work, i.e. running Node directly or using a test framework with test suites on their machines.
If you really want to collect all their submissions and evaluate them on your own, you can simply ask them to export a function with a predefined name. In the case of this square exercise it could be module.exports.square = function (x) { ... }. Then you put all those files in a folder, list them using the fs module (fs.readdir/fs.readdirSync), and dynamically call require on each file and execute its square function.
Note that this approach means you'll be running untrusted code on your machine: a submission could potentially delete a file on your system (or do anything else possible with the privileges you run the program with). But you said it's a beginner class and it sounds like you know everyone, so the approach may be acceptable, and this way you don't have to send test cases to everyone and teach them how to run them (although it would be good to do so at some point).
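A minimal sketch of such a grading harness, assuming every submission lives in a ./submissions folder and exports square via module.exports (the folder name and test cases are made up for illustration):

// grade.js -- run with: node grade.js
const fs = require('fs');
const path = require('path');

const dir = path.join(__dirname, 'submissions');
const cases = [[2, 4], [3, 9], [-5, 25]]; // [input, expected output]

fs.readdirSync(dir).forEach(function (file) {
    const submission = require(path.join(dir, file));
    const failed = cases.filter(function (c) {
        return submission.square(c[0]) !== c[1];
    });
    console.log(file + ': ' + (failed.length === 0 ? 'PASS' : 'FAIL'));
});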

MongoDB map-reduce (via nodejs): How to include complex modules (with dependencies) in scopeObj?

I'm working on a complicated map-reduce process for a mongodb database. I've split some of the more complex code off into modules, which I then make available to my map/reduce/finalize functions by including it in my scopeObj like so:
const scopeObj = {
    userCalculations: require('../lib/userCalculations')
};

function myMapFn() {
    let userScore = userCalculations.overallScoreForUser(this);
    emit({
        'Key': this.userGroup
    }, {
        'UserCount': 1,
        'Score': userScore
    });
}

function myReduceFn(key, objArr) { /*...*/ }

db.collection('userdocs').mapReduce(
    myMapFn,
    myReduceFn,
    {
        scope: scopeObj,
        query: {},
        out: {
            merge: 'userstats'
        }
    },
    function (err, stats) {
        return cb(err, stats);
    }
);
...This all works fine. Until recently I thought it wasn't possible to include module code in a map-reduce scopeObj, but it turns out that was just because the modules I was trying to include all had dependencies on other modules. Completely standalone modules appear to work just fine.
Which brings me (finally) to my question. How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code? One thought I had was to use Browserify or something similar to pull all my dependencies into a single file and then include that somehow... but I'm not sure what the right way to do so would be. I'm also not sure to what extent I'm risking severely bloating my map-reduce code, which (for obvious reasons) has to be efficient.
Does anyone have experience doing something like this? How did it work out, if at all? Am I going down a bad path here?
UPDATE: A clarification of what the issue is I'm trying to overcome:
In the above code, require('../lib/userCalculations') is executed by Node -- it reads in the file ../lib/userCalculations.js and assigns the contents of that file's module.exports object to scopeObj.userCalculations. But let's say there's a call to require(...) somewhere within a function in userCalculations.js. That call isn't actually executed yet. So when I try to call userCalculations.overallScoreForUser() within the map function, MongoDB attempts to execute the require function -- and require isn't defined on mongo.
Browserify, for example, deals with this by compiling all the code from all the required modules into a single JavaScript file with no require calls, so it can be run in the browser. But that doesn't exactly work here, because I need the resulting code to itself be a module that I can use like I use userCalculations in the code sample above. Maybe there's a weird way to run Browserify that I'm not aware of? Or some other tool that just "flattens" a whole hierarchy of modules into a single module?
Hopefully that clarifies a bit.
As a generic response, the answer to your question -- How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code? -- is no, you cannot safely include complex modules in Node code you plan to send to MongoDB for mapReduce jobs.
You mentioned the problem yourself: nested require statements. require itself is synchronous, but if the calls are nested inside functions, they will not be executed until call time, and the MongoDB VM will throw at that point.
Consider the following example of three files: data.json, dep.js and main.js.
// data.json - just something we require "lazily"
true
// dep.js -- equivalent of your userCalculations
module.exports = {
    isValueTrue() {
        // The problem: nested require
        return require('./data.json');
    }
};
// main.js - from here you send your mapReduce to MongoDB.
// require the dependency instantly
const calc = require('./dep.js');
// require is synchronous; the effect is the same if you do:
// const calc = (function () { return require('./dep.js'); })();
console.log('Calc is loaded.');
// Let's mess with unwary devs
require('fs').writeFileSync('./data.json', 'false');
// Is calc.isValueTrue() true or false here? It's false: the nested
// require only runs now, at call time, after data.json was overwritten.
console.log(calc.isValueTrue());
As a general solution, this is not feasible. While the vast majority of modules will likely not have nested require statements, HTTP calls, internal service calls, global variables and the like, there are modules that do. You cannot guarantee that this would work.
Now, as for your local implementation: if you require exact, specific versions of npm modules that you have tested well with this technique and know will work, or that you published yourself, it is somewhat feasible.
However, even in this case, if this is a team effort, there's bound to be a developer down the line who will not know where or how your dependency is used, who will use globals (not on purpose but by omission, e.g. they wrongly calculate this), or who simply won't know the implications of whatever they are doing. If you have a strong integration-testing suite you could guard against this, but the thing is, it's unpredictable. Personally I think that when you can choose between unpredictable and predictable, you should almost always choose predictable.
Now, if a certain library has an explicitly stated purpose of being used in MongoDB mapReduce, this could work. You would have to guard well against omissions and problems and have strong testing infrastructure, but I would make certain the purpose is explicit before feeling safe enough to do this. And of course, if you're using something so complex that you need several npm packages to do it, maybe you can have those functions directly on the MongoDB server, or do your mapReducing in something better suited for the purpose.
To conclude: for a purposefully built library with an explicit mission statement that it is to be used with Node and MongoDB mapReduce, I would make sure my tests cover all mission-critical functionality and then import such an npm package. Otherwise I would not use nor recommend this approach.
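If all you need are a handful of pure helper functions, the predictable route is to keep every scope entry fully self-contained: no require calls, no closures over Node state, nothing that won't survive being serialized and shipped to the server. A sketch along those lines (the scoring logic here is invented purely for illustration):

// Every function in scopeObj is self-contained, so it can be
// serialized into the mapReduce scope without surprises.
const scopeObj = {
    overallScoreForUser: function (userDoc) {
        // illustrative placeholder -- substitute the real calculation
        return (userDoc.wins || 0) * 10 - (userDoc.losses || 0) * 5;
    }
};

function myMapFn() {
    // scope entries are available as globals inside the map function
    emit({ 'Key': this.userGroup }, {
        'UserCount': 1,
        'Score': overallScoreForUser(this)
    });
}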

qx.log.appender Syntax

When declaring qx.log.appender.Native or qx.log.appender.Console, my IDE (PyCharm) complains about the syntax:
// Enable logging in debug variant
if (qx.core.Environment.get("qx.debug"))
{
    qx.log.appender.Native;
    qx.log.appender.Console;
}
(as documented here)
The warning I get is
Expression statement is not assignment or call
Is this preprocessor magic or a feature of JavaScript syntax I'm not aware of yet?
Clarification as my question is ambiguous:
I know that this is perfectly fine JavaScript syntax. From the comments I conclude that there's no magic JS behavior that causes the log appenders to be attached, but rather some preprocessor feature?!
But how does this work? Is it hardcoded handling, or is this syntax available for all classes that follow a specific convention?
The hints on how to turn off linter warnings are useful, but I'd rather like to know how this "magic" works.
Although what's there by default is legal code, I find it somewhat ugly since it's a "useless statement" (the result is ignored), aside from the fact that my editor complains about it too. In my code I always change it to something like this:
var appender;
appender = qx.log.appender.Native;
appender = qx.log.appender.Console;
Derrell
The generator reads your code to determine which classes are required by your application, so that it can produce an optimised application containing only the minimum set of classes.
Those two lines are valid JavaScript syntax, and they exist in order to create a reference to the two classes so that the generator knows to include them - without them, you wouldn't have any logging in your application.
Another way to create the references is to use the #use compiler hint in a class comment, e.g.:
/**
 * #use(qx.log.appender.Native)
 * #use(qx.log.appender.Console)
 */
qx.Class.define("mypackage.Application", {
    extend: qx.application.Standalone,
    members: {
        main: function() {
            this.base(arguments);
            this.debug("Hello world");
        }
    }
});
This works just as well, and there is no unusual syntax - however, in this version your app will always refer to those log appenders, whereas in the skeleton you are using, the references to qx.log.appender.Native/Console are surrounded by if (qx.core.Environment.get("qx.debug")) {...}, which means that in the non-debug ./generate.py build version of your app the log appenders would normally be excluded.
Whether you think this is a good thing or not is up to you - personally, these days I ship all applications with the log appenders enabled and working, so that if someone has a problem I can look at the logs (you can write your own appender that sends the logs to the server, or just remote-control the user's computer).
EDIT: One other detail is that when a class is created, it can have a defer function that does extra initialisation - in this case, the generator detects that qx.log.appender.Console is needed, so it makes sure the class is loaded; the class's defer method then adds the class as an appender to the Qooxdoo logging system.
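To illustrate that mechanism, here is a simplified sketch of what an appender class along those lines could look like - this is not the actual qooxdoo source, and the class name and process implementation are invented:

qx.Class.define("mypackage.log.ServerAppender", {
    statics: {
        // called by the logging system for each log entry
        process: function(entry) {
            // e.g. send the entry to a server
        }
    },
    defer: function(statics) {
        // runs once at class-definition time: merely loading the class
        // registers it as an appender
        qx.log.Logger.register(statics);
    }
});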
This is valid JS syntax, so most likely it's a linter's/preprocessor's warning (it looks similar to ESLint's no-unused-expressions).
Edit:
For the other part of the question - this syntax most likely uses getters or (rather unlikely, as it is a newer feature) Proxies. MDN provides simple examples of how these work under the hood.
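A toy illustration of the getter idea (purely hypothetical; see the previous answer's defer explanation for what qooxdoo actually does):

var ns = {};
Object.defineProperty(ns, "Console", {
    get: function () {
        // side effect triggered by mere property access
        console.log("appender attached");
        return {};
    }
});
ns.Console; // an "expression statement", yet code runs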
By the way: there is no such thing as a "native" JS preprocessor. There are compilers like Babel or TypeScript's compiler, but they are separate projects, not part of vanilla JavaScript.

How to Encode JavaScript files?

How can I encode my JavaScript file like a DLL file?
I mean so that nobody can understand the code, like a DLL created from a CS file in C#.
I need this because I want to give my functions to some company, but I do not want them to understand the inside of my functions... they should just be able to call them.
I see that the jQuery files are encoded so that variables become a, b, c, d, ....
For example, encode this simple code:
function CookiesEnabled() {
    var result = false;
    createCookie("testing", "Hello", 1);
    if (readCookie("testing") != null) {
        result = true;
        eraseCookie("testing");
    }
    return result;
}
There really isn't any way to encrypt/encode code like that in JS (at least I do not know of any), but the standard approach is to use a good minifier, i.e. a program that collapses your code, removes comments, and renames local variables from good long names like width and height to short ones like a and b. Minifiers may even restructure your code so it's as compact as possible; the output usually ends up non-human-readable.
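For instance, a minifier might turn the CookiesEnabled function from the question into something like this (illustrative output, not from any particular tool):

function CookiesEnabled(){var a=!1;return createCookie("testing","Hello",1),null!=readCookie("testing")&&(a=!0,eraseCookie("testing")),a}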
Minifying is often even called JS compiling, but it isn't really compilation. As for which minifier is a good one, I'm not going to go there (there are so many), but for my purposes I've been using the official Microsoft bundler:
http://www.asp.net/mvc/tutorials/mvc-4/bundling-and-minification
You should also check out this question (all of the big names that I know are all there.):
Is there a good JavaScript minifier?

Test-driven development of JavaScript web frontends

This might sound a little dumb, but I'm actually a bit confused about how to approach JavaScript testing for web frontends. As far as I'm concerned, the typical 3-tier architecture looks like this:
Database tier
Application tier
Client tier
Tier 1 is of no concern in this question; tier 2 contains all the program logic ("business logic"), and tier 3 is the frontend.
I do test-driven development for most projects, but only for the application logic, not the frontend. That is because testing the UI is difficult and unusual in TDD, and it is normally not done. Instead, all application logic is separated from the UI, so that this logic is simple to test.
The three-tier architecture supports this: I can design my backend as a REST API which is called by my frontend. How does JS testing fit in? For the typical three-tier architecture, testing JS (i.e. JS on the client) doesn't make much sense, does it?
Update:
I've changed the question's wording from "Testing JavaScript in web frontends" to "Test-driven development of JavaScript web frontends" to clarify my question.
Remember what the point of unit testing is: to ensure that a particular module of code reacts to some stimuli in an expected manner. In JS, a significant portion of your code (unless you have some lifecycle framework like Sencha or YUI) will either be directly manipulating the DOM or making remote calls. To test these things, you simply apply the traditional unit-testing techniques of dependency injection and mocking/stubbing. That means you must write each function or class that you want to unit-test so that it accepts mocks of the dependent structures.
jQuery supports this by allowing you to pass an XML document into all traversal functions. Whereas you might normally write
$(function() { $('.bright').css('color','yellow'); });
you'll instead want to write
function processBright(scope) {
    // jQuery will do the following line automatically, but for the sake of clarity:
    scope = scope || window.document;
    $('.bright', scope).css('color', 'yellow');
}
$(processBright);
Notice that we not only pull the logic out of the anonymous function and give it a name, we also make that function accept a scope parameter. When that value is null, the jQuery calls will still function as normal. However, we now have a vector for injecting a mock document that we can inspect after the function is invoked. The unit test could look like this:
function shouldSetColorYellowIfClassBright() {
    // arrange (a wrapper <div> so find() and the '.bright' selector
    // can match the span as a descendant)
    var testDoc =
        $('<div><span id="a" class="bright">test</span></div>');
    // act
    processBright(testDoc);
    // assert
    if (testDoc.find('#a').css('color') != 'yellow')
        throw new TestFailed("Color property was not changed correctly.");
}
TestFailed could look like this:
function TestFailed(message) {
    this.message = message;
    this.name = "TestFailed";
}
The situation is similar with remote calls, though rather than actually injecting some facility, you could get away with a masking stub. Say you have this function:
function makeRemoteCall(data, callback) {
    if (data.property == 'ok')
        $.getJSON('/someResource.json', callback);
}
You would test it as such:
// test suite setup
var getJSON = $.getJSON;
var stubCalls = [];
$.getJSON = function (url) {
    stubCalls[stubCalls.length] = url;
};
// unit test 1
function shouldMakeRemoteCallWithOkProperty() {
    // arrange
    stubCalls = []; // reset so earlier tests can't interfere
    var arg = { property: 'ok' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 1 || stubCalls[0] != '/someResource.json')
        throw new TestFailed("someResource.json was not requested once and only once.");
}
// unit test 2
function shouldNotMakeRemoteCallWithoutOkProperty() {
    // arrange
    stubCalls = []; // reset so earlier tests can't interfere
    var arg = { property: 'foobar' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 0)
        throw new TestFailed(stubCalls[0] + " was called unexpectedly.");
}
// test suite teardown
$.getJSON = getJSON;
(You can wrap that whole thing in the module pattern to not litter the global namespace.)
To apply all of this in a test-driven manner, you would simply write these tests first. This is a straightforward, no frills, and most importantly, effective way of unit-testing JS.
Frameworks like QUnit can be used to drive your unit tests, but that is only a small part of the problem. Your code must be written in a test-friendly way. Also, frameworks like Selenium, HtmlUnit, jsTestDriver or Watir/N are for integration testing, not for unit testing per se. Lastly, by no means must your code be object-oriented. The principles of unit testing are easily confused with the practical application of unit testing in object-oriented systems. They are separate but compatible ideas.
Testing Styles
I should note that two different testing styles are demonstrated here. The first assumes complete ignorance of the implementation of processBright: it could be using jQuery to add the color style, or it could be doing native DOM manipulation. I'm merely testing that the external behavior of the function is as expected. In the second, I assume knowledge of an internal dependency of the function (namely $.getJSON), and those tests cover the correct interaction with that dependency.
The approach you take depends on your testing philosophy and the overall priorities and cost-benefit profile of your situation. The first test is relatively pure. The second test is simple but relatively fragile: if I change the implementation of makeRemoteCall, the test will break. Preferably, the assumption that makeRemoteCall uses $.getJSON is at least justified by makeRemoteCall's documentation. There are a couple of more disciplined approaches, but one cost-effective approach is to wrap dependencies in wrapper functions; the codebase then depends only on these wrappers, whose implementations can easily be replaced with test stubs at test time.
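A minimal sketch of that wrapper idea (the remote name is invented for illustration):

// The application only ever calls remote.getJSON, never $.getJSON directly.
var remote = {
    getJSON: function (url, callback) {
        return $.getJSON(url, callback);
    }
};

// A test then swaps the wrapper's implementation instead of patching jQuery:
var realGetJSON = remote.getJSON;
remote.getJSON = function (url) { /* record the call for assertions */ };
// ... run the tests ...
remote.getJSON = realGetJSON; // restore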
There is a book titled Test-Driven JavaScript Development by Christian Johansen that might help you. I have only looked at some of the samples in the book (I just downloaded a sample to my Kindle the other day), but it looks like a great book that addresses this very issue. You might check it out.
(Note: I have no connection with Christian Johansen and no investment in sales of the book. Just looks like a good thing that addresses this problem.)
I have a similarly architected application with a JS client tier. In my case I use our company's own JS framework to implement the client tier.
This JS framework is created in OOP style, so I can implement unit testing for the core classes and components. Also, to cover all user interactions (which can't be covered by unit testing), I am using Selenium WebDriver to do integration testing of the framework's visual components and to test them under different browsers.
So, TDD can be applied to JavaScript development if the code under test is written in an OOP manner. Integration testing is also possible (and can be used to do some kind of TDD).
Have a look at QUnit, as well, for unit tests of JavaScript methods and functions.
You can test your application from a user perspective with tools such as Rational Functional Tester, the HP tools or other equivalent software.
These tools test the application as if a user was sitting in front of it, but in an automated fashion. This means that you can test all three tiers at the same time, and especially the Javascript which may be difficult to test otherwise. Functional testing like this may help to find UI bugs and quirks with how the UI is using the data pushed out by your middle tier.
Unfortunately these tools are very expensive, so there may be other equivalents (and I'd be interested to know of such tools).
In our company we use jsTestDriver. It's a feature-rich environment for testing frontends.
Take a look at it.
