How to test minified code. Is it even necessary [duplicate] - javascript

We recently upgraded to a newer build of a JavaScript minification library.
After a significant amount of quality assurance work by the testing team, it was discovered that the new version of our minifier had an issue that changed the intention and meaning behind a block of code.
(Life lesson: don't upgrade JS minifiers unless you are really convinced you need the new version.)
The minifier is used for client-side JavaScript code with a heavy emphasis on DOM-related activity, and not nearly as much "business logic".
A simplified example of what was broken by the minifier upgrade:
function process(count)
{
    var value = "";
    value += count; // 1. Two consecutive += statements
    value += count;
    count++;        // 2. Some other statement
    return value;   // 3. Return
}
Was minified incorrectly to the following:
function process(n){var t="";return t+n+n,n++,t}
While we could potentially write unit tests to catch some of these issues, the JavaScript is heavy on DOM interaction (data input, etc.), so it's very difficult to test thoroughly without non-automated user testing. We'd considered using a JS-to-AST library like Esprima, but given the nature of the changes a minifier can legitimately make to the code, it would produce far too many false positives.
We also considered trying to write representative tests, but that seems like a never-ending task (and likely to miss cases).
FYI: This is a very sophisticated web application with several hundred thousand lines of JavaScript code.
We're looking for a methodology for testing the minification process short of "just test everything again, thoroughly, and repeat." We'd like to apply a bit more rigor/science to the process.
Ideally, we could try multiple minifiers without fear of each breaking our code in new subtle ways if we had a better scientific method for testing.
Update:
One idea we had was to:
take the minified output from the old version,
beautify it,
minify with the new version,
beautify that, and
visually diff the two beautified outputs.
It seemed like a good idea; however, the differences were so pervasive that the diff tool flagged nearly every line as different.

Have you considered a unit test framework such as QUnit? It would be quite a bit of work to write the unit tests, but in the end you would have a repeatable test procedure.

Sounds to me like you need to start using automated unit tests within your CI (continuous integration) environment. QUnit has been thrown around, but really QUnit is a pretty weak testing system and its assertions are bare-bones at best (it doesn't even really use a good assertion-based syntax). It only marginally qualifies for TDD and doesn't handle BDD very well either.
Personally I'd recommend Jasmine with JsTestDriver (it can use other unit-test frameworks, or its own, and is incredibly fast, though it has some stability issues I really wish they'd fix), and set up unit tests that can check the minification process through multiple comparisons.
Some comparisons would likely need to be:
the original code and its functionality behave as expected
the same behavior is observed in the minified code (this is where BDD comes in: expect the same functional results from the minified code); see the sketch just below this list
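For illustration, here is a minimal Jasmine spec sketch (using the process function from the question; assume the spec file is loaded once against a page built from the original bundle and once against a page built from the minified bundle):

describe('process', function () {
    it('appends the count twice and returns the accumulated string', function () {
        // "" += 1 twice gives "11"; the later count++ must not affect the result.
        expect(process(1)).toBe('11');
        expect(process(7)).toBe('77');
    });
});

Run against the broken minified build from the question, this spec fails (that version returns the empty string), which is exactly the kind of regression you want the suite to surface automatically.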
I'd even go a step further (depending on your minification approach) and have a test that then beautifies the minified output and does another comparison (this makes your testing more robust and gives you more confidence in its validity).
These kinds of tests are why you would probably benefit from a BDD-capable framework like Jasmine, as opposed to pure TDD (witness the mess you got from the visual diff), because you are testing behavior and comparing prior/post states of functionality, not just whether a is true and still true after being parsed.
Setting up these unit tests could take a while, but it's an iterative approach with a codebase that large: test your critical choke points or fragile points first, then extend tests to everything. The way I've always set my teams up is that anything from this point on is not considered complete and release-candidate ready unless it has unit tests, and anything old that has to be updated/touched/maintained must have unit tests written when it is touched, so that you are constantly shrinking the amount of untested code in a manageable way while increasing your code coverage.
Once you have unit tests up and running in CI, you can tie them into your build process: fail builds that have no unit tests, send out alerts when the unit tests fail, proactively monitor on each check-in, auto-generate documentation with JSDoc3, and so on.
The issue you are describing is what CI and unit tests were built for, and in your case that approach minimizes the impact of the size of the codebase: the size doesn't make it more complex, it just makes it take longer to get testing working across the board.
Then combine that with JSDoc3 and you are doing better than 90% of front-end shops. It's incredibly robust and useful to engineers at that point, and it becomes self-perpetuating.
I really could go on and on about this topic; there's a lot of nuance to how you approach it, get a team to rally behind it, and make it self-forming and self-perpetuating, the most important part being writing testable code. But at the concept level: write unit tests and automate them. Always.
For too long, front-end devs have been half-assing development, not applying actual engineering rigor and discipline. As the front end has grown more and more powerful, that has to change, and it is changing. Well-tested, well-covered, automated testing and continuous integration for front-end/RIA applications is one of the big needs of that change.

You could look at something like Selenium WebDriver, which allows you to automate tests for web applications in various environments. There are some cloud-hosted VM solutions for multi-environment testing, so you don't get caught out when it works in WebKit but not in IE.
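As a rough sketch (assuming the Node selenium-webdriver package; the URL and element ids are hypothetical), a smoke test against the minified build might look like this:

var webdriver = require('selenium-webdriver');
var By = webdriver.By;
var until = webdriver.until;

var driver = new webdriver.Builder().forBrowser('chrome').build();

// Load the page served from the minified bundle and assert on observable behavior.
driver.get('http://localhost:8080/app')
    .then(function () { return driver.findElement(By.id('ready')); })
    .then(function (button) { return button.click(); })
    .then(function () { return driver.wait(until.elementLocated(By.css('.result')), 5000); })
    .then(function () { return driver.quit(); });

Pointing the same test at a build from each candidate minifier lets the suite, rather than manual QA, catch a minifier that breaks behavior.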

You should definitely look into using source maps to help with debugging minified JavaScript. Source maps also work with supersets of JavaScript such as CoffeeScript or my new favorite, TypeScript.
I use Closure Compiler, which not only minifies but will also create the source maps; it is also the most aggressive and produces the smallest files. Lastly, you kind of have to know what's going on in minification and write compatible code; your example code could use some refactoring.
Check out this article on source maps:
http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
Also check out the documentation for Closure Compiler; it has suggestions on how to write better code for minification:
https://developers.google.com/closure/compiler/
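As a rough sketch (flag names per the Closure Compiler documentation linked above; file names are hypothetical), a build step that emits a source map and wires it up looks roughly like this:

// Typical Closure Compiler invocation, run from a build script:
//   java -jar closure-compiler.jar --js app.js --js_output_file app.min.js
//        --create_source_map app.min.js.map --compilation_level SIMPLE_OPTIMIZATIONS
// Appended to the end of app.min.js so the browser's dev tools load the map:
//# sourceMappingURL=app.min.js.map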

Not a testing solution, but how about switching to TypeScript to write large JS apps like yours?
I tested it out with TypeScript and its default min engine and it works fine.
Assuming your count arg is a number.
The TypeScript will be:
class ProcessorX {
    ProcessX(count: number): string {
        var value = '';
        value += count.toString();
        value += count.toString();
        count++;
        return value;
    }
}
Which produces js like this:
var ProcessorX = (function () {
    function ProcessorX() { }
    ProcessorX.prototype.ProcessX = function (count) {
        var value = '';
        value += count.toString();
        value += count.toString();
        count++;
        return value;
    };
    return ProcessorX;
})();
Then minified to:
var ProcessorX=function(){function n(){}return n.prototype.ProcessX=function(n){var t="";return t+=n.toString(),t+=n.toString(),n++,t},n}()
It's on jsfiddle.
If your count is a string, then this fiddle.

We use Closure Compiler in advanced mode, which both minifies and changes code, so we compile our unit tests as well. You could consider minifying your tests alongside your code and running them like that.
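A rough sketch of that workflow (flags per the Closure Compiler docs; file names are hypothetical):

// Compile the application and its tests together so Closure renames symbols consistently:
//   java -jar closure-compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS
//        --js app.js --js tests.js --js_output_file compiled_tests.js
// Then run compiled_tests.js in your test runner; if advanced-mode renaming or dead-code
// removal breaks anything, the compiled test suite fails before production code ships.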

Related

Can I get performance testing in node.js like a unit testing

I need performance testing/tuning in Node.js, runnable in CI from the CLI like a unit test (targeting function calls, not networking).
I currently use Mocha's timeout():
var dictionary_handle;

it("Dictionary.init_dictionary timeout", function (done) {
    dictionary_handle = Dictionary.init_dictionary(dictionary_data);
    done();
}).timeout(1000);

it("Linad.initialize timeout", function (done) {
    Linad.initialize(function (err) {
        done();
    });
}).timeout(6000);
But that is not enough. I need something that:
can be used in CI,
can execute a function multiple times, and
outputs performance metric information.
I believe you're looking for some form of microbenchmark module. There are a number of options and your requirements match them all, so I cannot pick the best candidate; you will need to do your own investigation.
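For instance, here is a minimal sketch using the benchmark npm package (one such option; Dictionary and dictionary_data are the names from your question):

var Benchmark = require('benchmark');

new Benchmark.Suite()
    .add('Dictionary.init_dictionary', function () {
        Dictionary.init_dictionary(dictionary_data);
    })
    .on('cycle', function (event) {
        console.log(String(event.target)); // e.g. "Dictionary.init_dictionary x 1,234 ops/sec ..."
    })
    .run();

The suite runs the function many times and reports operations per second, which you can log from CI or compare against a threshold.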
However, given that you have the performance-testing tag added, I can give you a generic piece of advice: when it comes to any form of performance testing, you need to make sure that your load test exactly mimics your application under test's real-life usage.
If your application under test is a Node.js-based web application, there are a lot of factors to consider apart from the performance of individual functions, so in that case I would recommend a protocol-level load testing tool. If you want to stick to JavaScript you can use something like k6, or consider another standalone free/open-source load testing solution that can simulate real users closely enough with minimal effort on your side.
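For reference, a minimal k6 script sketch (per k6's documented scripting API; the URL and load profile are hypothetical):

import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = { vus: 10, duration: '30s' }; // 10 virtual users for 30 seconds

export default function () {
    // Hit the endpoint that exercises the functions you care about, under realistic load.
    const res = http.get('http://localhost:3000/api/dictionary');
    check(res, { 'status is 200': function (r) { return r.status === 200; } });
    sleep(1);
}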
Dmitri T is correct: you need to be careful with what and how you test. That being said, https://github.com/anywhichway/benchtest requires almost no work to instrument existing unit tests, so it may be worth using.

Testing Javascript on Client end

I wrote a real time js app that has the following stack:
Node.js for server
Socket.io as a communication layer
jQuery on the front end to manipulate the DOM, etc.
For #1, I have absolutely no problem testing. I am currently using nodeunit which is doing a fantastic job.
For #3, I am having a little trouble trying to figure out my approach to testing.
My browser-side code generally looks something like this:
var user = {
    // Rendered by html
    id: null,
    roomId: null,
    foo: null,
    // Set by node server.
    socket: null,
    clientId: null,
    ...
}

$('button#ready').click(function () {
    socket.emit('READY');
});

socket.on('INIT', function (clientId, userIds, serverName) {
    user.clientId = clientId;
    user.foo = (serverName == 'bar') ? 'bar' : 'baz';
});
The main part I would like to test involves checking whether the JS on the browser side reacts accordingly when the server fires a certain packet with specified arguments:
i.e.
user.foo = (serverName == 'bar') ? 'bar' : 'baz';
Any good recommendations on how to approach this?
Check out Mocha. It has a very nice support to run both in node and in the browser. I found it vastly more preferable to other options (Jasmine, Vows).
Also, instead of running a heavy-weight integration setup like Selenium, I'm running both server tests and browser tests in one process using Zombie. It allows for some beautiful integration workflows (like triggering something on the browser and then verifying effects on the server).
YMMV, however, as Zombie relies on JSDOM (a DOM re-implementation in JavaScript). There are rough edges not yet patched/fixed, and stuff breaks. If that's a problem, run Mocha in the browser using real browsers (harnessed through Selenium, perhaps).
As much as Pivotal's support for Jasmine is a bit hit and miss (minimal new development, lots of unanswered issues/pull requests on their GitHub), Jasmine is a really good tool for testing client-side code, mainly because of jasmine-jquery.
Jasmine's general approach is pretty solid, and jasmine-jquery has a lot of great matchers for testing the DOM, as well as great DOM sandboxing.
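A small sketch of those matchers in use (fixture helper and matcher names per jasmine-jquery; the markup is hypothetical):

describe('room list', function () {
    beforeEach(function () {
        // jasmine-jquery injects this markup into a sandbox and cleans it up after each spec.
        setFixtures('<ul id="rooms"><li>Lobby</li></ul>');
    });

    it('shows the lobby', function () {
        expect($('#rooms')).toContainText('Lobby');
        expect($('#rooms li')).toBeVisible();
    });
});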
I found testing on the client-side a challenge, mainly because I had to stop being so rigid and prescriptive in my tests.
Generally, you should approach client-side testing in a kind of "fuzzy" way; testing the DOM hierarchy too specifically is a road to hell. Test things like "Does the page contain these words?" over "Does div#my-div contain a ul with 3 li's whose content matches this regex?"
The latter is how I started doing tests, but I found it incredibly time-consuming and fragile; if the designer (me) wants to mess with the structure, it could unnecessarily break many tests. The only way around that is to create 'widgets' for each component, which would be ideal but, as I said, very time-consuming. It actually became a running joke at my office: "How many tests have you done this week, Tim? 2? 3? Wow, 3 tests. Good work."
Anyway…
You can get 90% of the benefit of client-side testing by testing loosely and focusing on what's important, such as workflow and data 'presence', rather than specific content in a specific location in the page hierarchy.
edit: Also, ensure you break the business logic into units that are independent of the DOM as much as humanly possible. That makes your life a lot easier and generally leads to better architecture, which is a plus.
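Applied to the INIT handler from the question, that might look like the following sketch (onInit is a hypothetical extracted function):

// Pure logic: no socket and no DOM, so it is trivial to unit-test in any framework.
function onInit(user, clientId, userIds, serverName) {
    user.clientId = clientId;
    user.foo = (serverName == 'bar') ? 'bar' : 'baz';
}

// The socket wiring stays thin:
socket.on('INIT', function (clientId, userIds, serverName) {
    onInit(user, clientId, userIds, serverName);
});

A spec then just calls onInit with a plain object and asserts on the result, with no real socket involved.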
edit 2: You might want to look into how the Rails world does this using Capybara/Cucumber or Selenium.

Javascript build system to handle large objects

I have a huge web app in JavaScript and it's starting to become something of a hassle to manage everything. I broke everything up into little files, each with its own subsection of the app.
eg. if the app is named "myApp" I have a file called
myApp.ajax.js which contains
myApp.ajax = (function(){return {/*stuff*/}})();
and one called
myApp.canvas.js which contains
myApp.canvas = (function(){return {/*stuff*/}})();
so on and so forth. When I concatenate all the files and minify them, I get a huge garbled mess of all of this together. So I was wondering, is there a build system that would turn all of this into one single
var myApp = {
    ajax: /*stuff*/,
    canvas: /*stuff*/,
    /*etc*/
}
when it compiles everything?
I ask because I ran a small test and noticed a serious performance decay when having each part of the object separate. Test is here: http://jsperf.com/single-object-vs-multiples
I'm not sure I get the point of this. Concatenating and minifying JavaScript will always end up as a fairly garbled mess (at least to read). Just make sure that you concatenate first and then minify; then the compiler you are using can optimize the whole thing.
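A minimal sketch of that "concatenate first, then minify" step with plain Node (file names are hypothetical):

var fs = require('fs');

var files = ['myApp.core.js', 'myApp.ajax.js', 'myApp.canvas.js'];
var bundle = files.map(function (f) {
    return fs.readFileSync(f, 'utf8');
}).join('\n');

fs.writeFileSync('myApp.bundle.js', bundle);
// Feed myApp.bundle.js to your minifier as a single input so it can optimize across modules.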
As for the performance concerns: the jsPerf test told me that the way you attach your modules is roughly 12% slower (at least in Firefox; it seems to be different for V8). But you are doing it only once at application load, not 1,000,000 times, so that can only make a difference of microseconds at page load.
From what I gather from your question, and from what I have seen, people tend to use make to collate multiple JS files into one and then run compression against it, etc.; see this.
Yep, there is http://brunch.io/ which handles concatenation and stuff like that for you.

Test-driven development of JavaScript web frontends

This might sound a little dumb, but I'm actually a bit confused how to approach JavaScript testing for web frontends. As far as I'm concerned, the typical 3-tier architecture looks like this:
Database tier
Application tier
Client tier
Tier 1 is of no concern in this question. Tier 2 contains all the program logic ("business logic"), and tier 3 is the frontend.
I do test-driven development for most projects, but only for the application logic, not the frontend. That is because testing the UI is difficult and unusual in TDD, and normally not done. Instead, all application logic is separated from UI, so that it is simple to test that logic.
The three tier architecture supports this: I can design my backend as a REST API which is called by my frontend. How does JS testing fit in? For the typical three-tier-architecture, JS (i.e. JS on the client) testing doesn't make much sense, does it?
Update:
I've changed the question's wording from "Testing JavaScript in web frontends" to "Test-driven development of JavaScript web frontends" to clarify my question.
Remember what the point of unit-testing is: to ensure a particular module of code reacts to some stimuli in an expected manner. In JS, a significant portion of your code, (unless you have some lifecycle framework like Sencha or YUI) will either be directly manipulating the DOM or making remote calls. To test these things, you simply apply traditional unit-testing techniques of dependency injection and mocking/stubbing. That means you must write each function, or class, that you want to unit-test to accept mocks of the dependent structures.
jQuery supports this by allowing you to pass an XML document into all traversal functions. Whereas you might normally write
$(function() { $('.bright').css('color','yellow'); });
you'll instead want to write
function processBright(scope) {
    // jQuery will do the following line automatically, but for sake of clarity:
    scope = scope || window.document;
    $('.bright', scope).css('color', 'yellow');
}
$(processBright);
Notice we not only pull the logic out of the anonymous function and give it a name, we also make that function accept a scope parameter. When that value is null, the jQuery calls will still function as normal. However, we now have a vector for injecting a mock document that we can inspect after the function is invoked. The unit-test could look like
function shouldSetColorYellowIfClassBright() {
    // arrange
    var testDoc =
        $('<html><body><span id="a" class="bright">test</span></body></html>');
    // act
    processBright(testDoc);
    // assert
    if (testDoc.find('#a').css('color') != 'yellow')
        throw TestFailed("Color property was not changed correctly.");
}
TestFailed could look like this:
function TestFailed(message) {
    this.message = message;
    this.name = "TestFailed";
}
The situation is similar with remote calls, though rather than actually injecting some facility, you could get away with a masking stub. Say you have this function:
function makeRemoteCall(data, callback) {
    if (data.property == 'ok')
        $.getJSON({url: '/someResource.json', callback: callback});
}
You would test it as such:
// test suite setup
var getJSON = $.getJSON;
var stubCalls = [];
$.getJSON = function (args) {
    stubCalls[stubCalls.length] = args.url;
};

// unit test 1
function shouldMakeRemoteCallWithOkProperty() {
    // arrange
    stubCalls = []; // reset the stub's call log for this test
    var arg = { property: 'ok' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 1 || stubCalls[0] != '/someResource.json')
        throw TestFailed("someResource.json was not requested once and only once.");
}

// unit test 2
function shouldNotMakeRemoteCallWithoutOkProperty() {
    // arrange
    stubCalls = []; // reset the stub's call log for this test
    var arg = { property: 'foobar' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 0)
        throw TestFailed(stubCalls[0] + " was called unexpectedly.");
}

// test suite teardown
$.getJSON = getJSON;
(You can wrap that whole thing in the module pattern to not litter the global namespace.)
To apply all of this in a test-driven manner, you would simply write these tests first. This is a straightforward, no frills, and most importantly, effective way of unit-testing JS.
Frameworks like QUnit can be used to drive your unit tests, but that is only a small part of the problem. Your code must be written in a test-friendly way. Also, frameworks like Selenium, HtmlUnit, jsTestDriver or Watir/N are for integration testing, not for unit testing per se. Lastly, by no means must your code be object-oriented. The principles of unit testing are easily confused with the practical application of unit testing in object-oriented systems; they are separate but compatible ideas.
Testing Styles
I should note that two different testing styles are demonstrated here. The first assumes complete ignorance of the implementation of processBright. It could be using jQuery to add the color style, or it could be doing native DOM manipulation. I'm merely testing that the external behavior of the function is as expected. In the second, I assume knowledge of an internal dependency of the function (namely $.getJSON), and those tests cover the correct interaction with that dependency.
The approach you take depends on your testing philosophy and overall priorities and cost-benefit profile of your situation. The first test is relatively pure. The second test is simple but relatively fragile; if I change the implementation of makeRemoteCall, the test will break. Preferably, the assumption that makeRemoteCall uses $.getJSON is at least justified by the documentation of makeRemoteCall. There are a couple more disciplined approaches, but one cost-effective approach is to wrap dependencies in wrapper functions. The codebase would depend only on these wrappers, whose implementations can be easily replaced with test stubs at test-time.
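A minimal sketch of that wrapper approach (the remote object and its method are hypothetical names):

// The only place in the codebase that touches jQuery's ajax API directly:
var remote = {
    getJSON: function (url, callback) {
        $.getJSON(url, callback);
    }
};

// Production code depends on remote.getJSON(...); at test time the suite swaps it out:
// var realGetJSON = remote.getJSON;
// remote.getJSON = function (url, callback) { callback({ canned: 'data' }); };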
There is a book titled Test-Driven JavaScript Development by Christian Johansen that might help you. I have only looked at some of the samples in the book (just downloaded a sample to Kindle the other day) but it looks like a great book that addresses this very issue. You might check it out.
(Note: I have no connection with Christian Johansen and no investment in sales of the book. Just looks like a good thing that addresses this problem.)
I have a similarly architected application with a JS client tier. In my case I use our company's own JS framework to implement the client tier.
This JS framework is written in an OOP style, so I can implement unit testing for the core classes and components. Also, to cover all user interactions (which can't be covered by unit testing), I am using Selenium WebDriver to do integration testing of the framework's visual components and test them under different browsers.
So, TDD can be applied to JavaScript development if the code under test is written in an OOP manner. Integration testing is also possible (and can be used to do some kind of TDD).
Have a look at QUnit, as well, for unit tests of JavaScript methods and functions.
You can test your application from a user perspective with tools such as Rational Functional Tester, the HP tools or other equivalent software.
These tools test the application as if a user was sitting in front of it, but in an automated fashion. This means that you can test all three tiers at the same time, and especially the Javascript which may be difficult to test otherwise. Functional testing like this may help to find UI bugs and quirks with how the UI is using the data pushed out by your middle tier.
Unfortunately these tools are very expensive, so there may be other equivalents (and I'd be interested to know of such tools).
In our company we use jsTestDriver. It's a feature-rich environment for testing the front end.
Take a look at it.
