Test-driven development of JavaScript web frontends

This might sound a little dumb, but I'm actually a bit confused how to approach JavaScript testing for web frontends. As far as I'm concerned, the typical 3-tier architecture looks like this:
Database tier
Application tier
Client tier
Tier 1 is of no concern in this question. Tier 2 contains all the program logic ("business logic"), and tier 3 is the frontend.
I do test-driven development for most projects, but only for the application logic, not the frontend. That is because testing the UI is difficult and unusual in TDD, and normally not done. Instead, all application logic is separated from the UI, so that logic is simple to test.
The three tier architecture supports this: I can design my backend as a REST API which is called by my frontend. How does JS testing fit in? For the typical three-tier-architecture, JS (i.e. JS on the client) testing doesn't make much sense, does it?
Update:
I've changed the question's wording from "Testing JavaScript in web frontends" to "Test-driven development of JavaScript web frontends" to clarify my question.

Remember what the point of unit testing is: to ensure a particular module of code reacts to some stimuli in an expected manner. In JS, a significant portion of your code (unless you have some lifecycle framework like Sencha or YUI) will either be directly manipulating the DOM or making remote calls. To test these things, you simply apply the traditional unit-testing techniques of dependency injection and mocking/stubbing. That means you must write each function or class that you want to unit-test so that it accepts mocks of the structures it depends on.
jQuery supports this by allowing you to pass an XML document into all traversal functions. Whereas you might normally write
$(function() { $('.bright').css('color','yellow'); });
you'll instead want to write
function processBright(scope) {
    // jQuery will do the following line automatically, but for sake of clarity:
    scope = scope || window.document;
    $('.bright', scope).css('color', 'yellow');
}
$(processBright);
Notice we not only pull the logic out of the anonymous function and give it a name, we also make that function accept a scope parameter. When no value is supplied, the jQuery calls still behave as normal. However, we now have a vector for injecting a mock document that we can inspect after the function is invoked. The unit test could look like
function shouldSetColorYellowIfClassBright() {
    // arrange
    var testDoc =
        $('<div><span id="a" class="bright">test</span></div>');
    // act
    processBright(testDoc);
    // assert
    if (testDoc.find('#a').css('color') != 'yellow')
        throw new TestFailed("Color property was not changed correctly.");
}
TestFailed could look like this:
function TestFailed(message) {
    this.message = message;
    this.name = "TestFailed";
}
The situation is similar with remote calls, though rather than actually injecting some facility, you could get away with a masking stub. Say you have this function:
function makeRemoteCall(data, callback) {
    if (data.property == 'ok')
        $.getJSON('/someResource.json', callback);
}
You would test it as such:
// test suite setup
var getJSON = $.getJSON;
var stubCalls = [];
$.getJSON = function(url) {
    stubCalls.push(url);
};
// unit test 1
function shouldMakeRemoteCallWithOkProperty() {
    // arrange
    var arg = { property: 'ok' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 1 || stubCalls[0] != '/someResource.json')
        throw new TestFailed("someResource.json was not requested once and only once.");
}
// unit test 2
function shouldNotMakeRemoteCallWithoutOkProperty() {
    // arrange
    stubCalls.length = 0;   // reset the stub's call log from the previous test
    var arg = { property: 'foobar' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 0)
        throw new TestFailed(stubCalls[0] + " was called unexpectedly.");
}
// test suite teardown
$.getJSON = getJSON;
(You can wrap that whole thing in the module pattern to not litter the global namespace.)
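For instance, one way that wrapping could look, as a minimal sketch reusing the makeRemoteCall and TestFailed definitions from above:
(function() {
    // module pattern: nothing defined in here leaks into the global namespace
    var getJSON = $.getJSON;   // save the real implementation
    var stubCalls = [];
    $.getJSON = function(url) { stubCalls.push(url); };

    function shouldMakeRemoteCallWithOkProperty() {
        makeRemoteCall({ property: 'ok' });
        if (stubCalls.length != 1 || stubCalls[0] != '/someResource.json')
            throw new TestFailed("someResource.json was not requested once and only once.");
    }

    function shouldNotMakeRemoteCallWithoutOkProperty() {
        makeRemoteCall({ property: 'foobar' });
        if (stubCalls.length != 0)
            throw new TestFailed(stubCalls[0] + " was called unexpectedly.");
    }

    shouldMakeRemoteCallWithOkProperty();
    stubCalls.length = 0;      // reset the call log between tests
    shouldNotMakeRemoteCallWithoutOkProperty();

    $.getJSON = getJSON;       // teardown: restore the real $.getJSON
})();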
To apply all of this in a test-driven manner, you would simply write these tests first. This is a straightforward, no frills, and most importantly, effective way of unit-testing JS.
Frameworks like QUnit can be used to drive your unit tests, but that is only a small part of the problem. Your code must be written in a test-friendly way. Also, frameworks like Selenium, HtmlUnit, jsTestDriver or Watir/WatiN are for integration testing, not for unit testing per se. Lastly, your code by no means has to be object-oriented. The principles of unit testing are easily confused with the practical application of unit testing in object-oriented systems; they are separate but compatible ideas.
Testing Styles
I should note that two different testing styles are demonstrated here. The first assumes complete ignorance of the implementation of processBright. It could be using jQuery to add the color style, or it could be doing native DOM manipulation. I'm merely testing that the external behavior of the function is as expected. In the second, I assume knowledge of an internal dependency of the function (namely $.getJSON), and those tests cover the correct interaction with that dependency.
The approach you take depends on your testing philosophy and on the overall priorities and cost-benefit profile of your situation. The first test is relatively pure. The second test is simple but relatively fragile; if I change the implementation of makeRemoteCall, the test will break. Preferably, the assumption that makeRemoteCall uses $.getJSON is at least justified by the documentation of makeRemoteCall. There are a couple of more disciplined approaches, but one cost-effective approach is to wrap dependencies in wrapper functions. The codebase would depend only on these wrappers, whose implementations can easily be replaced with test stubs at test time.
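As a sketch of that wrapper idea (the remote object and its getJSON method are purely illustrative names):
// a thin wrapper module that the rest of the codebase depends on
var remote = {
    getJSON: function(url, callback) {
        return $.getJSON(url, callback);
    }
};

// production code calls the wrapper instead of jQuery directly ...
function makeRemoteCall(data, callback) {
    if (data.property == 'ok')
        remote.getJSON('/someResource.json', callback);
}

// ... so a test only has to swap out the wrapper, never jQuery itself
remote.getJSON = function(url) { /* record the call and assert on it */ };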

There is a book titled Test-Driven JavaScript Development by Christian Johansen that might help you. I have only looked at some of the samples in the book (just downloaded a sample to Kindle the other day) but it looks like a great book that addresses this very issue. You might check it out.
(Note: I have no connection with Christian Johansen and no investment in sales of the book. Just looks like a good thing that addresses this problem.)

I have a similarly architected application with a JS client tier. In my case I use our company's own JS framework to implement the client tier.
This JS framework is written in an OOP style, so I can unit-test the core classes and components. To cover the user interactions that can't be covered by unit tests, I use Selenium WebDriver to do integration testing of the framework's visual components and exercise them in different browsers.
So, TDD can be applied to JavaScript development if the code under test is written in an OOP manner. Integration testing is also possible (and can be used to drive a kind of TDD).
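For the integration-testing side, here is a minimal sketch of what such a check might look like with the selenium-webdriver package for Node; the URL and selectors are placeholders:
// npm install selenium-webdriver
const { Builder, By, until } = require('selenium-webdriver');

(async function readyButtonShowsConfirmation() {
    const driver = await new Builder().forBrowser('chrome').build();
    try {
        await driver.get('http://localhost:8080/');           // placeholder URL
        await driver.findElement(By.id('ready')).click();     // placeholder element id
        // assert on what the user actually sees after the click
        await driver.wait(until.elementLocated(By.css('.confirmation')), 5000);
    } finally {
        await driver.quit();
    }
})();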

Have a look at QUnit, as well, for unit tests of JavaScript methods and functions.
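For example, the processBright check from the earlier answer could be expressed as a QUnit test along these lines (assuming QUnit 2.x-style assertions):
QUnit.test('sets color to yellow for .bright elements', function (assert) {
    var testDoc = $('<div><span id="a" class="bright">test</span></div>');
    processBright(testDoc);
    assert.equal(testDoc.find('#a').css('color'), 'yellow');
});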

You can test your application from a user perspective with tools such as Rational Functional Tester, the HP tools or other equivalent software.
These tools test the application as if a user was sitting in front of it, but in an automated fashion. This means that you can test all three tiers at the same time, and especially the Javascript which may be difficult to test otherwise. Functional testing like this may help to find UI bugs and quirks with how the UI is using the data pushed out by your middle tier.
Unfortunately these tools are very expensive, so there may be other equivalents (and I'd be interested to know of such tools).

In our company we use jsTestDriver. It's a feature-rich environment for testing frontend code.
Take a look at it.

Related

How to test minified code. Is it even necessary [duplicate]

We recently upgraded to a newer build of a JavaScript minification library.
After a significant amount of quality assurance work by the testing team, it was discovered that the new version of our minifier had an issue that changed the intention and meaning behind a block of code.
(Life lesson: don't upgrade JS minifiers unless you are really convinced you need the new version.)
The minifier is used for client side JavaScript code with a heavy emphasis on DOM related activity, not nearly as much "business logic".
A simplified example of what was broken by the minifier upgrade:
function process(count)
{
    var value = "";
    value += count; //1. Two consecutive += statements
    value += count;
    count++;        //2. Some other statement
    return value;   //3. Return
}
Was minified incorrectly to the following:
function process(n){var t="";return t+n+n,n++,t}
While we could write some unit tests to catch some of the issues potentially, given that the JavaScript is heavy on DOM interactions (data input, etc.), it's very difficult to test thoroughly without user testing (non-automated). We'd pondered using a JS to AST library like Esprima, but given the nature of the changes that could be done to the minified code, it would produce far too many false positives.
We also considered trying to write representative tests, but that seems like a never-ending task (and likely to miss cases).
FYI: This is a very sophisticated web application with several hundred thousand lines of JavaScript code.
We're looking for a methodology for testing the minification process short of "just test everything again, thoroughly, and repeat." We'd like to apply a bit more rigor/science to the process.
Ideally, we could try multiple minifiers without fear of each breaking our code in new subtle ways if we had a better scientific method for testing.
Update:
One idea we had was to:
take the minified output from the old version,
beautify it,
minify with the new version,
beautify, and
visually diff.
It seemed like a good idea; however, the differences were so pervasive that the diff tool flagged nearly every line as different.
Have you considered a unit test framework such as QUnit? It would be quite a bit of work to write the unit tests, but in the end you would have a repeatable test procedure.
Sounds to me like you need to start using automated unit tests within your CI (continuous integration) environment. QUnit has been thrown around, but QUnit is a pretty weak testing system and its assertions are bare-bones at best (it doesn't even really use a good assertion syntax). It only marginally qualifies for TDD and doesn't handle BDD very well either.
Personally I'd recommend Jasmine with JsTestDriver (it can use other unit-test frameworks, or its own, and is incredibly fast, though it has some stability issues I really wish they'd fix), and set up unit tests that check the minification process through multiple comparisons.
Some comparisons would likely need to be:
the original code and its functionality behave as expected
the same behavior compared against the minified code (this is where BDD comes in: expect the same functional results from the minified code)
I'd even go a step further (depending on your minification approach) and add a test that beautifies the minified output and does another comparison, which makes your testing more robust and gives more assurance of validity.
These kinds of tests are why you would probably benefit from a BDD-capable framework like Jasmine rather than pure TDD (compare the mess you found a visual diff to be): you are testing behavior, comparisons, and prior/post states of functionality, not just whether a value is true and is still true after being parsed.
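As a rough sketch of such a behavioral comparison in Jasmine, assuming the original and minified builds are both loaded and expose the function under the hypothetical names processOriginal and processMinified:
describe('process() after minification', function () {
    it('returns the same result as the original build for the same inputs', function () {
        var inputs = [0, 1, 7, 42];
        inputs.forEach(function (count) {
            // hypothetical names: the same function taken from each build
            expect(processMinified(count)).toEqual(processOriginal(count));
        });
    });
});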
Setting up these unit tests could take a while, but it's an iterative approach with a codebase that large: test your critical choke points or fragile points first, then extend tests to everything. The way I've always set my teams up is that nothing from that point on is considered complete and release-ready unless it has unit tests, and any old code that has to be updated/touched/maintained must get unit tests when it is touched, so you are constantly shrinking the amount of untested code in a manageable, logical way while increasing your code coverage.
Once you have unit tests up and running in CI, you can tie them into your build process: fail builds that lack unit tests, send alerts when unit tests fail, proactively monitor each check-in, and so on. You can also auto-generate documentation with JSDoc3.
The issue you are describing is what CI and unit tests were built for, and in your case that approach minimizes the impact of the size of the codebase: the size doesn't make it more complex, it just lengthens the time it takes to get testing working across the board.
Combine that with JSDoc3 and you are doing better than 90% of front-end shops. It's incredibly robust and useful to engineers at that point, and it becomes self-perpetuating.
I really could go on and on about this topic; there's a lot of nuance to how you approach it, get a team to rally behind it, and make it self-forming and self-perpetuating, the most important part being writing testable code. But at the concept level: write unit tests and automate them. Always.
For too long, frontend devs have been half-assing development instead of applying actual engineering rigor and discipline. As the frontend has grown more powerful and prominent, that has to change, and it is changing. Well-tested, well-covered, automated testing and continuous integration for frontend/RIA applications is one of the big pieces of that change.
You could look at something like Selenium WebDriver, which allows you to automate tests for web applications in various environments. There are cloud-hosted VM solutions for multi-environment testing, so you don't get caught out when something works in WebKit but not in IE.
You should definitely look into using source maps to help with debugging minified JavaScript. Source maps also work with supersets of JavaScript such as CoffeeScript or my new favorite, TypeScript.
I use Closure Compiler, which not only minifies but also creates the source maps, not to mention that it is the most aggressive minifier and produces the smallest files. Lastly, you do have to know what's going on in minification and write compatible code; your example code could use some refactoring.
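For instance, one possible rewrite of the example that avoids the consecutive += pattern the minifier mishandled (purely illustrative; whether it helps depends on the minifier):
function process(count) {
    // build the string in a single expression instead of two consecutive += statements
    var value = "" + count + count;
    count++;
    return value;
}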
Check out this article on source maps:
http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
Also check out the documentation for Closure Compiler; it has suggestions on how to write code that minifies well:
https://developers.google.com/closure/compiler/
Not a testing solution, but how about switching to TypeScript to write large JS apps like yours?
I tested it out with TypeScript and its default min engine and it works fine.
Assuming your count arg is a number, the TypeScript would be:
class ProcessorX {
    ProcessX(count: number): string {
        var value = '';
        value += count.toString();
        value += count.toString();
        count++;
        return value;
    }
}
Which produces js like this:
var ProcessorX = (function () {
    function ProcessorX() { }
    ProcessorX.prototype.ProcessX = function (count) {
        var value = '';
        value += count.toString();
        value += count.toString();
        count++;
        return value;
    };
    return ProcessorX;
})();
Then minified to:
var ProcessorX=function(){function n(){}return n.prototype.ProcessX=function(n){var t="";return t+=n.toString(),t+=n.toString(),n++,t},n}()
It's on jsfiddle.
If your count is a string, then this fiddle.
We use Closure Compiler in advanced mode, which both minifies and rewrites code, so we compile our unit tests as well. You could consider minifying your tests alongside your code and running them like that.


Testing Javascript on Client end

I wrote a real-time JS app that has the following stack:
Node.js for server
Socket.io as a communication layer
jQuery on the front end to manipulate the DOM, etc.
For #1, I have absolutely no problem testing. I am currently using nodeunit which is doing a fantastic job.
For #3, I am having a little trouble trying to figure out my approach to testing.
My browser-side code is generally something like this:
var user = {
    // Rendered by html
    id: null,
    roomId: null,
    foo: null,
    // Set by node server.
    socket: null,
    clientId: null,
    ...
};

$('button#ready').click(function() {
    socket.emit('READY');
});

socket.on('INIT', function(clientId, userIds, serverName) {
    user.clientId = clientId;
    user.foo = (serverName == 'bar') ? 'bar' : 'baz';
});
The main part I would like to test involves checking whether the JS on the browser side reacts correctly when the server fires a certain packet with specified arguments, e.g.
user.foo = (serverName == 'bar') ? 'bar' : 'baz';
Any good recommendations on how to approach this?
Check out Mocha. It has very nice support for running both in Node and in the browser. I found it vastly preferable to the other options (Jasmine, Vows).
Also, instead of running a heavyweight integration setup like Selenium, I'm running both server tests and browser tests in one process using Zombie. It allows for some beautiful integration workflows (like triggering something in the browser and then verifying the effects on the server).
YMMV, however, as Zombie relies on JSDOM (a DOM re-implementation in JavaScript). There are rough edges not yet patched/fixed, and stuff breaks. If that's a problem, run Mocha in the browser using real browsers (harnessed through Selenium, perhaps).
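As a sketch of how the INIT handler from the question could be unit-tested under Mocha, assuming its body is pulled out into a plain named function (onInit here is a hypothetical name) instead of an inline anonymous callback:
// hypothetical extraction: socket.on('INIT', function (c, u, s) { onInit(user, c, u, s); });
function onInit(user, clientId, userIds, serverName) {
    user.clientId = clientId;
    user.foo = (serverName == 'bar') ? 'bar' : 'baz';
}

var assert = require('assert');

describe('INIT handler', function () {
    it('sets foo to bar when the server name is bar', function () {
        var user = {};
        onInit(user, 'c1', [], 'bar');
        assert.equal(user.foo, 'bar');
    });

    it('falls back to baz for any other server name', function () {
        var user = {};
        onInit(user, 'c1', [], 'quux');
        assert.equal(user.foo, 'baz');
    });
});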
As much as Pivotal's support for Jasmine is a bit hit and miss (minimal new development, lots of unanswered issues/pull requests on their GitHub), Jasmine is a really good tool for testing client-side code, mainly because of jasmine-jquery.
Jasmine's general approach is pretty solid, and jasmine-jquery has a lot of great matchers for testing the DOM, as well as great DOM sandboxing.
I found testing on the client side a challenge, mainly because I had to stop being so rigid and prescriptive in my tests.
Generally, you should approach client-side testing in a kind of 'fuzzy' way; testing the DOM hierarchy too specifically is a road to hell. Test things like "Does the page contain these words?" rather than "Does div#my-div contain a ul with 3 lis whose content matches this regex?"
The latter is how I started doing tests, but I found it incredibly time-consuming and fragile; if the designer (me) wants to change the structure, it can unnecessarily break many tests. The only way around that is to create 'widgets' for each component, which would be ideal but, as I said, very time-consuming. It actually became a running joke at my office: "How many tests have you done this week, Tim? 2? 3? Wow, 3 tests. Good work."
Anyway…
You can get 90% of the benefit of client-side testing by testing loosely and focusing on what's important, such as workflow and data 'presence', rather than on specific content in a specific location in the page hierarchy.
edit: Also, ensure you break the business logic into units that are independent of the DOM, as much as humanly possible. That makes your life a lot easier, and generally leads to better architecture, which is a plus.
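A tiny illustration of that separation, with made-up names: keep the decision in a pure function and leave only the DOM wiring outside it.
// pure business logic: trivially unit-testable, no DOM or jQuery required
function roomLabel(user) {
    return user.foo === 'bar' ? 'VIP room ' + user.roomId : 'Room ' + user.roomId;
}

// thin DOM layer: the only part that needs a browser (or a DOM stub)
function renderRoomLabel(user) {
    $('#room-label').text(roomLabel(user));
}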
edit 2: You might want to look into how the Rails world does this using Capybara/Cucumber or Selenium.

Are there any static Call-Graph and/or Control-Flow-Graph API for JavaScript? [closed]

Are there any Call-Graph and/or Control-Flow-Graph generators for JavaScript?
Call Graph - http://en.wikipedia.org/wiki/Call_graph
Control Flow Graph - http://en.wikipedia.org/wiki/Control_flow_graph
EDIT: I am looking specifically for a static tool that lets me access the graph using some API/code.
To do this, you need:
parsing,
name resolution (handling scoping)
type analysis (while JavaScript is arguably "dynamically typed", there are all kinds of typed constants including function constants that are of specific interest here)
control flow analysis (to build up the structure of the control flow graphs within methods)
data flow analysis (to track where those types are generated/used)
what amounts to global points-to analysis (to track function constants passed between functions as values to their point of application).
How to do this is pretty well documented in the compiler literature. However, implementing it is a matter of considerable sweat, so answers of the form "you can use a parser result to get what you want" rather miss the point.
If you could apply all this machinery, what you would get as a practical result is a conservative answer, e.g., "A may call B". That is all you can know anyway; consider
void A(int x, int y) { if (x > y) foo.B(); }
Because a tool sometimes simply can't reason about complex logic, you may get "A may call B" even when the application designer knows it isn't possible:
void A(int x) // programmer asserts x < 4
{ if (x > 5) foo.B(); }
eval makes the problem worse, because you need to track string values that arrive at eval calls and parse them to get some kind of clue as to what code is being evaled, and which functions that eval'd code might call. Things get really nasty if somebody passes "eval" in a string to eval :-{ You also likely need to model the program execution context; I suspect there are lots of browser APIs that involve callbacks.
If somebody offers you a tool that has all the necessary machinery completely configured to solve your problem out of the box, that would obviously be great. My suspicion is you won't get such an offer, because such a tool doesn't exist. The reason is all that required infrastructure: it's hard to build, and hardly anybody can justify it for just one tool. Even an "optimizing JavaScript compiler", if you can find one, likely won't have all this machinery, especially the global analysis, and what it does have is unlikely to be packaged for easy consumption for your purpose.
I've been beating my head against this problem since I started programming in 1969 (some of my programs back then were compilers, and I wanted all this stuff). The only way to get it is to amortize the cost of all this machinery across lots of tools.
My company offers the DMS Software Reengineering Toolkit, a package of generic compiler analysis and transformation machinery, with a variety of industrial-strength computer language front ends (including C, C++, COBOL and, yes, JavaScript). DMS offers APIs that enable custom tools to be constructed on its generic foundations.
The generic machinery listed at the top of this answer is all present in DMS, including control flow graphs and data flow analysis available through a clean, documented API. That flow analysis has to be tied to a specific language front end, which takes some work too, so we haven't done it for all languages yet. We have done it for C [tested on systems of 18,000 compilation units as a monolith, including computing the call graph for the 250,000 functions present, including indirect function calls!], COBOL and Java, and we're working on C++.
DMS has the same "JavaScript parser" answer as the other answers in this thread, and viewed from just that perspective DMS isn't any better than the answers that say "build it on top of a parser". The distinction should be clear: the machinery is already present in DMS, so the work is not implementing the machinery and tying it to the parser; it is just tying it to the parser. That is still a bit of work, but a hell of a lot less than if you start with only a parser.
In general it isn't possible to do this. The reason is that functions are first-class and dynamically typed, so for example:
var xs = some_function();
var each = another_function();
xs.map(each);
There are two unknowns. One is which version of 'map' is called (since JavaScript polymorphism can't be resolved statically in the general case), and the other is the value assigned to 'each', which also can't be statically resolved. The only static property this code has is that some 'map' method is called on whatever 'some_function' returned, with whatever 'another_function' returned as its argument.
If, however, that is enough information, there are two resources that might be helpful. One is a general-purpose Javascript parser, especially built using parser combinators (Chris Double's jsparse is a good one). This will let you annotate the parse tree as it is being constructed, and you can add a custom rule to invocation nodes to record graph edges.
The other tool that you might find useful (shameless plug) is a Javascript-to-Javascript compiler I wrote called Caterwaul. It lets you do pattern-matching against syntax trees and knows how to walk over them, which might be useful in your case. It could also help if you wanted to build a dynamic trace from short-term execution (probably your best bet if you want an accurate and detailed result).
WALA is an open-source program analysis framework that can build static call graphs and control-flow graphs for JavaScript:
http://wala.sourceforge.net/wiki/index.php/Main_Page
One caveat is that the call graphs may be missing some edges in the presence of eval, with, and other hard-to-analyze constructs. Also, we're still working on scalability; WALA can't yet analyze jquery in a reasonable amount of time, but some other frameworks can be analyzed. Also, our documentation for building JavaScript call graphs isn't great at the moment (improving it is on my TODO list).
We're actively working on this code, so if you try it and run into issues, you can email the WALA mailing list (https://lists.sourceforge.net/lists/listinfo/wala-wala) or contact me.
I think http://doctorjs.org/ may fit your needs. It has a nice JSON API, is available on GitHub, and is backed by Mozilla. It's written in JS itself and generally does its job pretty well (including dealing with polymorphism, etc.).
Here are a few solutions I can see:
Use Aptana Call Graph view
Aptana is an IDE based on Eclipse that lets you edit and debug JavaScript code.
Use Dynatrace
Dynatrace is a useful tool that lets you trace your code live and see the call graph and hot spots.
Use Firebug
The famous developer add-on for Firefox.
Most of the call graphs generated here will be dynamic, i.e. you'll see the call graph for a given set of actions. If you're looking for static call graphs, check Aptana first. Static call graphs may not show you dynamic calls (code run through eval()).
For a JS approach, check out arguments.callee.caller. It gives you the function that called the function you are in, and you can recurse up the call stack. There is an example in this thread: http://bytes.com/topic/javascript/answers/470251-recursive-functions-arguments-callee-caller.
Be aware that this may not work in all browsers, and you may run into some unexpected things when you reach the top of the "call stack" or hit native functions, so use it at your own risk.
My own example works in IE9 and Chrome 10 (didn't test any other browsers).
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body onload="Function1()">
    <form id="form1" runat="server">
        <div>
        </div>
    </form>
</body>
<script type="text/javascript">
    function Function1()
    {
        Function2();
    }
    function Function2()
    {
        Function3();
    }
    function Function3()
    {
        Function4();
    }
    function Function4()
    {
        var caller = arguments.callee.caller;
        var stack = [];
        while (caller != null)
        {
            stack.push(caller); // this is the text of the function. You can probably write some code to parse out the name and parameters.
            var args = caller.arguments; // the arguments for that function. You can get the actual values here and do something with them if you want.
            caller = caller.caller;
        }
        alert(stack);
    }
</script>
</html>
The closest thing you can get to a call graph is manipulating a full JavaScript AST. This is possible with Rhino; take a look at this article: http://tagneto.blogspot.com/2010/03/requirejs-kicking-some-ast.html
Example from the post:
//Set up shortcut to long Java package name,
//and create a Compiler instance.
var jscomp = Packages.com.google.javascript.jscomp,
    compiler = new jscomp.Compiler(),
    //The parse method returns an AST.
    //astRoot is a kind of Node for the AST.
    //Comments are not present as nodes in the AST.
    astRoot = compiler.parse(jsSourceFile),
    node = astRoot.getChildAtIndex(0);

//Use Node methods to get child nodes, and their types.
if (node.getChildAtIndex(1).getFirstChild().getType() === CALL) {
    //Convert this call node and its children to JS source.
    //This generated source does not have comments and
    //may not be space-formatted exactly the same as the input
    //source.
    var codeBuilder = new jscomp.Compiler.CodeBuilder();
    compiler.toSource(codeBuilder, 1, node);

    //Return the JavaScript source.
    //Need to use String() to convert the Java String
    //to a JavaScript String.
    return String(codeBuilder.toString());
}
In either Javascript or Java, you could walk the AST to build whatever type of call graph or dependency chain you'd like.
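For instance, here is a rough sketch of such a walk (same Rhino/Closure setup as the snippet above; CALL is the same token constant, and the traversal assumes the Node child/sibling accessors getFirstChild, getNext and getType that the snippet already relies on):
// Recursively walk the AST and collect every call node found under root.
// This only gathers the call sites; mapping each one back to its enclosing
// function (to form real call-graph edges) is the next step.
function collectCallNodes(root, found) {
    found = found || [];
    if (root.getType() === CALL) {
        found.push(root);
    }
    for (var child = root.getFirstChild(); child != null; child = child.getNext()) {
        collectCallNodes(child, found);
    }
    return found;
}

var callSites = collectCallNodes(astRoot);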
Not related directly to Node.js, but generally to JavaScript: SAP has released a Web IDE related to HANA (also accessible freely from the HANA Cloud; see more details here: http://scn.sap.com/community/developer-center/cloud-platform/blog/2014/04/15/sap-hana-web-ide-online-tutorial).
In this Web IDE there is a REST-based service that analyzes JavaScript content, with a primary focus (but not only) on creating a call graph. There are many consumers of that service, such as code navigation.
Some more information here (see the Function Flow part):
http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features-sap-hana-web-based-development-workbench
Note: I am the main developer of this service.

Context agnostic JavaScript Testing Framework

I'm looking for a JavaScript Testing Framework that I can easily use in whatever context, be it browser, console, XUL, etc.
Is there such a framework, or a way to easily retrofit an existing framework so it's context-agnostic?
Edit: The testing framework should not be tied to any other framework such as jQuery or Prototype.js, and shouldn't depend on a DOM (or document object) being present. I'm looking for something to test pure JavaScript.
OK, here's something I just brewed based on some earlier work. I hope this would meet your needs.
jsUnity
Lightweight Universal JavaScript Testing Framework
jsUnity is a lightweight universal JavaScript testing framework that is context-agnostic. It doesn't rely on any browser capabilities and therefore can be run inside HTML, ASP, WSH or any other context that uses JavaScript/JScript/ECMAScript.
Sample usage inside HTML
<script type="text/javascript" src="../jsunity.js"></script>
<script type="text/javascript">
    function sampleTestSuite() {
        function setUp() {
            jsUnity.log("set up");
        }
        function tearDown() {
            jsUnity.log("tear down");
        }
        function testLessThan() {
            assertTrue(1 < 2);
        }
        function testPi() {
            assertEquals(Math.PI, 22 / 7);
        }
    }

    // optionally wire the log function to write to the context
    jsUnity.log = function (s) { document.write(s + "<br/>"); };

    var results = jsUnity.run(sampleTestSuite);
    // if results is not false,
    // access results.total, results.passed, results.failed
</script>
The output of the above:
2 tests found
set up
tear down
[PASSED] testLessThan
set up
tear down
[FAILED] testPi: Actual value does not match what's expected: [expected] 3.141592653589793, [actual] 3.142857142857143
1 tests passed
1 tests failed
Jasmine looks interesting.
According to the developers, it was written because none of the other JS test frameworks met all their needs in a single offering, and not requiring things like the DOM, jQuery, or the window object is one of its explicit design points.
I'm thinking of using it with env.js and Rhino/SpiderMonkey/V8/etc. to write client-side tests for my web apps which can be easily run in all the same situations as Python unit tests. (setup.py test, BuildBot, etc.)
You might want to check out YUI Test; it should work fine without a DOM.
These run wherever JavaScript is enabled:
scriptaculous unit-testing
QUnit
There's also JSpec
JSpec is an extremely small yet very powerful testing framework. Utilizing its own custom grammar and pre-processor, JSpec can operate in ways that no other JavaScript testing framework can. This includes many helpful shorthand literals and a very intuitive, readable syntax, and it does not pollute core object prototypes.
JSpec can also be run in a variety of ways, such as via the terminal with Rhino support, via browsers using the DOM or Console formatters, or by using the Ruby JavaScript testing framework, which runs browsers in the background and reports back to the terminal.
I just got Hudson CI to run JasmineBDD, at least for pure javascript unit testing.
Is JsUnit any help? It's designed to run in a browser, but it looks relatively abstract.
