So I am a newbie in JavaScript and I had been going through someone else's code when I found this:
describe('deviceready', function() {
    it('should report that it fired', function() {
        spyOn(app, 'report');
        app.deviceready();
        expect(app.report).toHaveBeenCalledWith('deviceready');
    });
});
What I don't understand is:
What exactly does the describe keyword do?
info:
- It's a PhoneGap application
- We are using the spine.js and jQuery libraries
Describe is a function in the Jasmine testing framework. It simply describes the suite of test cases enumerated by the "it" functions.
It is also used in the Mocha (mochajs) framework.
describe is not part of JavaScript; it is a function defined in the testing library you are using (namely Jasmine).
According to the Jasmine documentation:
The describe function is for grouping related specs, typically each test file has one at the top level. The string parameter is for naming the collection of specs, and will be concatenated with specs to make a spec's full name.
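For example, a minimal sketch of how describe groups specs (the calculator/add names here are made up for illustration):
describe('calculator', function() {
    describe('add', function() {
        it('returns the sum of two numbers', function() {
            // full name becomes "calculator add returns the sum of two numbers"
            expect(add(2, 3)).toEqual(5);
        });
    });
});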
The describe block is also used to group tests together in Jest.
Have a look at the following link and go to the scoping section; it explains why and how describe is used.
https://jestjs.io/docs/setup-teardown
Jest also has a describe function:
https://jestjs.io/docs/api#describename-fn
In NetBeans I just added a comment with global, e.g.
/* global myLibrary */
in order to get it to recognize my functions.
However, this seems not to work in VS Code. For example, if I have a function named myFunction in the myLibrary module, when I click on "Go To Definition", it tells me that there was "No definition found for myFunction".
So how do I get VS Code to recognize my function?
I believe VS Code does not provide this feature by default. You will have to define the configuration yourself if your project has mixed content in it. I use VS Code for Angular 2 (using ng-cli), so it has all the setup generated for me (I can go to definition when it is valid).
Have a look at these two links; I hope this helps you:
https://code.visualstudio.com/docs/languages/javascript#_automatic-type-acquisition
https://code.visualstudio.com/docs/languages/jsconfig#_what-is-jsconfigjson
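As a rough sketch, a jsconfig.json at the project root that pulls in your own library folder might look like this (the paths below are placeholders, assuming your sources live in src/ and lib/myLibrary/):
{
    "compilerOptions": {
        "target": "es6",
        "checkJs": false
    },
    "include": [
        "src/**/*.js",
        "lib/myLibrary/**/*.js"
    ]
}
With the library files included in the project defined by jsconfig.json, "Go To Definition" should be able to resolve myFunction.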
When declaring qx.log.appender.Native or qx.log.appender.Console, my IDE (PyCharm) complains about the syntax:
// Enable logging in debug variant
if (qx.core.Environment.get("qx.debug"))
{
    qx.log.appender.Native;
    qx.log.appender.Console;
}
(as documented here)
The warning I get is
Expression statement is not assignment or call
Is this preprocessor magic, or a feature of JavaScript syntax I'm not aware of yet?
Clarification as my question is ambiguous:
I know that this is perfectly fine JavaScript syntax. From the comments I conclude that there's no magic JS behavior that causes the log appenders to be attached, but rather some preprocessor feature?!
But how does this work? Is it hardcoded handling, or is this syntax available for all classes that follow a specific convention?
The hints on how to turn off linter warnings are useful, but I'd rather know how this "magic" works.
Although what's there by default is legal code, I find it to be somewhat ugly since it's a "useless statement" (result is ignored), aside from the fact that my editor complains about it too. In my code I always change it to something like this:
var appender;
appender = qx.log.appender.Native;
appender = qx.log.appender.Console;
Derrell
The generator reads your code to determine what classes are required by your application, so that it can produce an optimised application with only the minimum classes.
Those two lines are valid JavaScript syntax, and exist in order to create a reference to the two classes so that the generator knows to include them - without them, you wouldn't have any logging in your application.
Another way to create the references is to use the #use compiler hint in a class comment, e.g.:
/**
* #use(qx.log.appender.Native)
* #use(qx.log.appender.Console)
*/
qx.Class.define("mypackage.Application", {
extend: qx.application.Standalone,
members: {
main: function() {
this.base(arguments);
this.debug("Hello world");
}
}
});
This works just as well and there is no unusual syntax - however, in this version your app will always refer to those log appenders. In the skeleton you are using, the references to qx.log.appender.Native/Console were surrounded by if (qx.core.Environment.get("qx.debug")) {...}, which means that in the non-debug ./generate.py build version of your app the log appenders would normally be excluded.
Whether you think this is a good thing or not is up to you - personally, these days I ship all applications with the log appenders enabled and working so that if someone has a problem I can look at the logs (you can write your own appender that sends the logs to the server, or just remote control the user's computer)
EDIT: One other detail is that when a class is created, it can have a defer function that does extra initialisation - in this case, the generator detects that qx.log.appender.Console is needed, so it makes sure the class is loaded; the class's defer method then registers the class as an appender with the Qooxdoo logging system.
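To illustrate, here is a rough sketch (not the actual qx.log.appender.Console source, and assuming qx.log.Logger.register is the registration hook) of how a class can use defer to wire itself up as soon as it is loaded:
qx.Class.define("mypackage.MyAppender", {
    statics: {
        // called by the logging system for each log entry
        process: function(entry) {
            // write the entry somewhere useful
        }
    },

    // defer runs once, right after the class is defined/loaded, so merely
    // referencing the class in your code is enough to activate the appender
    defer: function(statics) {
        qx.log.Logger.register(statics);
    }
});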
This is valid JS syntax, so most likely it's a linter/preprocessor warning (it looks similar to ESLint's no-unused-expressions rule).
Edit:
For the other part of the question - this syntax most likely uses getters or (rather unlikely as it is a new feature) Proxies. MDN provides simple examples of how this works under the hood.
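For instance, a hedged sketch of how a property getter could make a bare expression statement do real work (all names here are made up):
var registry = [];
var app = {};
Object.defineProperty(app, 'consoleAppender', {
    get: function() {
        // side effect runs whenever the property is merely read
        registry.push('console appender attached');
        return registry.length;
    }
});

app.consoleAppender; // an expression statement, but the getter still executes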
Btw: there is no such thing as a "native" JS preprocessor. There are compilers like Babel or TypeScript's compiler, but they are separate projects, not part of vanilla JavaScript.
Here's what I'm looking for:
I want to use the wonderful features of SIMPLE mode minification while disabling just one specific feature (local function inlining).
UPDATE: The answer is NO, it's not possible given my setup. But there is a workaround for me, since I am using Grails.
As @Chad has explained below, "This violates core assumptions of the compiler". See my UPDATE3 below for more info.
IN QUESTION FORM:
I'm using CompilationLevel.SIMPLE_OPTIMIZATIONS which does everything I want, except that it's inlining my local functions.
Is there any way around this? For example, is there a setting I can place in my JS files to tell Google Closure not to inline my local functions?
It would be cool to have some directives at the top of my javascript file such as:
// This is a JS comment...
// google.closure.compiler = [inlineLocalFunctions: false]
I'm developing a Grails app and using the Grails asset-pipeline plugin, which uses Google Closure Compiler (hereafter, Compiler). The plugin supports the different minification levels that Compiler supports via the Grails config grails.assets.minifyOptions. This allows for 'SIMPLE', 'ADVANCED', 'WHITESPACE_ONLY'.
AssetCompiler.groovy (asset-pipeline plugin) calls ClosureCompilerProcessor.process()
That eventually assigns SIMPLE_OPTIMIZATIONS to the CompilerOptions object. By doing so, CompilerOptions.inlineLocalFunctions = true as a byproduct (this is hard-coded behavior in Compiler). If I were to use WHITESPACE_ONLY, the result would be inlineLocalFunctions=false.
So by using Asset Pipeline's 'SIMPLE' setting, local functions are being inlined and that is causing me trouble. Example: ExtJS ext-all-debug.js which uses lots of local functions.
The SO post Is it possible to make Google Closure compiler *not* inline certain functions? provides some help. I can use its window['dontBlowMeAway'] = dontBlowMeAway trick to keep my functions from inlining. However, I have LOTS of functions and I'm not about to do this manually for each one; nor would I want to write a script to do it for me. Creating a JS model and trying to identify local functions doesn't sound safe, fun nor fast.
The previous SO post directs the reader to https://developers.google.com/closure/compiler/docs/api-tutorial3#removal, where the window['bla'] trick is explained, and it works.
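For reference, the export trick mentioned above looks roughly like this (dontBlowMeAway is the placeholder name from the linked post):
function dontBlowMeAway() {
    // ...
}
// Bracket-notation string keys are left alone by Closure Compiler, so this
// reference keeps the function from being renamed or inlined away.
window['dontBlowMeAway'] = dontBlowMeAway;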
Wow thanks for reading this long.
Help? :-)
UPDATE1:
Okay. While spending all this effort writing the question, I may have come up with a trick that could work. Grails uses Groovy, and Groovy makes method call interception easy using its MetaClass API.
I'm going to try intercepting the call to:
com.google.javascript.jscomp.Compiler.compile(
List<T1> externs, List<T2> inputs, CompilerOptions options)
My intercepting method will look like:
options.inlineLocalFunctions=false
// Then delegate call to the real compile() method
It's bed time so I'll have to try this later. Even so, it would be nice to solve this without a hack.
UPDATE2:
The response in a similar post (Is it possible to make Google Closure compiler *not* inline certain functions?) doesn't resolve my problem because of the large number of functions I don't want inlined. I've already explained this point.
Take the ExtJS file I cited above as an example of why the similar SO post doesn't resolve my problem. Look at the raw code for ext-all-debug.js. Find the byAttribute() function, then keep looking for the string "byAttribute" and you'll see that it is part of strings that are being defined. I am not familiar with this code, but I suppose that these string-based references to byAttribute are later passed to JS's eval() function for execution. Compiler does not alter these occurrences of byAttribute when they are part of a string. Once the function byAttribute is inlined, attempts to call it are no longer possible.
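A hedged illustration (not the actual ExtJS code) of the pattern described above:
function byAttribute(attr, value) {
    // ...
}
// the function name is embedded in a string that is evaluated later
var matcherSource = "byAttribute(attr, value)";
// If the compiler inlines or renames byAttribute, code built later from
// matcherSource (e.g. via eval or new Function) can no longer find it.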
UPDATE3: I attempted two strategies to resolve this problem and both proved unsuccessful. However, I successfully implemented a workaround. My failed attempts:
Use Groovy method interception (Meta Object Protocol, aka MOP) to intercept com.google.javascript.jscomp.Compiler.compile().
Fork the closure-compiler.jar (make my own custom copy) and modify com.google.javascript.jscomp.applySafeCompilationOptions() by setting options.setInlineFunctions(Reach.NONE); instead of LOCAL.
Method interception doesn't work because Compiler.compile() is a Java method invoked by a Groovy class marked as @CompileStatic. That means Groovy's MOP is not used when process() calls Google's Compiler.compile(). Even ClosureCompilerProcessor.translateMinifyOptions() (Groovy code) can't be intercepted because the class is @CompileStatic. The only method that can be intercepted is ClosureCompilerProcessor.process().
Forking Google's closure-compiler.jar was my last ugly resort. But just like @Chad said below, simply inserting options.setInlineFunctions(Reach.NONE) in the right place didn't bring my inlined JS function names back. I tried toggling other options such as setRemoveDeadCode=false to no avail. I realized what Chad said was right: I would end up flipping settings around and probably destroying how the minification works.
My solution: I pre-compressed ext-all-debug.js with UglifyJS and added them to my project. I could have named the files ext-all-debug.min.js to do it more cleanly but I didn't. Below are the settings I placed in my Grails Config.groovy:
grails.assets.minifyOptions = [
    optimizationLevel: 'SIMPLE' // WHITESPACE_ONLY, SIMPLE or ADVANCED
]
grails.assets.minifyOptions.excludes = [
    '**ext-all-debug.js',
    '**ext-theme-neptune.js'
]
Done. Problem solved.
Keywords: minify, minification, uglify, UglifyJS, UglifyJS2
In this case, you would either need to make a custom build of the compiler or use the Java API.
However - disabling inlining is not enough to make this safe. Renaming and dead code elimination will also cause problems. This violates core assumptions of the compiler. This local function is ONLY referenced from within strings.
This code is only safe for the WHITESPACE_ONLY mode of the compiler.
Use the Function constructor:
var fnc = new Function("param1", "param2", "alert(param1+param2);");
Closure will leave the string literals alone.
See https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Function
I have some legacy code on my hands which was written with/relies upon the following stack:
jquery 1.8.1
jquery lazyload 1.8.0
d3 v2
Before I change anything in the code, I figured I'd write tests for it, so I can make sure nothing breaks :).
I chose the Jasmine test framework because I'm familiar with RSpec.
I'm running into some issues, because the code I want to write tests for relies on jQuery to define some "constants", e.g.:
var WIDTH = $(document).width();
I guess there is no way around stubbing.
Should I include jquery in jasmine and try to spec the document?
Or not include jquery in jasmine and stub $?
I fear I might be going down the wrong direction and would much appreciate some guidance (code snippets much appreciated). Thanks for helping a noob out!
I would include jQuery and mock the functions that it calls. In your example, I would do:
spyOn($.fn, 'width').andReturn(300); //Return a value that you expect to be used
Jasmine spies have a calls property that is an array of all the calls made on the spy. One thing I have done is examine the calls entries to check the calling object. That object being a jQuery object, it has a selector property which you can expect to equal document:
expect($.fn.width.calls[0].object.selector).toEqual(document);
Though remember you are trying to test the expected behavior of the code, not that each step of the code is completed as it is written. Trying to test that certain lines exists will prevent you from easily refactoring.
This might sound a little dumb, but I'm actually a bit confused about how to approach JavaScript testing for web frontends. As far as I'm concerned, the typical 3-tier architecture looks like this:
Database tier
Application tier
Client tier
Tier 1 is of no concern in this question. Tier 2 contains all the program logic ("business logic"), and tier 3 is the frontend.
I do test-driven development for most projects, but only for the application logic, not the frontend. That is because testing the UI is difficult and unusual in TDD, and normally not done. Instead, all application logic is separated from UI, so that it is simple to test that logic.
The three tier architecture supports this: I can design my backend as a REST API which is called by my frontend. How does JS testing fit in? For the typical three-tier-architecture, JS (i.e. JS on the client) testing doesn't make much sense, does it?
Update:
I've changed the question's wording from "Testing JavaScript in web frontends" to "Test-driven development of JavaScript web frontends" to clarify my question.
Remember what the point of unit-testing is: to ensure a particular module of code reacts to some stimuli in an expected manner. In JS, a significant portion of your code, (unless you have some lifecycle framework like Sencha or YUI) will either be directly manipulating the DOM or making remote calls. To test these things, you simply apply traditional unit-testing techniques of dependency injection and mocking/stubbing. That means you must write each function, or class, that you want to unit-test to accept mocks of the dependent structures.
jQuery supports this by allowing you to pass an XML document into all traversal functions. Whereas you might normally write
$(function() { $('.bright').css('color','yellow'); });
you'll instead want to write
function processBright(scope) {
    // jQuery will do the following line automatically, but for sake of clarity:
    scope = scope || window.document;
    $('.bright', scope).css('color','yellow');
}
$(processBright);
Notice we not only pull the logic out of the anonymous function and give it a name, we also make that function accept a scope parameter. When that value is null, the jQuery calls will still function as normal. However, we now have a vector for injecting a mock document that we can inspect after the function is invoked. The unit-test could look like
function shouldSetColorYellowIfClassBright() {
    // arrange
    var testDoc =
        $('<html><body><span id="a" class="bright">test</span></body></html>');
    // act
    processBright(testDoc);
    // assert
    if (testDoc.find('#a').css('color') != 'yellow')
        throw new TestFailed("Color property was not changed correctly.");
}
TestFailed could look like this:
function TestFailed(message) {
    this.message = message;
    this.name = "TestFailed";
}
The situation is similar with remote calls, though rather than actually injecting some facility, you could get away with a masking stub. Say you have this function:
function makeRemoteCall(data, callback) {
    if (data.property == 'ok')
        $.getJSON('/someResource.json', callback);
}
You would test it as such:
// test suite setup
var getJSON = $.getJSON;
var stubCalls = [];
$.getJSON = function(url) {
    stubCalls[stubCalls.length] = url;
};
// unit test 1
function shouldMakeRemoteCallWithOkProperty() {
    // arrange
    stubCalls.length = 0; // reset recorded calls so tests stay independent
    var arg = { property: 'ok' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 1 || stubCalls[0] != '/someResource.json')
        throw new TestFailed("someResource.json was not requested once and only once.");
}
// unit test 2
function shouldNotMakeRemoteCallWithoutOkProperty() {
    // arrange
    stubCalls.length = 0; // reset recorded calls so tests stay independent
    var arg = { property: 'foobar' };
    // act
    makeRemoteCall(arg);
    // assert
    if (stubCalls.length != 0)
        throw new TestFailed(stubCalls[0] + " was called unexpectedly.");
}
// test suite teardown
$.getJSON = getJSON;
(You can wrap that whole thing in the module pattern to not litter the global namespace.)
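A hedged sketch of that module-pattern wrapping (just an IIFE around the suite above):
(function() {
    var getJSON = $.getJSON;
    var stubCalls = [];
    // ... stub setup, the unit tests and the teardown from above go here ...
})();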
To apply all of this in a test-driven manner, you would simply write these tests first. This is a straightforward, no frills, and most importantly, effective way of unit-testing JS.
Frameworks like qUnit can be used to drive your unit-tests, but that is only a small part of the problem. Your code must be written in a test-friendly way. Also, frameworks like Selenium, HtmlUnit, jsTestDriver or Watir/N are for integration testing, not for unit-testing per se. Lastly, by no means must your code be object-oriented. The principles of unit-testing are easily confused with the practical application of unit-testing in object-oriented systems. They are separate but compatible ideas.
Testing Styles
I should note that two different testing styles are demonstrated here. The first assumes complete ignorance of the implementation of processBright. It could be using jQuery to add the color style, or it could be doing native DOM manipulation. I'm merely testing that the external behavior of the function is as expected. In the second, I assume knowledge of an internal dependency of the function (namely $.getJSON), and those tests cover the correct interaction with that dependency.
The approach you take depends on your testing philosophy and on the overall priorities and cost-benefit profile of your situation. The first test is relatively pure. The second test is simple but relatively fragile; if I change the implementation of makeRemoteCall, the test will break. Preferably, the assumption that makeRemoteCall uses $.getJSON is at least justified by the documentation of makeRemoteCall. There are a couple of more disciplined approaches, but one cost-effective approach is to wrap dependencies in wrapper functions; see the sketch below. The codebase would then depend only on these wrappers, whose implementations can easily be replaced with test stubs at test time.
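A hedged sketch of the wrapper-function approach (remoteJson is an illustrative name, not an existing API):
// Production code calls this thin wrapper instead of $.getJSON directly.
function remoteJson(url, callback) {
    return $.getJSON(url, callback);
}

// At test time, only the wrapper needs to be replaced, e.g.:
// remoteJson = function(url, callback) { stubCalls.push(url); };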
There is a book titled Test-Driven JavaScript Development by Christian Johansen that might help you. I have only looked at some of the samples in the book (just downloaded a sample to Kindle the other day) but it looks like a great book that addresses this very issue. You might check it out.
(Note: I have no connection with Christian Johansen and no investment in sales of the book. Just looks like a good thing that addresses this problem.)
I have a similarly architected application with a JS client tier. In my case I use our company's own JS framework to implement the client tier.
This JS framework is written in an OOP style, so I can implement unit tests for its core classes and components. Also, to cover all user interactions (which can't be covered by unit testing), I am using Selenium WebDriver to do integration testing of the framework's visual components and test them under different browsers.
So, TDD can be applied to JavaScript development if the code under test is written in an OOP manner. Integration testing is also possible (and can be used to do some kind of TDD).
Have a look at QUnit, as well, for unit tests of JavaScript methods and functions.
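A minimal sketch of what a QUnit test looks like (the add function is a made-up example):
QUnit.test('add sums two numbers', function(assert) {
    assert.equal(add(2, 3), 5, 'add(2, 3) should be 5');
});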
You can test your application from a user perspective with tools such as Rational Functional Tester, the HP tools or other equivalent software.
These tools test the application as if a user were sitting in front of it, but in an automated fashion. This means that you can test all three tiers at the same time, and especially the JavaScript, which may be difficult to test otherwise. Functional testing like this may help find UI bugs and quirks in how the UI uses the data pushed out by your middle tier.
Unfortunately these tools are very expensive, so there may be other equivalents (and I'd be interested to know of such tools).
In our company we use jsTestDriver. It's a feature-rich environment for testing frontends.
Take a look at it.