We inherited a fairly large JavaScript application and test suite, and have recently started to have issues with memory usage during testing.
Whilst we attempt to fix the issues our test suite has, we'd like to stem the flow of new leaks into the application. Are there any tools that we can integrate with our CI build to get memory profiling? Even some basic memory allocation statistics would help us see whether a suite is eating through memory.
We're running Jasmine with PhantomJS. The closest thing I've been able to find is Chrome's window.performance.memory, but it only reports figures for the whole of Chrome and seems like it might be quite volatile.
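For reference, this is roughly the kind of per-suite check I was experimenting with. It's only a sketch: it assumes Jasmine 2.x reporter hooks and Chrome's non-standard performance.memory fields, and it does nothing under PhantomJS:

    // Rough sketch: log Chrome's non-standard performance.memory after each suite.
    // Assumes Jasmine 2.x reporter hooks; PhantomJS does not expose performance.memory.
    jasmine.getEnv().addReporter({
      suiteDone: function (result) {
        if (window.performance && window.performance.memory) {
          var mem = window.performance.memory;
          console.log(result.fullName + ':',
            'usedJSHeapSize=' + mem.usedJSHeapSize,
            'totalJSHeapSize=' + mem.totalJSHeapSize,
            'jsHeapSizeLimit=' + mem.jsHeapSizeLimit);
        }
      }
    });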
I am not aware of any third-party tools that provide automated memory statistics for JavaScript in a CI environment. Take a look at Google's memory profiling post: https://developer.chrome.com/devtools/docs/javascript-memory-profiling
Related
Is there a way to write integration tests for WebExtension-based browser add-ons?
In addition to unit tests, I would like to write an integration test that fully loads an extension, performs some tests, and finally unloads it.
My own research:
I assume it is possible with Selenium, but in my experience Selenium can lead to flaky tests that are hard to maintain. I wonder if there is a lighter alternative. It could also be that Selenium is the tool of choice; I have to admit that I don't have much experience with testing browser extensions.
For limited use cases, I have used mock-browser. But as far as I understand it, it is not possible to simulate loading and unloading extensions with it.
Example:
To get an idea of what kind of tests I would like to automate, here is a small example of a manual test that we have:
Start a browser with the extension. If the extension loads correctly, it will start increasing a counter periodically.
(Manually) check whether the counter increases. If the counter increases, the test passes.
If the test environment supports loading an extension, this manual test could easily be automated; the problem is just setting up an environment that allows the extension to be loaded. Currently, we run our unit tests with Node, using Mocha as the test framework. A rough sketch of what I imagine such an automated test could look like follows.
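This sketch uses selenium-webdriver and Mocha; the extension path, the page URL, and the element the counter is read from are assumptions for illustration, not working code from our project:

    // Sketch: load a packed extension into Chrome and check that its counter increases.
    // Assumes the extension renders its counter into an element with a known id.
    const { Builder, By } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');
    const assert = require('assert');

    describe('extension smoke test', function () {
      this.timeout(30000);
      let driver;

      before(async function () {
        const options = new chrome.Options()
          .addExtensions('./build/my-extension.crx'); // hypothetical path to the packed extension
        driver = await new Builder()
          .forBrowser('chrome')
          .setChromeOptions(options)
          .build();
      });

      after(async function () {
        await driver.quit();
      });

      it('keeps increasing the counter after loading', async function () {
        await driver.get('https://example.com/'); // page the extension injects its counter into (assumption)
        const read = () => driver.findElement(By.id('my-extension-counter')).getText();
        const first = Number(await read());
        await driver.sleep(2000);
        const second = Number(await read());
        assert(second > first, 'counter should have increased while the extension was loaded');
      });
    });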
Is there a way to have JavaScript code that would run as an automated test and measure a web app's memory consumption?
What I am looking for is a way to prevent memory leaks in an Angular app by having automated tests, as part of the CI build process, inform me about memory issues as soon as they arise. I already have many JavaScript tests running in PhantomJS via Jasmine.
I would get that information from the operating system by grepping ps aux for the phantom process.
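A minimal sketch of that idea, run from Node alongside the build; it assumes a Unix-like system where ps aux prints RSS, in kilobytes, in the sixth column:

    // Sketch: sum the resident memory (RSS) of any running phantomjs processes via ps.
    const { execSync } = require('child_process');

    function phantomRssKb() {
      return execSync('ps aux').toString()
        .split('\n')
        .filter(line => line.indexOf('phantomjs') !== -1)
        .map(line => Number(line.trim().split(/\s+/)[5])) // RSS column of `ps aux`
        .reduce((total, kb) => total + kb, 0);
    }

    console.log('phantomjs RSS: ' + phantomRssKb() + ' KB');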
We are using PhantomJS to run our QUnit tests page on our TFS build server. Our version of the test runner is built from the example below:
https://github.com/ariya/phantomjs/blob/master/examples/run-qunit.js
Over time the number of tests increased from hundreds to a couple of thousand, and one fine day PhantomJS started crashing. It literally dies asking us to upload the dump, and when you look at the dump it is 0 KB!
When we took a closer look in Process Explorer, we found that PhantomJS's memory consumption keeps climbing while it runs the tests, and it eventually crashes at around 833 MB.
Yes, the same amount of memory was being used by Chrome and IE, and yes, our tests were leaking memory :(. We fixed that; memory utilization dropped by 50% in Chrome and IE, and we expected PhantomJS would cope now. But no, PhantomJS still kept crashing, and Process Explorer showed the same memory consumption.
http://phantomjs.org/api/webpage/method/close.html
According to the documentation above, PhantomJS only releases the heap allocation on close? Could that be the reason why our fixed tests consume less memory in Chrome but not in PhantomJS? And lastly, how do we fix this? How do we make PhantomJS keep garbage-collecting JavaScript objects to reduce heap allocation?
Update 1 - 07/28
We took a workaround. I modified my script to execute the tests module by module. In a loop, after executing all tests for a module, I call page.close so the memory is released for each module and a dead heap of objects never builds up. I'm not closing this question, since this is a workaround and not a solution. I hope the maintainers will fix this sometime.
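A stripped-down sketch of that workaround; the module URLs and the way completion is detected are placeholders, and the real runner is derived from run-qunit.js:

    // Sketch: run each test module in its own page, then close it so that page's
    // heap can be released before the next module starts.
    var webpage = require('webpage');
    var modules = ['tests-module-a.html', 'tests-module-b.html']; // placeholder list of module pages

    function runModule(index) {
      if (index >= modules.length) {
        phantom.exit(0);
        return;
      }
      var page = webpage.create();
      page.open(modules[index], function () {
        var poll = setInterval(function () {
          var done = page.evaluate(function () {
            var el = document.getElementById('qunit-testresult');
            return el && /completed/i.test(el.innerText);
          });
          if (done) {
            clearInterval(poll);
            page.close(); // release this module's memory before moving on
            runModule(index + 1);
          }
        }, 100);
      });
    }

    runModule(0);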
There is a static method, QWebPageSettings::clearMemoryCache, that invokes WebKit's garbage collection. However, it clears the memory cache of every instantiated QWebPage object and is therefore currently unsuitable for inclusion as an option in PhantomJS.
The Github pull request is available here:
https://github.com/ariya/phantomjs/pull/11511
Here's the Google Groups discussion:
https://groups.google.com/forum/#!msg/phantomjs/wIDp9J7B-bE/v5U31_mTbswJ
In the meantime, you might break up your unit tests into blocks on separate pages. A real fix will take a change to QtWebKit's implementation and how memory/cache is handled across QWebPage objects.
Update September 2014:
https://github.com/ariya/phantomjs/commit/5768b705a0
It looks like support for clearing the memory cache was added, but there is a note about my original comment in the commit.
I managed to work around it by setting the /LARGEADDRESSAWARE flag.
If you have Visual Studio installed, run this from a Visual Studio command prompt:
editbin /LARGEADDRESSAWARE <pathto>/PhantomJS.exe
If I am already unit-testing my JavaScript locally during development, before pushing changes up to a Git repo, are there any compelling reasons to do unit testing on a staging server before pushing the changes over to the live server?
It seems redundant.
The answer depends on how your tests are put together.
If they are pure unit tests (i.e. each test is exercising a single, isolated unit of code, with all dependencies mocked out) then there is little benefit to doing this, as the execution environment for each test should be identical both on your local development machine and on your staging server.
With proper unit tests, the only situations I can think of where you would catch issues on the staging server that were not found on your development machine are where a different operating system or JavaScript interpreter causes differences in behavior (though these types of issues should be quite rare). If you did find other reasons for unit tests to behave differently in the two environments (for example, as @Thilo mentions, because you have dirty code on your development machine, or because you depend on libraries that are on your development machine but not your staging server), then that indicates something is wrong with your software development process, which you need to address to make sure you are setting up the environment your software runs in reliably.
However, if by unit tests you are talking about higher-level automated tests (e.g. system tests that run through the browser) - a distinction some people fail to make, as they (incorrectly) refer to all automated tests as unit tests - then there is likely some benefit to running these on the staging server. Development and production setups will often use different technologies and/or configurations for web servers and database servers, which can lead to differences in behavior that can only be picked up by testing on your staging server.
One final note: you should make sure that you do some form of high-level testing before pushing your changes live to production, as unit tests alone will not catch all of your problems. Ideally this would be a complete set of automated system-level acceptance tests that cover all of the features of your software and exercise the whole software stack in an environment that matches production. At a minimum, though, someone should manually execute a set of tests across your key features on a staging server before your changes go live.
Probably no benefit for just unit-testing (but if they are automated, then there is no real cost, either, and maybe they do catch something, such as inconsistent/incomplete deployments).
Also, the staging server is guaranteed to not contain any "dirty" code that your development machine might have (something you forgot to commit, some "unrelated" files, etc).
But there are other types of (more integrated) tests that you might want to do on the staging server.
Staging servers are beneficial if you have multiple testers looking at your code. Pushing a branch will give those testers a good idea of what they are looking at. This allows each tester to make sure the code is functioning as intended and get a glimpse at it before it hits the masses.
Sometimes the best way to test your code is to allow a bunch of people to try and break it. These people being the ones that are on your side.
And like @Jeff Ward said, it will not always mirror your machine. I have always learned that the more you test, the less can go wrong.
Running all unit tests and integration tests on your staging server is a great way to ensure that the last check-in did not break the code. Of course, programmers should not commit code that has not been tested properly. However, we all sometimes forget that the code we write can affect other code, and thus forget to run all the tests.
Let's say you log certain things on your nodejs app or on a browser.
How much does this affect performance / CPU usage vs removing all these logs in production?
I'm not asking out of mere curiosity; I want to know how much "faster" things would run without logging so I can take that into account when developing.
It can cost a lot, especially if your application is heavily based on a loop, like a game or a GUI app that updates in real time.
Once I developed an educational physics app using <canvas>, and with logging enabled within the main application loop the frame rate easily dropped from 60fps to 28fps! That was quite catastrophic for the user experience.
The overall tip for browser applications: do not use console.log() in production for loop-based applications, especially ones that need to update a graphical interface within the loop.
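One common mitigation, sketched below, is to route logging through a small wrapper that becomes a no-op in production. The DEBUG flag here is just an illustration; a build step that strips console.log calls achieves the same thing:

    // Sketch: a logging wrapper that is disabled in production builds.
    var DEBUG = false; // flip to true during development, or drive it from your build config

    var log = DEBUG
      ? function () { console.log.apply(console, arguments); }
      : function () { /* no-op in production */ };

    function frame() {
      // ...update the simulation and redraw the <canvas>...
      log('frame rendered'); // costs almost nothing when DEBUG is false
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);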
For Node: is node.js' console.log asynchronous?
I imagine it's implemented similarly in some of the browsers.
I'm not familiar with Node.js; however, it's typically not a good idea to log anything except critical errors in a production environment. Unless Node.js offers a built-in logging utility like log4j, you should look at something like log4js (I haven't used it, it's just the first Google result).
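For completeness, a minimal sketch of what that could look like with log4js; this only goes by the library's basic documented API, and the level shown is just an example:

    // Sketch: raise the log level so only errors are emitted in production.
    var log4js = require('log4js');
    var logger = log4js.getLogger('app');
    logger.level = 'error'; // calls below this level are skipped

    logger.debug('cheap diagnostic detail'); // suppressed at the 'error' level
    logger.error('something actually broke'); // still logged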