Report all used JavaScript functions

I am working on a WYSIWYG animation editor for designing sliders / ad banners. It includes a lot of dependencies, which also means a lot of bloated code that is never used. I am hoping to run a report on the code that helps me identify the important things. I have a couple of promising starts, including a regex that will search through JavaScript for all functions and return each function by parts:
https://regex101.com/r/sXrHLI/1
Then some PHP that will sort it by size:
Sort preg_match_all by named group size
The thought is that by identifying large functions that aren't being used, we can remove them. My next step is to identify the function tree of what functions are invoked on document load, and then which are loaded and invoked on actions such as clicks / mouseovers and so on.
While I have this handy snippet that lists every function defined on the global window object, it isn't enough:
var functionArray;
$(document).ready(function () {
    var objs = [];
    for (var obj in window) {
        if (window.hasOwnProperty(obj) && typeof window[obj] === 'function') objs.push(obj);
    }
    functionArray = objs;
    console.log(functionArray); // names of all global functions
});
I am looking for a solution that I can script in PHP / shell to emulate a page load. Now here is where my knowledge of terminology fails me: am I looking for a "call stack"? Do I need a timeline, an interpreter, a framework, an engine, or a parser?
I next need to emulate a click / hover event on all elements, or on all elements that match something like these regexes:
(?|\$\(['"](\.\w*)["']|getElementsByClassName\('(\w*)'\))
(?|\$\(['"](\#\w*)["']|getElementsById\('(\w*)'\))
to find any events that trigger functions so I can make a master list of functions that need to be in the final code.
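For the event-emulation part, a minimal sketch of dispatching synthetic events from the console once the page has loaded (the '*' selector and the event list are assumptions; narrow them to the elements your regexes match):

Array.prototype.forEach.call(document.querySelectorAll('*'), function (el) {
    ['click', 'mouseover'].forEach(function (type) {
        // fires any attached handlers so a coverage tool can see them run
        el.dispatchEvent(new MouseEvent(type, { bubbles: true, cancelable: true }));
    });
});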

I was watching a talk from a Google developer and I thought of your post. The following link has more detail on the DevTools Coverage Profiler, but here is the high level.
Google DevTools ships a neat feature for generating reports on used and unused JS and CSS code, which is right along the lines of what you were trying to do. It's a slightly different medium and a bit harder to automate, but otherwise it contains, I believe, exactly what you were looking for.
Open DevTools, then open the ellipsis menu in the bottom left corner (see image 1) and click the record button. Go through the steps you want to capture. You'll get an interactive screen in which you can go through all the code and see what was used (green) and what was not (red) (see image 2).
Image 1 - Ellipsis drop-down to get to the coverage tool
Image 2 - Full screenshot of the interactive report for this StackOverflow page while editing this post.
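If you want to automate the same coverage collection rather than clicking through DevTools, a minimal sketch using Puppeteer's coverage API (assumes Node.js with the puppeteer package installed; the URL is a placeholder):

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.coverage.startJSCoverage();
    await page.goto('https://example.com'); // placeholder: your editor's URL
    // drive clicks / hovers here before stopping coverage
    const coverage = await page.coverage.stopJSCoverage();
    for (const entry of coverage) {
        const used = entry.ranges.reduce((sum, r) => sum + r.end - r.start, 0);
        console.log(entry.url, (100 * used / entry.text.length).toFixed(1) + '% used');
    }
    await browser.close();
})();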

I'd suggest you take a look at this tool:
Istanbul
With it you can do the following:
create an instrumented version of your code
deploy it on the server and run your manual tests (coverage information is collected in a global variable inside the browser; see the snippet after this list)
copy the coverage information into a file (it's lcov data, as far as I remember)
generate a code coverage report from it
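For the copy step, a sketch of grabbing the data from the browser console (Istanbul-instrumented code keeps its counters in a global named __coverage__ by default; copy() is a Chrome DevTools console helper):

// run in the browser console after exercising the page:
copy(JSON.stringify(window.__coverage__)); // then paste the clipboard into coverage.json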
If you feel like going further, you can actually use something like cucumber-jvm with Selenium to automate the UI tests. You will need to dump the coverage data every time you reload the page, however. Then you will have to combine the coverage from different runs and use the combined lcov data to generate the overall report.
The whole process is a bit heavy, but it is way better than reinventing it.
Even more, you can combine this data with unit test coverage information to produce joint reports.
As a step further, you may want to set up a Sonar server so that you can store multiple versions of the coverage reports and compare differences between runs.

Related

ESLint: Environment-specific rules [duplicate]

I am collaborating on a git-sourced, maven-managed Java project whose collaborators have differing code-styling preferences and use multiple IDEs (note 1).
Is there a tool or IDE configuration that will allow code to be viewed and edited using style-1, but committed to SCM using style-2?
My research points me to 'no', but a solution combining git hooks and Checkstyle/jrefactory might be possible.
So if 'no' to the above, is there a tool/process that will perform the TBD process actions below?
The checkout process flow for User1 would be:
git pull
TBD process formats code to User1 style-1
User1 works in their preferred IDE with style-1 settings
The commit workflow for User1 would be:
User1 is ready to commit/push code
TBD process formats code to standard format style-standard
git push
Note 1: multiple IDEs = Eclipse, IntelliJ, NetBeans.
Note 2: My question differs from this question in that I'd like to focus on an IDE-related solution, since forcing the minority of standards-divergent users to conform is probably a more efficient solution.
Note 3: Acknowledging that this shouldn't be done, for best-practices reasons. However, if you grant that it's time to expect more flexibility from our IDEs and SCMs, this question is intended to explore those solutions.
First of all, you really shouldn't do that. Codestyle wars are bad for any project, and it is best to decide upon one codestyle that everybody must use. It is simple to configure IDEs to automatically apply the specified codestyle on every file save, so the developers don't have to write code in the target codestyle themselves; they can let the IDE do that for them. True, this doesn't solve the fact that they'll have to read code in a codestyle they don't yet like, but it's a lot safer than having invisible automatic code changes, which are a major source of bugs.
Maybe you can use Eclipse's code formatter from the command line to apply a different codestyle. You'd have to set up git hooks, make sure everybody has Eclipse available, and provide the proper configuration files for their preferred codestyle. You'd need hooks both for post-checkout and pre-commit, one to set up the user's codestyle, the other to commit in the central codestyle. To go one step further, you can play with the index to add the formatted code so that it doesn't include style differences in git diff (although they will show up in git diff --staged).
Again, you shouldn't do that.
I agree with Sergiu Dumitriu that this is not a very good idea. But git still provides exactly what you are looking for, even though it will only work if your central coding style is very well defined and strictly followed. Here's how it works:
Git provides smudge/clean filters. They allow you to pass all code through a so-called “smudge” filter on checkout and reverse that with a “clean” filter when code is added to the staging area. These filters are set in .gitattributes, and there is a repository-local version of that file available in .git/info/attributes.
So you set your smudge filter to a tool that will change the code to your personal coding style on checkout:
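For example (format-to-my-style is a placeholder for whatever command applies your personal style; a filter command reads file content on stdin and writes the result to stdout):

# .git/info/attributes (or .gitattributes)
*.java filter=codestyle

git config filter.codestyle.smudge 'format-to-my-style'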
And your clean filter will convert the code back to the central coding style on checkin (more precisely: when files are staged):
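For example (format-to-central-style again being a placeholder command):

git config filter.codestyle.clean 'format-to-central-style'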
It is very important that smudge -> clean is a no-op and regenerates the original file exactly. Otherwise you will check in formatting changes every time you touch a file.
Using smudge and clean filters will retain all the functionality of git (including git diff etc.). You can find the full documentation in git help attributes.

Protractor Accessibility reporting

I am trying to use the accessibility plugin that comes with Protractor. From what I can see, it only checks the a11y of the last page I end up on.
Is there a way to have two test scripts executed one after the other and produce separate reports, or put everything in one report but separated?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still only get results for the last script executed.
I tried also creating suites:
suites: {
    homepage: '../test/homepage/access.js',
    catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file after both scripts execute, the first one reports no issues and only the second script's errors appear. However, if I run the first script alone, Protractor does report errors.
I also tried writing everything in one js file as different scenarios, but I hit the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process (see the sketch below). See the referenceConf for more details on sharding.
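A minimal sketch of the sharding approach (the spec glob is from your question; the capability values are assumptions):

// conf.js
exports.config = {
    specs: ['../test/access*.js'],
    capabilities: {
        browserName: 'chrome',
        shardTestFiles: true, // run each spec file in its own process
        maxInstances: 1       // keep the runs sequential
    }
};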
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder, and then wherever you need to run a test:
AxeBuilder(browser.driver)
    .analyze(function (results) {
        expect(results.violations.length).toBe(0);
    });
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case, where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum; it can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
    const accessibilityConcerns = continuum.getAccessibilityConcerns();
    // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever you like in your tests. That includes multiple times within the same test, if desired, which, if I understand correctly, is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.

How to optimize "Parse HTML" events?

While profiling my webapp I noticed that my server is lighting fast and Chrome seems to be the bottleneck. I fired up Chrome's "timeline" developer tool and got the following numbers:
Total time: 523ms
Scripting: 369ms (70%)
I also ran a few console.log(performance.now()) calls from the main JavaScript file, and the load time is actually closer to 700 ms. This is pretty shocking for what I am rendering (an empty table and 2 buttons).
I continued my investigation by drilling into "Scripting":
Evaluating jQuery-min.js: 33ms
Evaluating jQuery-UI-min.js: 50ms
Evaluating raphael-min.js: 29ms
Evaluating content.js: 41ms
Evaluating jQuery.js: 12ms
Evaluating content.js: 19ms
GC Event: 63 ms
(I didn't list the smaller scripts, but they accounted for the remaining time.) I don't know what to make of this.
Are these numbers normal?
Where do I go from here? Are there other tools I should be running?
How do I optimize Parse HTML events?
For all the cynicism this question received, I am amused to discover they were all wrong.
I found Chrome's profiler output hard to interpret, so I turned to console.log(performance.now()). This led me to discover that the page was taking 1400 ms to load the JavaScript files, before a single line of my code was even invoked!
This didn't make much sense, so I revisited Chrome's JavaScript profiler tool. The default sorting order, Heavy (Bottom Up), didn't reveal anything meaningful, so I switched over to Chart mode. This revealed that many browser plugins were being loaded, and that they were taking much longer to run than I had anticipated. So I disabled all plugins and reloaded the page. Guess what? The load time went down to 147 ms.
That's right: browser plugins were responsible for 90% of the load time!
So to conclude:
jQuery is substantially slower than the native APIs, but this might be irrelevant in the grand scheme of things. This is why good programmers use profilers to find bottlenecks instead of optimizing blindly. Don't trust people's subjective bias or a "gut feeling". Had I followed people's advice to optimize away jQuery, it wouldn't have made a noticeable difference (I would have saved 100 ms).
The Timeline tool doesn't report the correct total time. Skip the pretty graphs and use the following tools...
Start simple. Use console.log(performance.now()) to verify basic assumptions (see the snippet after this list).
Chrome's JavaScript profiler
Chart will give you a chronological overview of the JavaScript execution.
Tree (Top Down) will allow you to drill into methods, one level at a time.
Turn off all browser plugins, restart the browser, and try again. You'd be surprised how much overhead some plugins contribute!
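The kind of sanity check I mean with performance.now() (a trivial sketch; the labels are arbitrary):

console.log('head parsed:', performance.now());
// ... and at the very end of the main script:
console.log('main script done:', performance.now());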
I hope this helps others.
PS: There is a nice article at http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/ which helps if you want to replace jQuery with the native APIs.
I think Parse HTML events happen every time you modify the inner HTML of an element, e.g.
$("#someiD").html(text);
A common style is to repeatedly append elements:
$.each(something, function() {
    $("#someTable").append("<tr>...</tr>");
});
This will parse the HTML for each row that's added. You can optimize this with:
var tablebody = '';
$.each(something, function() {
    tablebody += "<tr>...</tr>";
});
$("#someTable").html(tablebody);
Now it parses the entire thing at once, instead of repeatedly parsing it.
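For comparison, a sketch of the same build-once, parse-once idea using the native DOM API instead of jQuery (assuming something is an array; note that jQuery's .html() also papers over old IE quirks with table innerHTML that plain assignment does not):

var tablebody = '';
something.forEach(function (item) {
    tablebody += '<tr>...</tr>';
});
// one parse for the whole batch of rows
document.getElementById('someTable').innerHTML = tablebody;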

Tracking JavaScript found in page source

I've tried everything I could to figure this out, but I cannot track down a piece of JavaScript in a webpage.
So, just to give you some context, even though my problem is not related to just this scenario; it applies to a much bigger spectrum.
Anyway, I'm developing on SugarCRM and I'm trying to edit the default onclick behavior of a slot in the calendar module (you don't need to understand this to help me, so please keep reading). When I click on a slot, a modal dialog window opens that lets me log a meeting or a call.
So I tracked down the JavaScript behind this. I've used Firebug and Chrome, and they both give a list of all the JS files that are being used on a given webpage.
For example, if I search for "SUGAR.collection", Firebug tells me it's located in a file named "sugar_field_grp.js?v=FVh1Z-v5nA6bYov7-aFFqQ", so I can see this piece of code resides in sugar_field_grp.js.
But the code I'm trying to change resides in "index.php?module=Calendar&action=index&parentTab=Activities"; Firebug actually tells me this is the file that has the JavaScript I want to change.
I can also right-click, view page source, and see that piece of code inside a script tag. So, considering this piece of code doesn't reside in a JS file, I cannot change it; it's generated at runtime (I think). But there must be some source, some file that's telling SugarCRM to generate this code.
tl;dr: How do I track down a piece of JavaScript code that resides in the page source when there's no JS file specified by Firebug or Chrome, save for index.php (and that file doesn't contain the JavaScript either)?
I know it's been a long post.
Thanks for reading.
Learn how to search for strings in files on disk on your machine.
On Linux, MacOS and most unixen the go-to tool for this is grep. This applies to any programming language you work with. For your case simply cd into the directory of your source code and do:
grep -r SUGAR.collection .
If you're using git as your source control tool then git grep is much faster.
On Windows there are various GUI tools you can use to search for text in files. Just google: grep for windows.
If you're using an IDE, just use your IDE's find-in-files functionality.
To track down specific code using Chrome / WebKit, go through the following steps:
Client:
1. Search all static text sources
Open the Dev Panel using CTRL + SHIFT + I
Hit CTRL + SHIFT + F for a global search dialog to pop up
Right next to it you can turn on pretty-printing of the JS code via the { } button
Enter your search term or terms using regular expressions
Optional: Decide whether you need a case-insensitive search, which has a larger search space and takes longer
2. Search the dynamic user-DOM contents
Go to the 'Elements' tab and hit CTRL + F.
Enter your search term (this will also search iframes, SVGs, etc. within the parent DOM)
3. Recommended:
Cross-reference the results of step 1. and step 2.
If a given string is present in both the DOM and the static sources, then you can assume that the content is not programmatically created on the client-side.
Server:
Many projects perform a media-bundling step prior to content delivery. They pack web resources into the main file (e.g. index.php) to save HTTP round trips.
Use sourcemaps and/or search the entire codebase for a salient static string, or a salient keyword near the static string, to locate the original source files.
Searching files:
Locally, I generally use the rapid indexing and heuristic search of JetBrains' IDEs (IDEA, PhpStorm, ...) and Sublime. The grep command-line tool definitely cannot compete here in terms of performance. On Windows I additionally use Total Commander and its archive/regex finding abilities.
When quickly looking up code on the server you may use something like:
grep -rE -C10 --color=always 'keyword1|keyword2' htdocs/ | less -R
which will also provide you with line context. Two caveats: you may want to filter out binaries first (grep's -I flag skips them), and symlinks outside the scope will be ignored.

What software/webapp can I use to edit HTML pages?

OK, my question's not as broad as it seems; to summarize 8 months' effort on my part:
I create chunks of re-usable, extensible XHTML which degrade gracefully and are all kinds of awesome. Most of my code chunks have a JavaScript interaction layer and are styled with CSS. I set to work pulling my code chunks into Dreamweaver as 'Snippets', but they're unintelligent chunks of text. Also, once inserted, my beautiful code chunks get mangled by the non-techies who are the ones actually using Dreamweaver.
Also, because they're unintelligent snippets, I have a line of JavaScript which configures the code chunks when initialised - see this post for further detail on my approach. But currently I have to replicate a single code chunk as many times as there are configuration options (so each 'snippet' may differ from another of the same type by only ONE config value). It works, but it's lame: time-consuming for me to re-deploy a bunch of snippets, and hard for my team to remember all the variations.
So I have a series of requirements which, to my mind, are the most likely things any system holding my chunks will need to solve:
The inserted code is not modified at insertion time, by the system
The code to be inserted needs to allow config options
I'd be overjoyed if, once inserted, the only editable parts are text nodes
Copying and pasting these whole objects
A clean interface from which to choose from my range of code chunks
It's a serious list of requirements, I presume. Much searching led me to Kompoze and its 'Smart widgets': according to a random post from 2004, XUL files can be created and extensions can be made, which sounds vaguely like what I want. The text editor itself was less prone to destruction when compared to Dreamweaver.
So yeah, I've chased too many rabbits on this one; I'm keen for a solution, whether software + extension or webapp.
EDIT:
Btw, it did occur to me to investigate a highly customised TinyMCE instance, but I don't know how feasible that is, and unless there's some sweet backend available, I'm stuck with local editing of files for now - not even on a web server...
To my mind the best answer to this question will solve most of the above, and provide some general workflow advice alongside the suggestion(s).
I would go with a solution based around the excellent markItUp! editor. It's very simple to extend it to cope with the requirements you have. You can add sophisticated logic, and it's nice and shiny.
I'd probably combine it with Jeditable for the inline node editing, and build the whole thing on top of Django, for ease and convenience. Completely customisable, surprisingly easy to work with, portable and cross-platform, and easy to set-up for off-line use. Oh, and all free and open-source.
What do you think of this approach:
<div class="thing">
    <elements... />
    <script type="text/javascript">
        document.write('<span id="thing' + thingNo + '"></span>');
        new Thing().init({ id: 'thing' + thingNo });
        thingNo += 1;
    </script>
</div>
Of course, you'll have to change Thing().init so that it initializes the parent (instead of the current) node.
Have you considered server-side includes where the directive is either a generated page or a shell command? E.g.:
<!--#include virtual="./activePage.aspx?withParam1=something&param2=somethingelse" -->
or
<!--#exec cmd="shellCommand -withParams" -->
You can reuse the same page or command, and provide parameters specific to each usage in each XHTML page.
