How to optimize "Parse HTML" events? - javascript

While profiling my webapp I noticed that my server is lightning fast and Chrome seems to be the bottleneck. I fired up Chrome's "timeline" developer tool and got the following numbers:
Total time: 523ms
Scripting: 369ms (70%)
I also ran a few console.log(performance.now()) from the main Javascript file and the load time is actually closer to 700ms. This is pretty shocking for what I am rendering (an empty table and 2 buttons).
I continued my investigation by drilling into "Scripting":
Evaluating jQuery-min.js: 33ms
Evaluating jQuery-UI-min.js: 50ms
Evaluating raphael-min.js: 29ms
Evaluating content.js: 41ms
Evaluating jQuery.js: 12ms
Evaluating content.js: 19ms
GC Event: 63 ms
(I didn't list the smaller scripts, but they accounted for the remaining time.) I don't know what to make of this.
Are these numbers normal?
Where do I go from here? Are there other tools I should be running?
How do I optimize Parse HTML events?

For all the cynicism this question received, I am amused to discover that the cynics were all wrong.
I found Chrome's profiler output hard to interpret, so I turned to console.log(performance.now()). This led me to discover that the page was taking 1400 ms to load the JavaScript files, before a single line of my code was even invoked!
This didn't make much sense, so I revisited Chrome's JavaScript profiler tool. The default sorting order, Heavy (Bottom Up), didn't reveal anything meaningful, so I switched over to Chart mode. This revealed that many browser plugins were being loaded, and they were taking much longer to run than I had anticipated. So I disabled all plugins and reloaded the page. Guess what? The load time went down to 147ms.
That's right: browser plugins were responsible for 90% of the load time!
So to conclude:
jQuery is substantially slower than native APIs, but this might be irrelevant in the grand scheme of things. This is why good programmers use profilers to find bottlenecks, as opposed to optimizing blindly. Don't trust people's subjective bias or a "gut feeling". Had I followed people's advice to optimize away jQuery, it wouldn't have made a noticeable difference (I would have saved 100ms).
The timeline tool doesn't report the correct total time. Skip the pretty graphs and use the following tools...
Start simple. Use console.log(performance.now()) to verify basic assumptions (see the sketch after this list).
Chrome's Javascript profiler
Chart will give you a chronological overview of the Javascript execution.
Tree (Top Down) will allow you to drill into methods, one level at a time.
Turn off all browser plugins, restart the browser, and try again. You'd be surprised how much overhead some plugins contribute!
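For example, a minimal sketch of the performance.now() approach (the milestones shown are just examples):

var tStart = performance.now();

// Log how long it takes until the DOM is ready...
$(document).ready(function() {
    console.log('DOM ready after ' + (performance.now() - tStart).toFixed(1) + ' ms');
});

// ...and until the page has fully loaded (images, scripts, etc.).
window.addEventListener('load', function() {
    console.log('Fully loaded after ' + (performance.now() - tStart).toFixed(1) + ' ms');
});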
I hope this helps others.
PS: There is a nice article at http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/ which helps if you want to replace jQuery with the native APIs.

I think Parse HTML events happen every time you modify the inner HTML of an element, e.g.
$("#someiD").html(text);
A common style is to repeatedly append elements:
$.each(something, function() {
    $("#someTable").append("<tr>...</tr>");
});
This will parse the HTML for each row that's added. You can optimize this with:
var tablebody = '';
$.each(something, function() {
    tablebody += "<tr>...</tr>";
});
$("#someTable").html(tablebody);
Now it parses the entire thing at once, instead of repeatedly parsing it.
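If you want to sidestep HTML parsing entirely, here is a rough sketch of the same idea using the native DOM API (assuming the table has a tbody and each item is plain cell text):

var fragment = document.createDocumentFragment();
$.each(something, function(i, item) {
    var row = document.createElement('tr');
    var cell = document.createElement('td');
    cell.textContent = item; // no HTML parsing involved
    row.appendChild(cell);
    fragment.appendChild(row);
});
// A single DOM insertion at the end.
document.querySelector('#someTable tbody').appendChild(fragment);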


Why is Google using the term "Render-Blocking JavaScript"?

See:
https://developers.google.com/speed/docs/insights/BlockingJS
Google is talking there about "Render-Blocking JavaScript", but in my opinion that term is incorrect, confusing and misleading. It almost looks as if Google does not understand it either.
The point is that JavaScript execution always pauses / blocks rendering AND always pauses / blocks the "HTML parser" (at least in Chrome and Firefox). It even blocks in the case of an external js file combined with an async script tag!
So talking about removing "render-blocking JavaScript", for example by using async, implies that there is also non-blocking JavaScript, or that "async JavaScript execution" does not block rendering, but that's not true!
The correct term would be "Render-Blocking Download(s)". With async you avoid exactly that: downloading the js file will not pause / block rendering. But the execution will still block rendering.
One more example which suggests that Google does not "understand" it:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Test</title>
</head>
<body>
    Some HTML line and this is above the fold
    <script>
        // Synchronous delay of 5 seconds
        var timeWhile = new Date().getTime();
        while (new Date().getTime() - timeWhile < 5000);
    </script>
</body>
</html>
I tested it in Firefox and Chrome, and both show (render) "Some HTML line and this is above the fold" after 5 seconds, not within 5 seconds! It looks like Google thinks that in a case like this the JavaScript will not block rendering, but as expected from theory, it does block. Before the js execution starts, all the HTML is already in the DOM (except the closing body / html tags), but rendering has not been done yet and is paused. So if Google were really aware of this, Chrome would first finish rendering before starting the execution of the JavaScript.
If you take the example above and you're using:
<script src="delay.js" async></script>
or
<script src="delay.js"></script>
instead of inline JavaScript, then it can give the same result as the example above. For example:
If the preloader (which scans ahead for files to download) has already downloaded "delay.js" before the "HTML parser" reaches the script tag.
Usually external files from Google, Facebook et cetera are already stored in the cache, so there is no download and the browser just takes the file from cache.
In cases like that (and also with async) the result will be the same as in the example above, at least in a lot of cases, because if there is no extra download time, the "JavaScript execution" can already start before the preceding HTML has finished rendering.
So in a case like that you could even consider putting "no-cache" / "no-store" on delay.js (or adding extra delay), to make your page render faster. By forcing a download (or extra delay) you give the browser some extra time to finish rendering the preceding HTML before executing the render-blocking JavaScript.
So I really don't understand why Google (and others) use the term "Render-Blocking JavaScript", when both theory and "real life" examples suggest it is the wrong term and the wrong way of thinking. I see no one talking about this on the internet, so I don't understand. It seems weird to be the only one with the thoughts above.
I work with developers on Chrome, Firefox, Safari and Edge, and I can assure you that the people working on these aspects of the browser understand the difference between async, defer, and neither. You might find others will react more politely to your questions if you ask them politely.
Here's an image from the HTML spec on script loading and execution:
This shows that the blocking happens during fetch if a classic script has neither async nor defer. It also shows that execution will always block parsing, or certainly the observable effects of parsing. This is because the DOM and JS run on the same thread.
I tested it in Firefox and Chrome, and both show (render) "Some HTML line and this is above the fold" after 5 seconds, not within 5 seconds!
Browsers may render the line above, but nothing below. Whether the above line renders depends on the timing of the event loop relative to the screen refresh.
It looks like Google thinks that in a case like this the JavaScript will not block rendering
I'm struggling to find a reference to this. You linked to my article in an email you sent me, which specifically talks about rendering being blocked during fetching.
In cases like that (and also with async), the result will be the same
That isn't guaranteed by the spec. You're relying on retrieval from the cache being instant, which may not be the case.
in a case like that you could even consider putting "no-cache" / "no-store" on delay.js (or adding extra delay), to make your page render faster. By forcing a download (or extra delay) you give the browser some extra time to finish rendering the preceding HTML before executing the render-blocking JavaScript.
Why not use defer in this case? It achieves the same without the bandwidth penalty and unpredictability.
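In other words (a minimal sketch, reusing the hypothetical delay.js from the example above):

<!-- Downloaded in parallel with parsing, and executed only after
     parsing completes (just before DOMContentLoaded), so the download
     itself never blocks rendering. -->
<script src="delay.js" defer></script>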
Maarten B, I did test your code and you are indeed correct. Whether you use async, defer, or neither, the lines above the inline JavaScript are not being rendered. The information in Google's documentation is therefore incorrect.

Report all used Javascript functions

I am working on a WYSIWYG animation editor for designing sliders / ad banners that includes a lot of dependencies, which also means a lot of extra bloated code that isn't ever used. I am hoping to run a report on the code that helps me identify the important things. I have a couple of cool starts that will search through JavaScript for all functions and return each function by parts:
https://regex101.com/r/sXrHLI/1
Then some PHP that will sort it by size:
Sort preg_match_all by named group size
The thought is that by identifying large functions that aren't being used, we can remove them. My next step is to identify the function tree of what functions are invoked on document load, and then which are loaded and invoked on actions such as clicks / mouseovers and so on.
While I have this handy function that tells me all functions loaded in the DOM, it isn't enough:
var functionArray;
$(document).ready(function() {
    var objs = [];
    for (var obj in window) {
        if (window.hasOwnProperty(obj) && typeof window[obj] === 'function') {
            objs.push(obj);
        }
    }
    functionArray = objs;
    console.log(functionArray);
});
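One rough sketch of the next step I have in mind is to wrap every global function so that any invocation records its name (this only sees enumerable globals, which is mostly user-defined code, not closures or object methods, and some globals may refuse reassignment):

var usedFunctions = {};
// Object.keys only returns enumerable own properties of window,
// which in practice is mostly user-defined globals.
Object.keys(window).forEach(function(name) {
    var fn = window[name];
    if (typeof fn === 'function') {
        window[name] = function() {
            usedFunctions[name] = true; // record the call
            return fn.apply(this, arguments);
        };
    }
});
// Later, after loading the page and clicking around:
console.log(Object.keys(usedFunctions));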
I am looking for a solution that I can script in PHP / shell to emulate a page load. Now here is where my knowledge of terminology fails me: am I looking for a "Call Stack"? Do I need a timeline, interpreter, framework, engine, or a parser?
I next need to emulate a click / hover event on all elements, or all elements that match something like this regex:
(?|\$\(['"](\.\w*)["']|getElementsByClassName\('(\w*)'\))
(?|\$\(['"](\#\w*)["']|getElementsById\('(\w*)'\))
to find any events that trigger functions so I can make a master list of functions that need to be in the final code.
I was watching a talk from a Google Developer and I thought of your post. The talk has more detail on the DevTools coverage profiler, but the following is the high level.
Google dev tools ships a neat feature for generating reports on used and unused JS and CSS code -- which is right along the lines of what you were trying to do (just a slightly different medium -- it'd be a bit harder to automate, but otherwise it contains, I believe, exactly what you were looking for).
Open Dev Tools, then open up the ellipsis in the bottom left corner and click the record button [see image 1]. Go through the steps you want to capture. You'll get an interactive screen in which you can go through all the code and see what was used (green) and what was not used (red) [see image 2].
Image 1 - Ellipsis drop-down to get to the coverage tool
Image 2 - Full screenshot of the interactive report for this StackOverflow page while editing this post.
I'd suggest you take a look at this tool:
Istanbul
With it you can do the following:
create an instrumented version of your code
deploy it on the server and run the manual test (coverage information is collected in one of the global variables inside the browser)
copy the coverage information into a file (it's LCOV data, as far as I remember)
generate a code coverage report from it (see the sketch after this list)
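A rough sketch of the manual loop, assuming an instrumented build is running in the browser (Istanbul collects its data on a global, typically window.__coverage__):

// In the DevTools console, after exercising the app manually:
copy(JSON.stringify(window.__coverage__)); // copy() is a DevTools console helper
// Paste the clipboard contents into coverage/coverage.json, then generate
// a report with the Istanbul CLI, for example:
//   istanbul report --root coverage lcov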
If you feel like going further, you can actually use something like cucumber-jvm with Selenium to automate the UI tests. You will need to dump the coverage data every time you reload the page, however. Then you will have to combine the coverage from different runs and use the combined LCOV data to generate the overall report.
The whole process is a bit heavy, but it is way better than reinventing it from scratch.
Even more, you can combine this data with unit test coverage information to produce joint reports.
As a step further, you may want to set up a Sonar server so that you can store multiple versions of the coverage reports and compare differences between test runs.

locate corresponding JS source of code which is not optimized by V8

I'm trying to optimize the performance of a Node.js application, and to that end I am analyzing the behavior of V8's JIT compiler.
When running the application via node --trace_deopt --trace_opt --code_comments --print_optcode ..., the output contains many recurring lines like the following:
[didn't find optimized code in optimized code map for 0x490a8b4aa69 <SharedFunctionInfo>]
How can I find out which javascript code corresponds to 0x490a8b4aa69?
The full output is available here.
That error message used to be emitted around line 10200 of v8/src/objects.cc, but is no more. It basically means that no optimized code was available for a particular function, possibly because it was unused or used sufficiently infrequently. It was most likely a Node.js library function. The address provided is a memory address: you'd have to attach a debugger to V8 and load the symbol for the SharedFunctionInfo at that location, and possibly set a breakpoint on the line that produces the message too.
I don't think it is that useful to know what was not optimized, as there are lots of things that don't get optimized... just take the output from --trace_opt and assume everything else isn't. It was kind of just a hint that a check was performed for optimized code, and none was there at the time. Maybe try --trace_codegen and work backwards.
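One more angle (a sketch of my own; the status codes vary between V8 versions): if you suspect a specific function, you can ask V8 about it directly via natives syntax.

// Run with: node --allow-natives-syntax opt-check.js
function hot(a, b) {
    return a + b;
}

// Warm up so the compiler gathers type feedback.
for (var i = 0; i < 100000; i++) hot(i, i + 1);

%OptimizeFunctionOnNextCall(hot);
hot(1, 2);

// In V8 versions of that era: 1 = optimized, 2 = not optimized,
// 4 = never optimized, 6 = maybe deoptimized.
console.log(%GetOptimizationStatus(hot));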
This looks to be a very time-consuming thing to research.
Thorsten Lorenz would be the guy to ask about this.

Cannot instantiate Rhino with math.js in a thread other than the UI thread on Android

I'm trying to use a JS script which has math.js included in it, plus some custom functions. The whole script is around 1.5 MB and takes about 15 seconds to evaluate using Rhino's evaluateReader method, so it's not possible to run it on the UI thread. We therefore wrapped it in an AsyncTask. Unfortunately, running it from a thread other than the UI thread makes the app crash:
Caused by: org.mozilla.javascript.EvaluatorException: Too deep recursion while parsing (JavaScript#1133)
Line 1133 of our script is a part of math.js in Signature.prototype.expand:
for (i = 0; i < param.types.length; i++) {
    recurse(signature, path.concat(new Param(param.types[i])));
}
I suppose it is somehow related to memory, as adding largeHeap to the manifest helps, but only on some newer devices which allow it.
Is there any other way we can use that script and load it without blocking the UI?
--- update 30.06
I've investigated memory management and it looks like this is indeed a memory issue. Rhino needs an additional 25-30 MB to evaluate that script...
This answer is the closest I've found to a solution. Unfortunately, the documentation states that devices are not required to honor the stack size parameter. So it doesn't seem that there's a way to guarantee you can provide sufficient memory to avoid the "too deep recursion" error, which is a stack overflow error.

Javascript debugging

I have recently started to tinker with Project Euler problems and I try to solve them in Javascript. Doing this I tend to produce many endless loops, and now I'm wondering if there is any better way to terminate the script than killing the tab in Firefox or Chrome?
Also, is Firebug still considered the "best" debugger? (I myself can't see much difference between Firebug and the web dev tools in Safari/Chrome.)
Anyhow, have a nice Sunday!
Firebug is still my personal tool of choice.
As for a way of killing your endless loops: some browsers will prevent this from happening altogether. However, I still prefer just pressing Ctrl + W, though this still closes the tab.
Some of the other alternatives you can look into:
Opera : Dragonfly
Safari / Chrome : Web Inspector
Opera also has a nice set of developer tools which I have found pretty useful (Tools -> Advanced -> Developer Tools).
If you don't want to put in code to explicitly exit, try using a conditional breakpoint. If you open Firebug's script console and right-click in the gutter next to the code, it will insert a breakpoint and offer you an option to trigger the breakpoint only when some condition is met. For example, if your code were this:
var intMaxIterations = 10000;
var go = function() {
    while (intMaxIterations > 0) {
        /* DO SOMETHING */
        intMaxIterations--;
    }
};
... you could either wait for all 10,000 iterations of the loop to finish, or you could put a conditional breakpoint somewhere inside the loop and specify the condition intMaxIterations < 9000. This will allow the code inside the loop to run 1000 times (well, actually 1001 times). At that point, if you wish, you can refresh the page.
But once the script goes into an endless loop (either by mistake or by design), there's not a lot you can do that I know of to stop it from continuing if you haven't prepared for this. That's usually why, when I'm doing anything heavily recursive, I'll place a limit on the number of times a specific block of code can be run. There are lots of ways to do this. If you consider the behaviour to be an actual error, consider throwing it. E.g.
var intMaxIterations = 10000;
var go = function() {
    while (true) {
        /* DO SOMETHING */
        intMaxIterations--;
        if (intMaxIterations < 0) {
            throw "Too many iterations. Halting";
        }
    }
};
Edit:
It just occurred to me that because you are the only person using this script, web workers are the ideal solution.
The basic problem you're seeing is that when JS goes into an endless loop, it blocks the browser, leaving it unresponsive to any events that you would normally use to stop the execution. Web workers are still just as fast, but they leave your browser unburdened and events fire normally. The idea is that you pass off your high-demand tasks (in this case, your Euler problem algorithm) to a web worker JS file, which executes in its own thread and consumes CPU resources only when they are not needed by the main browser. The net result is that your CPU still spikes like it does now, but your browser stays fast and responsive.
It is a bit of a pest setting up a web worker the first time, but in this case you only have to do it once. If your algorithm never returns, just hit a button and kill the worker thread. See Using Web Workers on MDC for more info.
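A minimal sketch (the file name and element ID here are illustrative):

// worker.js: the heavy Euler computation lives here, off the main thread.
self.onmessage = function(e) {
    var n = e.data, sum = 0;
    for (var i = 0; i < n; i++) { // potentially a very long loop
        sum += i;
    }
    self.postMessage(sum);
};

// Main page: spawn the worker and keep a kill switch handy.
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    console.log('Result: ' + e.data);
};
worker.postMessage(1000000000);

// If the computation never returns, terminate it from a button.
document.getElementById('killWorker').onclick = function() {
    worker.terminate();
};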
While having Firebug or the webkit debuggers is nice, a browser otherwise seems like overhead for Project Euler stuff. Why not use a runtime like Rhino or V8?
