Is it expensive to read properties from window object? - javascript

For example, let's say we have two versions of lazyload (see code below). In terms of performance, is version II better than version I?
const imgs = document.querySelectorAll('img');
window.addEventListener('scroll', lazyload);

// version I
function lazyload() {
  imgs.forEach((img) => {
    if (img.offsetTop < window.innerHeight + window.pageYOffset) {
      img.src = img.dataset.src;
    }
  });
}
// version II
function lazyload() {
  const innerHeight = window.innerHeight;
  const pageYOffset = window.pageYOffset;
  imgs.forEach((img) => {
    if (img.offsetTop < innerHeight + pageYOffset) {
      img.src = img.dataset.src;
    }
  });
}

Your specific question:
I'll rephrase your specific question like this:
Is it costly to access window.innerHeight and/or window.pageYOffset?
It can be. According to Paul Irish of the Google Chrome Developer Tooling team:
All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout*. This is also called reflow or layout thrashing, and is a common performance bottleneck.
...
window
window.scrollX, window.scrollY
window.innerHeight, window.innerWidth
window.getMatchedCSSRules() only forces style
-- What forces layout / reflow (emphasis mine)
At the bottom of that document, Paul indicates the layout reflow will only occur under certain circumstances. The portions below (with my added emphasis) answer your question better and more authoritatively than I could.
Reflow only has a cost if the document has changed and invalidated the style or layout. Typically, this is because the DOM was changed (classes modified, nodes added/removed, even adding a pseudo-class like :focus).
If layout is forced, style must be recalculated first. So forced layout triggers both operations. Their costs are very dependent on the content/situation, but typically both operations are similar in cost.
What should you do about all this? Well, the More on forced layout section below covers everything in more detail, but the short version is:
for loops that force layout & change the DOM are the worst, avoid them.
Use DevTools Timeline to see where this happens. You may be surprised to see how often your app code and library code hits this.
Batch your writes & reads to the DOM (via FastDOM or a virtual DOM implementation). Read your metrics at the beginning of the frame (very very start of rAF, scroll handler, etc), when the numbers are still identical to the last time layout was done.
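For illustration only (not part of the quoted document), here is a minimal sketch of that read-then-write batching applied to a scroll handler like the one in the question; the selector and data attribute mirror the question's code:

// Read layout-dependent values once, up front, then do all DOM writes.
const imgs = document.querySelectorAll('img');

function onScroll() {
  // Reads first: cheap while the current layout is still valid.
  const viewportBottom = window.innerHeight + window.pageYOffset;
  const positions = Array.from(imgs, (img) => img.offsetTop);

  // Writes second: setting src may invalidate layout, but nothing is read after this.
  imgs.forEach((img, i) => {
    if (positions[i] < viewportBottom && img.dataset.src) {
      img.src = img.dataset.src;
    }
  });
}

window.addEventListener('scroll', onScroll);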
Changing the src attribute is probably sufficient to "invalidate the style or layout." (Although I suspect using something like correctly-dimensioned SVG placeholders for lazy-loaded images would mitigate or eliminate the cost of the reflows.)
In short, your "version I" implementation is preferable and has, as far as I can tell, no real disadvantages.
Your general question
As shown above, reading properties from the window object can be expensive. But others are right to point out a couple of things:
Optimizing too early or too aggressively can cost you valuable time, energy, and (depending on your solution) maintainability.
The only way to be certain is to test. Try both versions of your code, and carefully analyze your favorite dev tool's output for performance differences.
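As a rough, hedged illustration of such a test (lazyloadV1 and lazyloadV2 are hypothetical names standing in for the two versions in the question), you could time each handler before reaching for the profiler:

// Crude timing harness: run each version many times and compare.
// For real numbers, prefer the DevTools Performance panel, since forced
// layout cost depends on what else has touched the DOM in the meantime.
function time(label, fn, iterations = 1000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  console.log(label, (performance.now() - start).toFixed(2), 'ms');
}

time('version I', lazyloadV1);
time('version II', lazyloadV2);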

This one seems better
// version III
function lazyload() {
  const dimension = window.innerHeight + window.pageYOffset;
  imgs.forEach((img) => {
    if (img.offsetTop < dimension) {
      img.src = img.dataset.src;
    }
  });
}

Related

Performance of CSS Flex vs. manual JavaScript computation

I have a browser-based system which consists of, among other modular components, an <iframe> container which is nested with other <iframe>s for, currently, up to three levels. A given webpage may be embedded within multiple nested frames simultaneously. The end users' screen resolutions and the nested frames' sizes can vary.
It is therefore important for element sizes, paddings, margins etc. to be defined in relative terms. To this end, I have identified two approaches: Either I use CSS Flex wherever possible and compute with JavaScript manually for the rest, or do the reverse and compute wherever possible. Here's an example of the computation-focused approach for one of my more complex pages to be embedded in the frames:
// Tile size-dependent CSS
const RATIO = 0.618;
// Amount of space to use in view
var viewHeight = window.innerHeight;
var viewWidth = window.innerWidth;
var viewVertSpace = viewHeight * 0.8;
var viewHoriSpace = viewWidth * 0.8;
// Position and sizing for each overall column
var colWidth = Math.round(viewHoriSpace * 0.5);
var colSpace = Math.round(viewVertSpace) - 2; // Deduct 2px bottom border
// Sizing of column 1 elements
var summaryHeight = colSpace * 0.5;
var mainRowHeight = summaryHeight * RATIO;
var mainRowSize = Math.round(mainRowHeight - 10); // Deduct 5px vertical padding per side
var subTextSize = Math.round((summaryHeight - mainRowHeight) * (1 - RATIO));
var diffIconSize = Math.round((mainRowSize - subTextSize) * RATIO);
// Sizing of column 2 elements
var horiSpace = colWidth * RATIO; // Leave some space on both sides
var chartWidth = horiSpace - (horiSpace * RATIO);
var innerBarWidth = chartWidth * (1 - RATIO);
var targetArrowWidth = subTextSize * 0.5;
There is a performance constraint on the system's loading time, one which was not met during the first deployment to the test server. I have been continuously optimising the code (part of which involved implementing lazy initialisation and ordered loading to prevent too many simultaneous HTTP calls) and this is one area I'm looking at. I have read that extensive use of CSS Flex in more complex applications can have a significant performance impact, but I wonder if relying on manual computation via JavaScript to set absolute pixel sizes is actually better?
While specific implementations may vary, here are some general things to consider:
You will not be able to control when CSS causes your elements to resize; with JavaScript, you can make some decisions such as setting timeouts or establishing minimum values required to trigger a change (see the sketch below). However, any such solution will block any other JavaScript you may wish to run in the same time frame, and, similarly, any other JavaScript you have running will block this code. Using CSS Flexbox will require you to check which browser-specific implementation details apply to your use cases (the same is, of course, true of your JavaScript).
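As a hedged illustration of the JavaScript side of that trade-off (applySizes is a hypothetical stand-in for the computation shown in the question), a resize handler might be debounced and gated on a minimum change roughly like this:

// Recompute sizes only after resizing pauses, and only when the viewport
// has changed by a meaningful amount. applySizes() is hypothetical: it
// would re-run the question's calculations and write the resulting styles.
var lastWidth = window.innerWidth;
var lastHeight = window.innerHeight;
var resizeTimer = null;

window.addEventListener('resize', function () {
  clearTimeout(resizeTimer);
  resizeTimer = setTimeout(function () {
    var dw = Math.abs(window.innerWidth - lastWidth);
    var dh = Math.abs(window.innerHeight - lastHeight);
    if (dw < 20 && dh < 20) return; // ignore tiny changes
    lastWidth = window.innerWidth;
    lastHeight = window.innerHeight;
    applySizes();
  }, 150);
});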
In my experience, CSS Flexbox has been faster than any JavaScript solution that attempts to address the same concerns, though I cannot guarantee that this is a universal truth.
You should also consider code maintenance when implementing a solution. If your JavaScript is full of magic numbers and strange conditionals, it might be easier to maintain a CSS solution (assuming you do not fill it with magic numbers and strange conditionals as well, which I find easier to avoid with a Flexbox).
I'm sorry I can't give you a "use this every time" answer, but hopefully this will help you make good decisions given the constraints that exist.

How to make the fastest possible bottom-up tree transformer in JavaScript? Should I manage memory on my own?

I am implementing a bottom-up tree transformer in JavaScript. It will be used for an interpreter for a supercombinator reducer, so this algorithm has to be as fast as possible, as it affects every program built on top of it. This is my current implementation:
function transform(tree, fn) {
  var root = tree,
      node = tree,
      child,
      parent,
      changed,
      is_node,
      dir;
  root.dir = 0;
  while (true) {
    is_node = typeof node === "object";
    dir = is_node ? node.dir : 2;
    if (dir < 2)
      child = node[dir],
      node.dir++,
      child.parent = parent = node,
      child.dir = 0,
      node = child;
    else if ((changed = fn(node)) !== undefined)
      changed.parent = parent,
      changed.dir = 0,
      node = changed;
    else if (!parent)
      return node;
    else
      parent[parent.dir - 1] = node,
      node = parent,
      parent = node.parent;
  }
}
// TEST
var tree = [[1,2],[[3,4],[5,6]]];
console.log(
  JSON.stringify(transform(tree, function (a) {
    if (a[0] === 1) return [3,[5,5]];
    if (a[0] === 5) return 77;
  })) === "[[3,77],[[3,4],77]]");
This is obviously far from optimal. How do I make the transformer as fast as possible? Maybe, instead of looking for a faster algorithm, I could manage the memory myself and use asm.js.
You have a couple of options, going from easiest-but-slowest to fastest-but-trickiest.
Using regular JavaScript
This is pretty much what you are doing at this point. Looking at your algorithm, I don't see anything I could suggest that would give you more than an insignificant increase in speed.
Using asm.js
Using asm.js might be an option for you, and would offer a speed increase. You don't go into a lot of detail about where this system will be used, but if it works, it shouldn't be terribly difficult to implement something like this. You would likely see performance increases, but depending on how you plan to use this, they might not be as substantial as you would like (for something like this, you'd probably see somewhere between a 50% and 500% increase in speed, depending on how efficient the code is).
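To make the "manage the memory yourself" idea concrete, here is a minimal hedged sketch (plain JavaScript with typed arrays, not real asm.js): the tree lives in one flat Int32Array instead of nested arrays, and traversal uses an explicit stack so no per-visit garbage is allocated. All names and the node layout are invented for illustration.

// Layout per node (3 slots): [tag, a, b]
//   tag 0 = leaf:     a = value,            b = unused
//   tag 1 = internal: a = left node offset, b = right node offset
var HEAP = new Int32Array(1024);
var top = 0;

function leaf(value) {
  var at = top;
  HEAP[at] = 0; HEAP[at + 1] = value; HEAP[at + 2] = 0;
  top += 3;
  return at;
}

function branch(left, right) {
  var at = top;
  HEAP[at] = 1; HEAP[at + 1] = left; HEAP[at + 2] = right;
  top += 3;
  return at;
}

// Iterative walk over the flat representation using an explicit stack
// of node offsets; here it just sums the leaves as a demonstration.
function sumLeaves(root) {
  var stack = new Int32Array(256);
  var sp = 0, sum = 0;
  stack[sp++] = root;
  while (sp > 0) {
    var at = stack[--sp];
    if (HEAP[at] === 0) {
      sum += HEAP[at + 1];
    } else {
      stack[sp++] = HEAP[at + 1];
      stack[sp++] = HEAP[at + 2];
    }
  }
  return sum;
}

// Example: the tree ((1,2),(3,4)) sums to 10.
var root = branch(branch(leaf(1), leaf(2)), branch(leaf(3), leaf(4)));
console.log(sumLeaves(root)); // 10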
Build it in a different, compiled, typed language.
If speed is really at a premium, depending on your use case, it might be best to write this program (or at least this function) in a different language which is compiled. You could then run this compiled script on the server and communicate with it via web services.
If you need to transform the tree a huge number of times in a short period, it won't be much of a boost, because of the time it takes to send and receive the data. However, if you are doing relatively few but long-running tree transformations, you could see a huge benefit in performance. A compiled, typed language (C++, Java, etc.) will generally have better performance than an interpreted, dynamically typed language like JavaScript.
The other benefit of running it on a server is that you can generally throw a lot more horsepower at it, since you could write it to be multi-threaded and even run it on a cluster of machines instead of just one (for a high-end build). With JavaScript, you are generally limited to one thread, and also by the end user's computer.

Most efficient way to throttle continuous JavaScript execution on a web page

I'd like to continuously execute a piece of JavaScript code on a page, spending all the CPU time I can on it, but allowing the browser to remain functional and responsive at the same time.
If I just run my code continuously, it freezes the browser's UI and browser starts to complain. Right now I pass a zero timeout to setTimeout, which then does a small chunk of work and loops back to setTimeout. This works, but does not seem to utilize all available CPU. Any better ways of doing this you might think of?
Update: To be more specific, the code in question is rendering frames on canvas continuously. The unit of work here is one frame. We aim for the maximum possible frame rate.
Probably what you want is to centralize everything that happens on the page and use requestAnimationFrame to do all your drawing. So basically you would have a function/class that looks something like this (you'll have to forgive some style/syntax errors; I'm used to MooTools classes, so just take this as an outline):
var Main = function () {
  this.queue = [];
  this.actions = {};
  this.loop = this.loop.bind(this);
  requestAnimationFrame(this.loop);
};
Main.prototype.loop = function () {
  while (this.queue.length) {
    var action = this.queue.pop();
    this.executeAction(action);
  }
  // do your rendering here
  requestAnimationFrame(this.loop);
};
Main.prototype.addToQueue = function (e) {
  this.queue.push(e);
};
Main.prototype.addAction = function (target, event, callback) {
  if (this.actions[target] === void 0) this.actions[target] = {};
  if (this.actions[target][event] === void 0) this.actions[target][event] = [];
  this.actions[target][event].push(callback);
};
Main.prototype.executeAction = function (e) {
  if (this.actions[e.target] !== void 0 && this.actions[e.target][e.type] !== void 0) {
    for (var i = 0; i < this.actions[e.target][e.type].length; i++) {
      this.actions[e.target][e.type][i](e);
    }
  }
};
So basically you'd use this class to handle everything that happens on the page. Every event handler would be onclick='Main.addToQueue(event)' or however you want to add your events to your page; you just point them at adding the event to the queue, and use Main.addAction to direct those events to whatever you want them to do. This way every user action gets executed as soon as your canvas is finished redrawing and before it gets redrawn again. As long as your canvas renders at a decent frame rate, your app should remain responsive.
EDIT: forgot the "this" in requestAnimationFrame(this.loop)
Web workers are something to try:
https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
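As a hedged sketch of that idea (the message shape and the work done inside the worker are invented for illustration), an inline worker can take the heavy computation off the main thread and post results back:

// Build the worker from an inline script so the example is self-contained.
var workerSource = `
  self.onmessage = function (e) {
    // Stand-in for the real heavy work: just burn some CPU and report a result.
    var total = 0;
    for (var i = 0; i < e.data.iterations; i++) total += i;
    self.postMessage(total);
  };
`;
var blob = new Blob([workerSource], { type: 'application/javascript' });
var worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = function (e) {
  // Back on the main thread: use the result (e.g. draw it), then ask for more work.
  console.log('result from worker:', e.data);
  worker.postMessage({ iterations: 1e7 });
};

worker.postMessage({ iterations: 1e7 });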
You can tune your performance by changing the amount of work you do per invocation. In your question you say you do a "small chunk of work". Establish a parameter which controls the amount of work being done and try various values.
You might also try to set the timeout before you do the processing. That way the time spent processing should count towards any minimum delay the browser enforces.
One technique I use is to keep a counter of iterations in my processing loop. Then set up an interval of, say, one second; in that interval's callback, display the counter and reset it to zero. This provides a rough performance figure with which to measure the effects of changes you make.
In general this is likely to be very dependent on specific browsers, even versions of browsers. With tunable parameters and performance measurements you could implement a feedback loop to optimize in real-time.
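A minimal sketch of those two suggestions combined (doUnitOfWork is a placeholder for whatever one "small chunk of work" means in your code):

// Tunable chunked loop: CHUNK_SIZE controls how much work runs per timeout,
// and the interval reports a rough units-per-second figure for tuning.
var CHUNK_SIZE = 500;          // the tunable parameter: adjust and watch the counter
var unitsThisSecond = 0;

function doUnitOfWork() {
  // placeholder for the real work (e.g. rendering part of a frame)
  unitsThisSecond++;
}

function processChunk() {
  setTimeout(processChunk, 0); // schedule the next chunk before doing the work
  for (var i = 0; i < CHUNK_SIZE; i++) doUnitOfWork();
}

setInterval(function () {
  console.log(unitsThisSecond + ' units/second');
  unitsThisSecond = 0;
}, 1000);

processChunk();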
One can use window.postMessage() to overcome the minimum delay that setTimeout enforces. See this article for details. A demo is available here.
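For reference, a hedged sketch of that technique, in the spirit of the commonly cited "setZeroTimeout" pattern:

// Queue callbacks and use window.postMessage to run them on the next
// event-loop turn, sidestepping setTimeout's clamped minimum delay.
var zeroTimeoutQueue = [];
var MESSAGE_NAME = 'zero-timeout-message';

function setZeroTimeout(fn) {
  zeroTimeoutQueue.push(fn);
  window.postMessage(MESSAGE_NAME, '*');
}

window.addEventListener('message', function (event) {
  if (event.source === window && event.data === MESSAGE_NAME) {
    event.stopPropagation();
    if (zeroTimeoutQueue.length > 0) {
      zeroTimeoutQueue.shift()();
    }
  }
}, true);

// Usage: behaves like setTimeout(fn, 0) without the ~4 ms clamp.
setZeroTimeout(function () { console.log('ran on the next turn'); });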

Faster SVG Path manipulation

So I want to make a drawing tool using SVG, I'm using a rather naive approach to change the d attribute of my Path:
$("div#drawarea").bind("mousemove", function(ev) {
ev.preventDefault();
ev.stopPropagation();
var pX= (ev.pageX - this.offsetLeft);
var pY= (ev.pageY - this.offsetTop);
$path.attr("d", $path.attr("d") + " L" +pX+ "," + pY); //using jquery-svg here to change the d attribute
});
As you can see, I do this in the mousemove handler. The code works, but it becomes unresponsive when the mouse is moving fast, creating numerous straight lines when I actually want smooth ones. I think this is happening because of the numerous string concatenations I'm doing in the mousemove event (the d attribute on the path can become quite long when the button has been held down for a while: thousands of characters, in fact).
I'm wondering if there is any native way to add new values to the end of a path instead of manipulating the d attribute directly. I checked the jquery-svg source code and it seems that the library also uses the naive string concatenation approach internally, so using its methods would not yield any benefit.
I'm also wondering whether this is really the cause, or whether the browser simply limits the rate of mousemove events (to once every X milliseconds?), in which case no performance optimization would improve this.
Use the SVG pathseg DOM methods. You have to write more complicated code, but the browser doesn't have to reparse the whole path attribute. Firefox, for instance, does take advantage of this, and it's quite likely other browsers do as well.
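For illustration, a minimal sketch of appending one line segment through the pathseg interface; note that this interface was part of SVG 1.1 and has since been deprecated and removed from some browsers, and $path is assumed to be the same jQuery-wrapped path as in the question:

// Append " L pX,pY" without reparsing the whole d attribute.
var pathEl = $path[0]; // the underlying SVGPathElement
var segment = pathEl.createSVGPathSegLinetoAbs(pX, pY);
pathEl.pathSegList.appendItem(segment);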
In case someone else stumbles upon the question of the fastest way to update an SVG path's data attribute (for realtime applications), I ran a small test on that:
http://jsperf.com/svg-path-test
Yes, setting it as a string means that it needs to be parsed, which isn't the case for the DOM SVG interface, but the first method is still much faster. Maybe the interface updates the DOM with each point added, slowing down the whole process.

Optimal pixel drawing speed?

I'm using the Canvas object with javascript. Just doing some tests to see how fast I can set pixels in a draw loop.
On Mac, it works great in FF, Safari and Chrome. On Windows, I get a flickering effect in FF and Chrome. It looks like the canvas implementation on Windows is somehow different from the one on Mac across the different browsers? (Not sure if that's true.)
This is the basic code I'm using to do the drawing (taken from the article below; I've optimized it to tighten the draw loop, and it runs pretty smoothly now):
var canvas = document.getElementById('myCanvasElt');
var ctx = canvas.getContext('2d');
var canvasData = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (var x = 0; x < canvasData.width; x++) {
  for (var y = 0; y < canvasData.height; y++) {
    // Index of the pixel in the array
    var idx = (x + y * canvas.width) * 4;
    canvasData.data[idx + 0] = 0;
    canvasData.data[idx + 1] = 255;
    canvasData.data[idx + 2] = 0;
    canvasData.data[idx + 3] = 255;
  }
}
ctx.putImageData(canvasData, 0, 0);
Again, browsers on Windows will flicker a bit. It looks like the canvas implementation is trying to clear the canvas to white before the next drawing operation takes place (this does not happen on Mac). I'm wondering if there is a setting I can change on the Canvas object to modify that behaviour (double-buffering, clear before draw, etc.)?
This is the article I am using as reference:
http://hacks.mozilla.org/2009/06/pushing-pixels-with-canvas/
Thanks
I think it's fairly clear that browsers that implement the Canvas object use DIBs (device-independent bitmaps). The fact that you have access to the pixel buffer without having to lock the handle first is proof of this. And Direct2D has nothing to do with JS in a browser, that's for sure. GDI is different since it uses DDBs (device-dependent bitmaps, i.e. allocated from video memory rather than conventional RAM). All of this, however, has nothing to do with optimal JS rendering speed. I think writing the RGBA values as you do is probably the best way.
The crucial factor in the code above is the call to putImageData(). This is where browsers can differ in their implementation. Are you in fact writing directly to the DIB, with putImageData simply acting as a wrapper around InvalidateRect? Or are you writing to a duplicate in memory, which in turn is copied into the canvas device context? If you use Linux or Mac, this is still a valid question. Although device contexts etc. are typically Windows terms, most OSes deal with handles or structures in pretty much the same way. But once again, we are at the mercy of the browser vendor.
I think the following can be said:
If you are drawing many pixels in one go, then writing directly to the pixel buffer as you do is probably the best approach. It is faster to "bitblt" (copy) the pixel buffer in one go after X number of operations. The reason for this is that native graphics functions like FillRect also call "invalidate rectangle", which tells the system that a portion of the screen needs a redraw (refresh). So if you issue 100 line commands, then 100 updates will be issued, slowing down the process. Unless (and this is the catch) you use the beginPath/endPath methods as they should be used. Then it's a whole different ballgame.
It's here that the begin/end path "system" comes into play, along with the stroke/outline commands. They allow you to execute X number of drawing operations within a single update. But a lot of people get this wrong and issue a redraw for each call to line/fillRect etc.
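As a hedged illustration of batching path commands into a single update (the points array is hypothetical, and ctx is the 2D context from the question's code):

// One beginPath/stroke pair around many segments: the canvas is updated
// once for the whole polyline instead of once per line.
ctx.beginPath();
ctx.moveTo(points[0].x, points[0].y);
for (var i = 1; i < points.length; i++) {
  ctx.lineTo(points[i].x, points[i].y);
}
ctx.stroke(); // a single draw call for all the segments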
Also, have you tried creating an invisible canvas object, drawing to that, and then copying the result to a visible canvas? This could be faster (proper double-buffering).
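A minimal sketch of that double-buffering idea, assuming the same canvas and ctx variables as in the question's code:

// Draw into an off-screen canvas, then blit the finished frame in one call.
var buffer = document.createElement('canvas');
buffer.width = canvas.width;
buffer.height = canvas.height;
var bufferCtx = buffer.getContext('2d');

function renderFrame() {
  // Do all the per-pixel / per-shape work against bufferCtx here...
  bufferCtx.fillStyle = 'rgb(0, 255, 0)';
  bufferCtx.fillRect(0, 0, buffer.width, buffer.height);

  // ...then copy the completed frame to the visible canvas in one operation.
  ctx.drawImage(buffer, 0, 0);
  requestAnimationFrame(renderFrame);
}

requestAnimationFrame(renderFrame);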
The problem is with the way the browsers use the native graphics APIs on the different OSes. And even on the same OS, using different APIs (for example GDI vs. Direct2D in Windows) would also produce different results.
