Lazy loading occasionally doesn't work in angular - javascript

I am encountering some strange behavior for the following code.
function linkFunc(scope, element, attribute) {
    var page = angular.element($window);
    page.bind('scroll', function() {
        var windowHeight = "innerHeight" in window ? window.innerHeight : document.documentElement.offsetHeight;
        var body = document.body, html = document.documentElement;
        var docHeight = Math.max(body.scrollHeight, body.offsetHeight, html.clientHeight, html.scrollHeight, html.offsetHeight);
        var windowBottom = windowHeight + window.pageYOffset;
        if (windowBottom >= docHeight) {
            scope.$apply(attribute.myDirective);
        }
    });
}
The above code detects whether the bottom of the page has been reached; if it has, it calls whatever function is bound to myDirective.
The main issue is that most of the time the lazy loading works and myDirective gets successfully called. However, sometimes the lazy loading doesn't work, and I wasn't able to reproduce the bug.
I tried different screen sizes and different browsers, but the bug seems to happen randomly.
Has anyone run into this before and can point me in a direction?
Edit:
More information
I was able to reproduce the bug after a bit of experimenting.
Basically, when the browser's zoom level is below 100%, window.pageYOffset returns a slightly inaccurate decimal value, which causes windowBottom to be off by 0.1 to 0.9.
eg.
console.log(windowBottom); // 1646.7747712336175
console.log(docHeight); // 1647
Does anyone know why this happens?
Edit 2:
The above behavior is also non-deterministic, but the decimal discrepancy is real.

0.1 + 0.2 !== 0.3
This one is an oddity not just in JavaScript; it’s actually a prevailing problem in computer science, and it affects many languages. The output of this is 0.30000000000000004.
This has to do with an issue called machine precision. When JavaScript tries to execute the line above, it converts the values to their binary equivalents.
This is where the problem starts. 0.1 is not really 0.1 but rather its binary equivalent, which is a near-ish (but not identical) value. In essence, as soon as you write the values, they are doomed to lose their precision. You might have just wanted two simple decimals, but what you get, as Chris Pine notes, is binary floating-point arithmetic. Sort of like wanting your text translated into Russian but getting Belorussian. Similar, but not the same.
You can read more here. Without digging into browser source, I would guess that your problem stems from this.
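For illustration, here is a quick check you can run in any browser console (a generic sketch, not tied to the question's code):

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// Comparing with a tolerance instead of strict equality sidesteps the problem.
function nearlyEqual(a, b, epsilon) {
    return Math.abs(a - b) < (epsilon || 1e-9);
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true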

Given the floating-point precision issues, you may want to loosen your condition and instead check whether the two values differ by less than one pixel. For example:
if (Math.abs(windowBottom - docHeight) < 1) {
    scope.$apply(attribute.myDirective);
}

Related

Opera producing fractional values on scrollTop

This is part of a larger program which is handling a scrollbar on a <div> when modifying the height.
When logging the output of various values for moving the scrollbar, there's an issue occurring where values are being produced with decimal places, but only on Opera (version 44.0.2510.1449), and this is only happening on my friend's browser. On my own Opera (version 44.0.2510.1449) I do not encounter the same problem.
Though it's probably irrelevant, the purpose of the code is to find out where the scrollbar is in the div with id #mydiv and do something based on the result.
Similar code with changed variable names:
var myDivHeight = $('#mydiv').height();
$('#mydiv').height(myDivHeight + 50); //10 extra for padding
var scrollTop = $('#mydiv').scrollTop();
var scrollHeight = $('#mydiv').prop('scrollHeight');
console.log(scrollHeight + '-' + scrollTop + '=' + (scrollHeight - scrollTop));
console.log(myDivHeight + 60);
Note: 60 is due to the changes made to the page dynamically, so the div height has been changed. The result of the output should be that scrollHeight - scrollTop = myDivHeight + 60.
(Console output screenshots followed here: my friend's Opera showed a fractional scrollTop value, while my own Opera, Chrome, and Firefox outputs did not.)
I can't find anyone else reporting this. Has this been reported or seen by anyone else? Is there any way to overcome this?
Thank you.
It turns out that taking the time to ask the question helped me answer it on my own.
First of all, to overcome the problem, it's a case of rounding the value with Math.round(scrollTop).
The specification for scrollTop shows it is an unrestricted double data type, so Opera is handling this within spec. Reference: https://drafts.csswg.org/cssom-view/#dom-element-scrolltop
An issue where this came up in jQuery before the type was changed from integer to Number: https://github.com/jquery/api.jquery.com/issues/608
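For completeness, a minimal sketch of the rounding fix applied to the code above (same #mydiv element; the + 60 offset is the page-specific adjustment from the question):

var scrollTop = Math.round($('#mydiv').scrollTop()); // force an integer even on browsers that report fractions
var scrollHeight = $('#mydiv').prop('scrollHeight');
if (scrollHeight - scrollTop === $('#mydiv').height() + 60) {
    // the scrollbar is where the rest of the program expects it to be
}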

Improving Javascript Mandelbrot Plot Performance

I have written a JavaScript program which plots the Mandelbrot set with the usual pretty colours approaching the boundary. I was planning on adding a zoom function next, but it is far too slow for that to be sensible.
I have posted the most important portion of the code here:
while (x * x < 2 && y * y < 2 && iteration < max_iteration) {
    xtemp = (x * x) - (y * y) + xcord;
    y = (2 * x * y) + ycord;
    x = xtemp;
    iteration = iteration + 1;
}
and have linked to a jsfiddle with the whole page here: http://jsfiddle.net/728dn2m0/
I have a few global variables which I could take into the main loop but that would result in additional calculations for every single pixel. I read on another SO question that an alternative to the 1x1 rectangle was to use image data but the performance difference was disputed. Another possibility would be rewriting the while statement as some other conditional loop but I'm not convinced that would give me the gains I'm looking for.
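For reference, a rough sketch of the ImageData approach mentioned above (assuming a 2D canvas context ctx and a width x height plot; mandelbrotIterations is a hypothetical helper that runs the escape-time loop for one pixel and returns the iteration count):

var imageData = ctx.createImageData(width, height);
var data = imageData.data; // one RGBA quadruple of bytes per pixel
for (var py = 0; py < height; py++) {
    for (var px = 0; px < width; px++) {
        var iter = mandelbrotIterations(px, py); // hypothetical per-pixel escape-time loop
        var offset = (py * width + px) * 4;
        data[offset]     = iter % 256;        // red
        data[offset + 1] = (iter * 5) % 256;  // green
        data[offset + 2] = (iter * 13) % 256; // blue
        data[offset + 3] = 255;               // fully opaque
    }
}
ctx.putImageData(imageData, 0, 0); // one draw call instead of width * height fillRect calls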
I'd consider myself a newbie so I'm happy to hear comments on any aspect of the code but what I'm really after is something which will massively increase performance. I suspect I'm being unreasonable in my expectations of what javascript in the browser can manage but I hope that I'm missing something significant and there are big gains to be found.
Thanks in advance,
Andrew
Using setInterval as an external loop construct slows the calculation down. You set it to 5 ms for a single pixel, yet the entire Mandelbrot map can be calculated within about a second, so each call to your function draws its pixel very quickly and then sits idle for roughly the remaining 99.99% of that 5 ms.
I replaced
if (m<=(width+1))
with
while (m<=(width+1))
and removed the setInterval.
This way the entire calculation is done in one step, without refresh to the screen and without using setInterval as an external loop construct. I forked your script and modified it: http://jsfiddle.net/karatedog/2o4gjrv2/6/
In the script I modified the bailout condition from (x*x < 2 && y*y < 2) to (x < 2 && y < 2), just as I suggested in a previous comment, which revealed some hidden pixels; check the difference!
I had indeed missed something significant. The timer I had used in order to prevent the page hanging while the set was plotted was limiting the code to one pixel every 5 milliseconds, in other words 200 pixels per second. Not clever!
I am now plotting one line at a time and it runs a lot better. Not yet real time but it is a lot quicker.
Thanks for the ideas. I will look into the escape condition to see what it should be.
A new jsfiddle with the revised code is here: http://jsfiddle.net/da1qyh9y/
and the for statement I've added is here:
function main_loop() {
    if (m <= (width + 1)) {
        var n;
        for (n = 0; n < height; n = n + 1) {
            // ... per-pixel escape-time calculation and plotting for column m, row n ...
        }
    }
}
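A common variation on this, if the page still feels blocked, is to plot one column per animation frame; a sketch, where drawColumn is a hypothetical helper wrapping the inner loop over n shown above:

function plotNextColumn() {
    if (m <= (width + 1)) {
        drawColumn(m); // hypothetical: the inner for-loop over n for column m
        m = m + 1;
        requestAnimationFrame(plotNextColumn); // yield to the browser between columns
    }
}
requestAnimationFrame(plotNextColumn);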

something is not right with console.time();

I am testing my JavaScript's speed using console.time(), so it logs how long a function takes to run on page load.
if (window.devicePixelRatio > 1) {
    var images = $('img');
    console.time('testing');
    var imagesObj = images.length;
    for (var i = 0; i < imagesObj; i++) {
        var lowres = images.eq(i).attr('src'),
            highres = lowres.replace(".", "_2x.");
        images.eq(i).attr('src', highres);
    }
    console.timeEnd('testing');
}
But every time I reload the page it gives me a pretty different value. Should it have this behaviour? Shouldn't it give me a consistent value?
I have loaded it 5 times in a row and the values are the following:
5.051 ms
4.977 ms
8.009 ms
5.325 ms
6.951 ms
I am running this on XAMPP and in Chrome btw.
Thanks in advance
console.time/timeEnd is working correctly, and the timing does indeed fluctuate by a small amount.
However, when dealing with such small numbers - the timings are all less than 1/100 of a second! - the deviation is irrelevant and can be influenced by a huge number of factors.
There is always some variation; it could be caused by a number of things:
the server responding slightly more slowly (that can also block other parts of the browser)
your processor doing something else in the meantime
your processor clocking down to save power
random latency in the network
a browser extension doing something in the background
Also, Firefox has a system that intelligently tries to optimize JavaScript execution; in most cases it will perform better, but it is somewhat random.
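One way to smooth out that noise is to repeat the work many times and average, using performance.now() for sub-millisecond resolution; a generic sketch (not tied to the image-swapping code above):

var runs = 50;
var start = performance.now();
for (var r = 0; r < runs; r++) {
    // ... code under test ...
}
var elapsed = performance.now() - start;
console.log((elapsed / runs).toFixed(3) + ' ms per run on average');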

Javascript nested function call optimization

EDIT! I changed the answer to my own after a lot of follow-up research showed that there isn't a simple answer to my question. See below!
So, in a followup to my last question, I'm trying to get a better handle on best Javascript practices to optimize performance. For the following example, I'm testing in Chrome 28.0.1500.70 using the in-browser profiler.
I've got some math functions encapsulated in an object that are getting called a few hundred thousand times a second, and I'm trying to shave a bit off the execution time.
I've already done some optimization by copying the parent object's properties into locals in the called functions themselves, which gave a decent (~16%) performance boost. However, when I did the same for calling another function from the parent object, I got a huge (~100%) performance increase.
The original setup was calcNeighbors calling fellow parent object function cirInd via this.cirInd.
Making a local var copy of cirInd and calling that instead gave a huge performance gain, cutting calcNeighbors' execution time to less than half of what it was before.
However, making cirInd an inline function in calcNeighbors caused a return to the same slower performance as calling it from the parent object.
I'm really perplexed by this. I suppose that it could be a quirk in Chrome's profiler (cirInd doesn't show up at all in the second case) but there is definitely a noticeable performance gain in the application when I use case 2.
Can someone explain why case 2 is so much faster than case 1 but more importantly, why case 3 seems to not give any performance gain?
The functions in question are here:
calling from parent object:
window.bgVars = {
    <snip>
    "cirInd": function(index, mod){
        // returns modulus, array-wrapping value to implement circular array
        if (index < 0) { index += mod; }
        return index % mod;
    },
    "calcNeighbors": function(rep){
        var foo = this.xBlocks;
        var grid = this.cGrid;
        var mod = grid.length;
        var cirInd = this.cirInd;
        var neighbors = grid[this.cirInd(rep-foo-1, mod)] + grid[this.cirInd(rep-foo, mod)] + grid[this.cirInd(rep-foo+1, mod)] + grid[this.cirInd(rep-1, mod)] + grid[this.cirInd(rep+1, mod)] + grid[this.cirInd(rep+foo-1, mod)] + grid[this.cirInd(rep+foo, mod)] + grid[this.cirInd(rep+foo+1, mod)];
        return neighbors;
    },
    <snip>
}
calling via local variable:
window.bgVars = {
    <snip>
    "cirInd": function(index, mod){
        // returns modulus, array-wrapping value to implement circular array
        if (index < 0) { index += mod; }
        return index % mod;
    },
    "calcNeighbors": function(rep){
        var foo = this.xBlocks;
        var grid = this.cGrid;
        var mod = grid.length;
        var cirInd = this.cirInd;
        var neighbors = grid[cirInd(rep-foo-1, mod)] + grid[cirInd(rep-foo, mod)] + grid[cirInd(rep-foo+1, mod)] + grid[cirInd(rep-1, mod)] + grid[cirInd(rep+1, mod)] + grid[cirInd(rep+foo-1, mod)] + grid[cirInd(rep+foo, mod)] + grid[cirInd(rep+foo+1, mod)];
        return neighbors;
    },
    <snip>
}
calling inline:
window.bgVars = {
    <snip>
    "calcNeighbors": function(rep){
        var foo = this.xBlocks;
        var grid = this.cGrid;
        var mod = grid.length;
        function cirInd(index, mod){
            // returns modulus, array-wrapping value to implement circular array
            if (index < 0) { index += mod; }
            return index % mod;
        }
        var neighbors = grid[cirInd(rep-foo-1, mod)] + grid[cirInd(rep-foo, mod)] + grid[cirInd(rep-foo+1, mod)] + grid[cirInd(rep-1, mod)] + grid[cirInd(rep+1, mod)] + grid[cirInd(rep+foo-1, mod)] + grid[cirInd(rep+foo, mod)] + grid[cirInd(rep+foo+1, mod)];
        return neighbors;
    },
    <snip>
}
Perhaps seeing #2 and #3 in a simplified view will help illustrate the object-creation side effects.
I believe this should make it obvious:
alls1 = [];
alls2 = [];
function inner1(){}
function outer1(){
    if (alls1.indexOf(inner1) === -1) { alls1.push(inner1); }
}
function outer2(){
    function inner2(){}
    if (alls2.indexOf(inner2) === -1) { alls2.push(inner2); }
}
for (i = 0; i < 10; i++) {
    outer1();
    outer2();
}
alert([ alls1.length, alls2.length ]); // shows: 1, 10
Functions are objects, and making new objects is never free.
EDIT: expanding on #1 vs #2.
Again, a simplified example will help illustrate:
function y(a, b){ return a + b; }
var out = { y: y };
var ob = {
    y: y,
    x1: function(a){ return this.y(i, a); },
    x2: function(a){ return y(i, a); },
    x3: function(a){ return out.y(i, a); }
};
var mx = 999999, times = [], d2, d3, d1 = +new Date;
for (var i = 0; i < mx; i++) { ob.x1(-i); }
times.push((d2 = +new Date) - d1);
for (var i = 0; i < mx; i++) { ob.x2(-i); }
times.push((d3 = +new Date) - d2);
for (var i = 0; i < mx; i++) { ob.x3(-i); }
times.push((+new Date) - d3);
alert(times); // my chrome's typical: [ 1000, 1149, 1151 ]
Understand that there is more noise in a simple example, and closure is a big chunk of the overhead in all three, but the differences between them are what's important.
In this demo you won't see the huge gain observed in your dynamic system, but you do see how closely y and out.y profile compared to this.y, all else being equal.
The main point is that it's not the extra dot resolution per se that slows things down, as some have alluded to; it's specifically the "this" keyword in V8 that matters, otherwise out.y() would profile closer to this.y().
Firefox is a different story: tracing allows this.whatever to be predicted, so all three profile within a bad dice roll of each other on the same computer as the Chrome run: [2548, 2532, 2545].
The reason there is extra time involved in number 1 should be obvious: you access the entire object scope and then have to find a property.
Number 2 and 3 are both pointers to a function, so there is no seeking.
A very good resource for testing these types of situations is jsPerf, and I would highly recommend recreating the scenario there and running the test to see the exact differences and whether or not they are significant to you.
OK, I've been researching this issue for a while now and TL;DR - it's complicated.
Turns out that many performance questions really depend on the platform, browser, and even minor browser revision number. And not by a little, either. There are many examples on jsPerf that show things such as 'for vs while' or 'typed arrays vs standard arrays' wildly swinging back and forth in terms of favorable execution speed with different browser releases. Presumably this is due to JIT optimization trade-offs.
Short answer to the general performance questions - just test everything in jsPerf. None of the suggestions I got in this thread were helpful in all cases. The JIT makes things complicated. This is particularly important if you have a background like mine and are used to C programs having certain rule-of-thumb coding patterns that tend to speed things up. Don't assume anything - just test it.
NOTE: many of the weird issues I listed in the original question were due to using the default Chrome profiler (i.e. the profiler you get from the Ctrl+Shift+I menu). If you are doing a lot of really fast loops (such as in graphics rendering), DO NOT USE THIS PROFILER. It has a time resolution of 1 ms, which is much too coarse to do proper performance debugging.
In fact the ENTIRE issue I had with case 2 being so much faster than the others was due to the profiler simply not 'seeing' many of the function calls and improperly reporting CPU percentages. In the heat map, I could clearly see huge stretches where inner-loop functions were firing but not being recorded by the profiler.
Solution: http://www.html5rocks.com/en/tutorials/games/abouttracing/#
Chrome has a less obvious and much more powerful profiler built into about:tracing. It's got microsecond resolution, the ability to read code tags for sub-function resolution, and it is generally much more kickass. As soon as I started using this profiler, the results fell into line with what I saw on jsPerf and helped me reduce my rendering time by nearly half. How did I do that? Again, it wasn't simple. In some cases, calling out to subroutines helped; in others it didn't. Refactoring the whole rendering engine from an object literal to the module pattern seemed to help a bit. Precalculating any multiplication operations in for loops did seem to have big effects. Etc., etc.
Quick notes about the about:tracing profiler: Zooming and panning is with ASWD on the keyboard - that took me a while to figure out. Also, it profiles all tabs and operates in a tab outside the page being analyzed. So minimize the number of extraneous tabs you have open since they will clutter up the profiler view. Also, if testing Canvas apps, be sure to switch tabs over to the app since RequestAnimationFrame generally doesn't fire when a tab is not active and visible.
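As a rough illustration of the code tags mentioned above, you can wrap sub-steps of a render loop in console.time/timeEnd pairs; the linked article describes these ranges showing up in the tracing view (exact behaviour may vary by Chrome version; updatePhysics and drawScene are hypothetical sub-steps):

function renderFrame() {
    console.time('physics');
    updatePhysics();            // hypothetical simulation step
    console.timeEnd('physics');

    console.time('draw');
    drawScene();                // hypothetical canvas drawing step
    console.timeEnd('draw');

    requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);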

Better random function in JavaScript

I'm currently making a Conway's Game of Life reproduction in JavaScript and I've noticed that the function Math.random() is always returning a certain pattern. Here's a sample of a randomized result in a 100x100 grid:
Does anyone know how to get better randomized numbers?
ApplyRandom: function() {
    var $this = Evolution;
    var total = $this.Settings.grid_x * $this.Settings.grid_y;
    var range = parseInt(total * ($this.Settings.randomPercentage / 100));
    for (var i = 0; i < total; i++) {
        $this.Infos.grid[i] = false;
    }
    for (var i = 0; i < range; i++) {
        var random = Math.floor((Math.random() * total) + 1);
        $this.Infos.grid[random] = true;
    }
    $this.PrintGrid();
},
[UPDATE]
I've created a jsFiddle here: http://jsfiddle.net/5Xrs7/1/
[UPDATE]
It seems that Math.random() was OK after all (thanks raina77ow). Sorry folks! :( If you are interested in the result, here's an updated version of the game: http://jsfiddle.net/sAKFQ/
(But I think there's some bugs left...)
This line in your code...
var position = (y * 10) + x;
... is what's causing this 'non-randomness'. It really should be...
var position = (y * $this.Settings.grid_x) + x;
I suppose 10 was the original size of this grid; that's why it's there. But that's clearly wrong: you should choose your position based on the current size of the grid.
As a sidenote, no offence, but I still consider the algorithm given in JayC's answer to be superior to yours. It's quite easy to implement; just change the two loops in the ApplyRandom function to a single one:
var bias = $this.Settings.randomPercentage / 100;
for (var i = 0; i < total; i++) {
    $this.Infos.grid[i] = Math.random() < bias;
}
With this change, you will no longer suffer from the side effect of reusing the same numbers in the var random = Math.floor((Math.random() * total) + 1); line, which lowered the actual cell fill rate in your original code.
Math.random is a pseudo-random method; that's why you're getting those results. A bypass I often use is to capture the mouse cursor position in order to add some salt to the Math.random results:
Math.random = (function(rand) {
    var salt = 0;
    document.addEventListener('mousemove', function(event) {
        salt = event.pageX * event.pageY;
    });
    return function() { return (rand() + (1 / (1 + salt))) % 1; };
})(Math.random);
It's not completely random, but a bit more so ;)
A better solution is probably not to randomly pick points and paint them black, but to go through each and every point, decide what the odds are that it should be filled, and then fill accordingly. (That is, if you want on average a 20% chance of a point being filled, generate your random number r and fill when r < 0.2.) I've seen a Life simulator in WebGL and that's roughly what it does to initialize, IIRC.
Edit: Here's another reason to consider alternate methods of painting. Randomly selecting pixels might end up in less work and fewer invocations of your random number generator, which might be a good thing, depending upon what you want. As it is, you have chosen a way where at most some percentage of your pixels will be filled. If you had kept track of the pixels being filled, and chosen to fill another pixel whenever one was already filled, essentially all you'd be doing is shuffling an exact percentage of black pixels among your white pixels. Do it my way, and the percentage of pixels selected will follow a binomial distribution: sometimes the percentage filled will be a little more, sometimes a little less. The set of all shufflings is a strict subset of the possibilities generated by this kind of picking (which, strictly speaking, also contains all possibilities for painting the board, just with astronomically low odds of getting most of them). Simply put, randomly choosing for every pixel allows more variance.
Then again, I could modify the shuffle algorithm to pick a percentage of pixels based upon numbers generated from a binomial probability distribution function with a defined expected/mean value instead of the expected/mean value itself, and I honestly don't know that it'd be any different--at least theoretically--than running the odds for every pixel with the expected/mean value itself. There's a lot that could be done.
console.log(window.crypto.getRandomValues(new Uint8Array(32))); // returns 32 random bytes
This returns random bytes with cryptographic strength: https://developer.mozilla.org/en/docs/Web/API/Crypto/getRandomValues
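If you want a drop-in replacement for Math.random() built on top of it, one common sketch is to map a random 32-bit integer onto [0, 1):

function cryptoRandom() {
    var buf = new Uint32Array(1);
    window.crypto.getRandomValues(buf);
    return buf[0] / 4294967296; // divide by 2^32 so the result lies in [0, 1)
}
console.log(cryptoRandom());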
You can try
JavaScript Crypto Library (BSD license). It is supposed to have a good random number generator. See an example of usage here.
Stanford JavaScript Crypto Library (BSD or GPL license). See documentation for random numbers.
For a discussion of strength of Math.random(), see this question.
The implementation of Math.random is probably based on a linear congruential generator, one weakness of which is that a random number depends on the earlier value, producing predictable patterns like this, depending on the choice of the constants in the algorithm. A famous example of the effect of a poor choice of constants can be seen in RANDU.
The Mersenne Twister random number generator does not have this weakness. You can find an implementation of MT in JavaScript for example here: https://gist.github.com/banksean/300494
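Usage is roughly as follows, assuming the gist exposes a MersenneTwister constructor with a random() method (check the gist for the exact API):

var mt = new MersenneTwister(12345); // a fixed seed gives a reproducible sequence
console.log(mt.random());            // float in [0, 1), like Math.random()
console.log(mt.random());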
Update: Seeing your code, you have a problem in the code that renders the grid. This line:
var position = (y * 10) + x;
Should be:
var position = (y * grid_x) + x;
With this fix there is no discernible pattern.
You can use part of a SHA-256 hash of a timestamp (including sub-millisecond precision):
console.log(window.performance.now()); // high-resolution timestamp in milliseconds, with sub-millisecond precision
This can be encoded as a string,
and then hashed using this library: http://geraintluff.github.io/sha256/
salt = parseInt(sha256(previous_salt_string).substring(0, 12), 16);
// a 48-bit number, safely below 2^53-1
Then, using the function from nfroidure's answer above, write a gen_salt function that uses the SHA-256 hash, and call gen_salt from the event listener.
You can use sha256(previous_salt) + mouse coordinates, as a string, to get a randomized hash.
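Putting those pieces together, a sketch (assuming the sha256() function from the library linked above, which returns a hex string):

var previousSalt = String(window.performance.now());

function genSalt() {
    // hash the previous salt plus a fresh timestamp, then keep 48 bits, which fits safely in a JS number
    previousSalt = sha256(previousSalt + window.performance.now());
    return parseInt(previousSalt.substring(0, 12), 16);
}

console.log(genSalt());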

Categories

Resources