JavaScript Requiring More Time Than Expected

I'm attempting to generate a set of values every millisecond.
By using window.performance.now(), I've determined that 1000 points (1 second worth of data) requires approximately 1 millisecond worth of processing time.
So... why is a log statement generated every 3-ish seconds rather than every 1 second when my condition for generating the statement is that I have generated 1000 points?
The code is included below. And here is a link to jsfiddle: http://jsfiddle.net/MWadX/421/
var c = 0;
var m = 0;
var t = 0;
var x = 0;
var y = 0;

window.setInterval(function()
{
    var e;
    var s;
    if (c === 0)
    {
        m = Date.now();
    }
    s = window.performance.now();
    x += Math.random();
    y += Math.random();
    c++;
    e = window.performance.now();
    t += e - s;
    if (c !== 1000)
    {
        return;
    }
    console.log(t.toFixed(0).toString() + " milliseconds");
    console.log((Date.now() - m).toFixed(0).toString() + " milliseconds");
    c = 0;
    m = 0;
    t = 0;
    x = 0;
    y = 0;
}, 1);

setInterval, setTimeout, and Minimum Timeouts
According to the Mozilla Developer Network (MDN), setInterval and setTimeout have a minimum timeout. The exact value varies somewhat between browsers, but the HTML5 spec specifies a minimum of 4 ms, and browsers released after 2010 generally respect it. If you pass a lower timeout, it is internally clamped to the minimum. That is what you are seeing: with each of your 1000 callbacks clamped to a few milliseconds, 1000 points take roughly 3-4 seconds rather than 1.
Inactive Tabs
In inactive (background) tabs, the timeout is throttled even further, to a minimum of 1000 ms.
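Given that clamping, one workaround is to stop aiming for a 1 ms tick and instead generate however many points are due each time the callback does fire. A minimal sketch (my illustration, not part of the original answer):

var generated = 0;
var startTime = performance.now();

setInterval(function () {
    // How many points should exist by now, at one point per millisecond?
    var due = Math.floor(performance.now() - startTime);
    while (generated < due) {
        var x = Math.random(); // stand-in for the real per-point work
        generated++;
    }
    if (generated >= 1000) {
        console.log((performance.now() - startTime).toFixed(0) + " ms for " + generated + " points");
        generated = 0;
        startTime = performance.now();
    }
}, 4); // 4 ms is the realistic floor anyway

Throughput stays at roughly 1000 points per second even though the timer cannot fire every millisecond.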

Related

lagrange algorithm in Javascript

I wrote a JavaScript version of the Lagrange interpolation algorithm, but it keeps going wrong when I run it, and I don't know why.
I use it to calculate time.
When I pass cSeconds as a variable, it sometimes returns a negative value, which is obviously wrong...
function LagrangeForCat(cSeconds) {
    var y = [2592000,7776000,15552000,31104000,93312000,155520000,279936000,404352000,528768000,622080000,715392000,870912000,995328000,1119744000,1244160000,1368576000,1492992000,1617408000,1741824000,1866240000,1990656000,2115072000,2239488000,2363904000,2488320000,2612736000,2737152000,2861568000,2985984000,3110400000,3234816000,3359232000,3483648000,3608064000];
    var x = [604800,1209600,1814400,2592000,5184000,7776000,15552000,23328000,31104000,46656000,62208000,93312000,124416000,155520000,186624000,217728000,248832000,279936000,311040000,342144000,373248000,404352000,435456000,466560000,497664000,528768000,559872000,590976000,622080000,653184000,684288000,715392000,746496000,777600000];
    var l = 0.0;
    for (var j = 0; j < 34; j++) {
        var s = 1.0;
        for (var i = 0; i < 34; i++) {
            if (i != j)
                s = s * ((cSeconds - x[i]) / (x[j] - x[i]));
        }
        l = l + s * y[j];
    }
    var result = l / (24 * 60 * 60);
    var Days = Math.floor(result);
    // get float seconds data
    var littleData = String(result).split(".")[1];
    var floatData = parseFloat("0." + littleData);
    var second = floatData * 60 * 60 * 24;
    var hours = Math.floor(second / (60 * 60));
    var minutes = Math.floor(second % 3600 / 60);
    var seconds = Math.floor(second % 3600) % 60;
    var returnData = { days: Days, hours: hours + ':' + minutes + ':' + seconds };
    return returnData;
}
I don't believe the issue is with your code but with the data set.
I tried a few things; for instance, if cSeconds equals one of the x values, you get the correct result (I could check that it was equal to the matching y value).
I put all the data in OpenOffice and drew the graph: it looked like the square root function but more extreme (the 'straight' part looks very straight). Then I remembered that when you interpolate, you usually get a polynomial that passes through the points you want but can oscillate wildly between them.
To test my theory I modified the algorithm to control at which x/y index to start and tried for all the values:
for (let i = 0; i < 35; ++i) {
    LagrangeForCat(63119321, i, 34)
}
Together with a console.log inside LagrangeForCat it gives me the interpolated y value if I use all the x/y arrays (i=0), if I ignore the first x/y point (i=1), the first two (i=2), ...
00-34 -6850462776.278063
01-34 549996977.0003194
02-34 718950902.7592317
03-34 723883771.1443908
04-34 723161627.795225
05-34 721857113.1756063
06-34 721134873.0889213
07-34 720845478.4754647
08-34 720897871.7910147
09-34 721241470.2886044
10-34 722280314.1033486
11-34 750141284.0070543
12-34 750141262.289736
13-34 750141431.2562406
14-34 750141089.6980047
15-34 750141668.8768387
16-34 750142353.3267975
17-34 750141039.138794
18-34 750141836.251831
19-34 750138039.6240234
20-34 750141696.7529297
21-34 750141120.300293
22-34 750141960.4248047
23-34 750140874.0966797
24-34 750141337.5
25-34 750141237.4694824
26-34 750141289.2150879
27-34 750141282.5408936
28-34 750141284.2094421
29-34 750141283.987999
30-34 750141284.0002298
31-34 750141284.0000689
32-34 750141283.9999985
33-34 3608064000
34-34 0
Exclude 33-34 and 34-34 (there's just not enough data to interpolate).
For the example x=63119321, you'd expect y to be between 715392000 and 870912000. You can see that if you ignore the first 2-3 values the interpolation is "believable"; if you ignore more values, you interpolate based off the very straight part of the curve (see how consistent the interpolation is from 11-34 onward).
I used to work on a project where interpolation was needed; to avoid those pathological cases we opted for linear interpolation, trading accuracy for robustness (and we could generate all the x/y points we wanted). In your case I'd try a smaller set, for instance only two values smaller than cSeconds and two greater, like this:
function LagrangeForCat(cSeconds) {
    var x = [...];
    var y = [...];
    let begin = 0,
        end = 34
    for (let i = 0; i < 34; ++i) {
        if (cSeconds < x[i]) {
            begin = (i < 3) ? 0 : i - 2
            // clamp to the array so we keep (up to) two points on each side
            end = (i + 2 > x.length) ? x.length : i + 2
            break
        }
    }
    let result = 0.0;
    for (let i = begin; i < end; ++i) {
        let term = y[i] / (24 * 60 * 60)
        for (let j = begin; j < end; ++j) {
            if (i != j)
                term *= (cSeconds - x[j]) / (x[i] - x[j]);
        }
        result += term
    }
    var Days = Math.floor(result);
    // I didn't change the rest of the function (didn't even look at it)
}
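Since linear interpolation is mentioned above as the safer option, here is a minimal sketch of that alternative (my illustration, not part of the original answer; it assumes x is sorted ascending and y holds the matching outputs):

// Hypothetical helper: find the bracketing pair and interpolate
// linearly between them. Values beyond the last point are clamped.
function linearForCat(cSeconds, x, y) {
    for (var i = 1; i < x.length; i++) {
        if (cSeconds <= x[i]) {
            var t = (cSeconds - x[i - 1]) / (x[i] - x[i - 1]);
            return y[i - 1] + t * (y[i] - y[i - 1]); // in seconds, like y
        }
    }
    return y[y.length - 1];
}

Unlike the degree-33 polynomial, this can never oscillate or go negative between the data points.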

Why is precomputing sin(x) *slower* than using Math.sin() in Javascript?

I've found what appears to be an interesting anomaly in JavaScript, centring on my attempts to speed up trigonometric transformation calculations by precomputing sin(x) and cos(x) and simply referencing the precomputed values.
Intuitively, one would expect pre-computation to be faster than evaluating the Math.sin() and Math.cos() functions each time. Especially if your application design is going to use only a restricted set of values for the argument of the trig functions (in my case, integer degrees in the interval [0°, 360°), which is sufficient for my purposes here).
So, I ran a little test. After pre-computing the values of sin(x) and cos(x), storing them in 360-element arrays, I wrote a short test function, activated by a button in a simple test HTML page, to compare the speed of the two approaches. One loop simply multiplies a value by the pre-computed array element value, whilst the other loop multiplies a value by Math.sin().
My expectation was that the pre-computed loop would be noticeably faster than the loop involving a function call to a trig function. To my surprise, the pre-computed loop was slower.
Here's the test function I wrote:
function MyTest()
{
    var ITERATION_COUNT = 1000000;
    var angle = Math.floor(Math.random() * 360);
    var test1 = 200 * sinArray[angle];
    var test2 = 200 * cosArray[angle];
    var ref = document.getElementById("Output1");
    var outData = "Test 1 : " + test1.toString().trim() + "<br><br>";
    outData += "Test 2 : " + test2.toString().trim() + "<br><br>";

    var time1 = new Date(); // Time at the start of the test
    for (var i = 0; i < ITERATION_COUNT; i++)
    {
        var angle = Math.floor(Math.random() * 360);
        var test3 = (200 * sinArray[angle]);
    } // End i loop
    var time2 = new Date();

    // This somewhat unwieldy procedure is how we find out the elapsed time ...
    var msec1 = (time1.getUTCSeconds() * 1000) + time1.getUTCMilliseconds();
    var msec2 = (time2.getUTCSeconds() * 1000) + time2.getUTCMilliseconds();
    var elapsed1 = msec2 - msec1;
    outData += "Test 3 : Elapsed time is " + elapsed1.toString().trim() + " milliseconds<br><br>";

    // Now a comparison test with the same number of sin() calls ...
    var time1 = new Date();
    for (var i = 0; i < ITERATION_COUNT; i++)
    {
        var angle = Math.floor(Math.random() * 360);
        var test3 = (200 * Math.sin((Math.PI * angle) / 180));
    } // End i loop
    var time2 = new Date();
    var msec1 = (time1.getUTCSeconds() * 1000) + time1.getUTCMilliseconds();
    var msec2 = (time2.getUTCSeconds() * 1000) + time2.getUTCMilliseconds();
    var elapsed2 = msec2 - msec1;
    outData += "Test 4 : Elapsed time is " + elapsed2.toString().trim() + " milliseconds<br><br>";

    ref.innerHTML = outData;
} // End function
My motivation for the above was that multiplying by a pre-computed value fetched from an array should be faster than invoking a trig function, but the results I obtain are interestingly anomalous.
Some sample runs yield the following results (Test 3 is the pre-computed elapsed time, Test 4 the Math.sin() elapsed time):
Run 1:
Test 3 : Elapsed time is 153 milliseconds
Test 4 : Elapsed time is 67 milliseconds
Run 2:
Test 3 : Elapsed time is 167 milliseconds
Test 4 : Elapsed time is 69 milliseconds
Run 3 :
Test 3 : Elapsed time is 265 milliseconds
Test 4 : Elapsed time is 107 milliseconds
Run 4:
Test 3 : Elapsed time is 162 milliseconds
Test 4 : Elapsed time is 69 milliseconds
Why is invoking a trig function twice as fast as referencing a precomputed value from an array, when the precomputed approach, intuitively at least, should be faster by an appreciable margin? All the more so because I'm using integer arguments to index the array in the precomputed loop, whilst the function call loop also includes an extra calculation to convert from degrees to radians.
There's something interesting happening here, but at the moment, I'm not sure what. Usually, array accesses to precomputed data are a lot faster than calling intricate trig functions (or at least, they were back in the days when I coded similar code in assembler!), but JavaScript seems to turn this on its head. The only reason I can think of, is that JavaScript adds a lot of overhead to array accesses behind the scenes, but if this were so, this would impact upon a lot of other code, that appears to run at perfectly reasonable speed.
So, what exactly is going on here?
I'm running this code in Google Chrome:
Version 60.0.3112.101 (Official Build) (64-bit)
running on Windows 7 64-bit. I haven't yet tried it in Firefox, to see if the same anomalous results appear there, but that's next on the to-do list.
Anyone with a deep understanding of the inner workings of JavaScript engines, please help!
The optimiser has skewed the results.
Two identical test functions... well, almost.
Run them in a benchmark and the results are surprising.
{
    func : function () {
        var i, a, b;
        var D2R = Math.PI / 180; // degrees-to-radians factor (the original had 180 / Math.PI, inverted; both tests shared the error, so the timing comparison still held)
        b = 0;
        for (i = 0; i < count; i++) {
            // single test start
            a = (Math.random() * 360) | 0;
            b += Math.sin(a * D2R);
            // single test end
        }
    },
    name : "summed",
}, {
    func : function () {
        var i, a, b;
        var D2R = Math.PI / 180;
        b = 0;
        for (i = 0; i < count; i++) {
            // single test start
            a = (Math.random() * 360) | 0;
            b = Math.sin(a * D2R); // result overwritten every iteration
            // single test end
        }
    },
    name : "unsummed",
},
The results
=======================================
Performance test. : Optimiser check.
Use strict....... : false
Duplicates....... : 4
Samples per cycle : 100
Tests per Sample. : 10000
---------------------------------------------
Test : 'summed'
Calibrated Mean : 173µs ±1µs (*1) 11160 samples 57,803,468 TPS
---------------------------------------------
Test : 'unsummed'
Calibrated Mean : 0µs ±1µs (*1) 11063 samples Invalid TPS
----------------------------------------
Calibration zero : 140µs ±0µs (*)
(*) Error rate approximation does not represent the variance.
(*1) For calibrated results Error rate is Test Error + Calibration Error.
TPS is Tests per second as a calculated value not actual test per second.
The benchmarker barely registered any time for the un-summed test (I had to force it to complete).
The optimiser knows that only the last result of the un-summed loop is needed; all the other results are unused, so why compute them?
Benchmarking in JavaScript is full of catches. Use a quality benchmarker, and know what the optimiser can do.
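As an illustration of that advice (my sketch, not part of the original answer): make every iteration's result observable, so the optimiser cannot eliminate the work you think you are measuring.

function benchSin(count) {
    var sum = 0;
    var D2R = Math.PI / 180;
    for (var i = 0; i < count; i++) {
        var a = (Math.random() * 360) | 0;
        sum += Math.sin(a * D2R); // every iteration feeds the result
    }
    return sum; // returning the sum keeps the loop alive
}

console.log(benchSin(1000000)); // consume the value so it cannot be elided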
Sin and lookup test.
Testing array lookup against sin. To be fair to sin, I do not do a degrees-to-radians conversion.
tests : [{
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += a;
        }
    },
    name : "Calibration",
}, {
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += array[a];
        }
    },
    name : "lookup",
}, {
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += Math.sin(a);
        }
    },
    name : "Sin",
}],
And the results
=======================================
Performance test. : Lookup compare to calculate sin.
Use strict....... : false
Data view........ : false
Duplicates....... : 4
Cycles........... : 1055
Samples per cycle : 100
Tests per Sample. : 10000
---------------------------------------------
Test : 'Calibration'
Calibrator Mean : 107µs ±1µs (*) 34921 samples
---------------------------------------------
Test : 'lookup'
Calibrated Mean : 6µs ±1µs (*1) 35342 samples 1,666,666,667TPS
---------------------------------------------
Test : 'Sin'
Calibrated Mean : 169µs ±1µs (*1) 35237 samples 59,171,598TPS
-All ----------------------------------------
Mean : 0.166ms Totals time : 17481.165ms 105500 samples
Calibration zero : 107µs ±1µs (*);
(*) Error rate approximation does not represent the variance.
(*1) For calibrated results Error rate is Test Error + Calibration Error.
TPS is Tests per second as a calculated value not actual test per second.
Again, I had to force completion, as the lookup was too close to the error rate. But the calibrated lookup is almost a perfect match for the clock speed... coincidence? I am not sure.
I believe this to be a benchmark issue on your side.
var countElement = document.getElementById('count');
var result1Element = document.getElementById('result1');
var result2Element = document.getElementById('result2');
var result3Element = document.getElementById('result3');

var floatArray = new Array(360);
var typedArray = new Float64Array(360);
var typedArray2 = new Float32Array(360); // declared but not used below

function init() {
    for (var i = 0; i < 360; i++) {
        floatArray[i] = typedArray[i] = Math.sin(i * Math.PI / 180);
    }
    countElement.addEventListener('change', reset);
    document.querySelector('form').addEventListener('submit', run);
}

function test1(count) {
    var start = Date.now();
    var sum = 0;
    for (var i = 0; i < count; i++) {
        for (var j = 0; j < 360; j++) {
            sum += Math.sin(j * Math.PI / 180);
        }
    }
    var end = Date.now();
    var result1 = "sum=" + sum + "; time=" + (end - start);
    result1Element.textContent = result1;
}

function test2(count) {
    var start = Date.now();
    var sum = 0;
    for (var i = 0; i < count; i++) {
        for (var j = 0; j < 360; j++) {
            sum += floatArray[j];
        }
    }
    var end = Date.now();
    var result2 = "sum=" + sum + "; time=" + (end - start);
    result2Element.textContent = result2;
}

function test3(count) {
    var start = Date.now();
    var sum = 0;
    for (var i = 0; i < count; i++) {
        for (var j = 0; j < 360; j++) {
            sum += typedArray[j];
        }
    }
    var end = Date.now();
    var result3 = "sum=" + sum + "; time=" + (end - start);
    result3Element.textContent = result3;
}

function reset() {
    result1Element.textContent = '';
    result2Element.textContent = '';
    result3Element.textContent = '';
}

function run(ev) {
    ev.preventDefault();
    reset();
    var count = countElement.valueAsNumber;
    setTimeout(test1, 0, count);
    setTimeout(test2, 0, count);
    setTimeout(test3, 0, count);
}

init();
<form>
    <input id="count" type="number" min="1" value="100000">
    <input id="run" type="submit" value="Run">
</form>
<dl>
    <dt><tt>Math.sin()</tt></dt>
    <dd>Result: <span id="result1"></span></dd>
    <dt><tt>Array()</tt></dt>
    <dd>Result: <span id="result2"></span></dd>
    <dt><tt>Float64Array()</tt></dt>
    <dd>Result: <span id="result3"></span></dd>
</dl>
In my testing, an array is unquestionably faster than an uncached loop, and a typed array is marginally faster than that. Typed arrays avoid the need for boxing and unboxing the number between the array and the computation. The results I see are:
Math.sin(): 652ms
Array(): 41ms
Float64Array(): 37ms
Note that I sum and report the results, to prevent the JIT from optimizing out the unused pure computation. I also use Date.now() rather than assembling seconds + milliseconds by hand.
I agree that the issue may be down to how you have initialised the pre-computed array.
Jsbench shows the precomputed array to be about 13% faster than using Math.sin():
Precomputed array: 86% (fastest 1480 ms)
Math.sin(): 100% (1718 ms)
Precomputed Typed Array: 87% (1493 ms)
Hope this helps!

Ease time between firing specific number of timeouts in a specific period of time

It's a math problem, of sorts. I want to fire a specific number of setTimeout calls (the number is based on an array's length) within a specific period of time (say, 5 seconds).
The first setTimeout should fire at 0 s and the last at 5 s. The timeouts in between should follow an ease-in effect, so that successive timeouts fire closer and closer together.
There's an example which illustrates exactly what I want to achieve.
I'm struggling around this line:
next += timePeriod/3.52/(i+1);
which works almost perfectly in the demo example (for any timePeriod), but obviously breaks for a different letters.length, as I have used the static number 3.52.
How do I calculate next?
var letters = [
    'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T'
];
var div = $('#container');
var timePeriod = 5000; // 5 seconds
var perLetter = timePeriod / (letters.length - 1); // gives equal time between letters
var next = 0;

for (var i = 0; i < letters.length; i++) {
    setTimeout(function(letter) {
        //div.append('<span class="letter">' + letter + '</span>');
        // Used "|" instead of the letter, for better readability:
        div.append('<span class="letter">|</span>');
    }, next, letters[i]);
    // Can't find the logic here:
    next += timePeriod / 3.52 / (i + 1);
}

///////////////// FOR DEMO: ///////////////
var sec = timePeriod / 1000;
var secondsInterval = setInterval(seconds, 1000);
var demoInterval = setInterval(function() {
    sec >= 0 || clearInterval(demoInterval);
    div.append('\'');
}, 30);

function seconds() {
    sec || clearInterval(secondsInterval);
    $('#sec').text(sec-- || 'DONE');
}
seconds();
seconds();
.letter {
    color: red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<span id=container></span>
<span id=sec class=letter></span>
var steps = letters.length;
var target = timePeriod;

function easeOutQuad(t, b, c, d) {
    t /= d;
    return -c * t * (t - 2) + b;
}

var arrayOfTimeouts = new Array(steps);
var n;
var prev = 0;
for (var i = 1; i <= steps; i++) {
    n = easeOutQuad(i, 0.0, target, steps);
    arrayOfTimeouts[i - 1] = n - prev;
    prev = n;
}
This one should work with any input value.
fiddle
Note that the graph appears to be slightly too fast but I believe that discrepancy to be a product of timing imperfections, as the sum of my array equals the timePeriod exactly.
more on easing equations
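To tie this back to the question, here is an illustrative way to consume the array (my sketch, not part of the original answer; it assumes the question's letters and div are in scope): accumulate the per-step delays into absolute fire times and schedule one setTimeout per letter.

var when = 0;
for (var k = 0; k < letters.length; k++) {
    when += arrayOfTimeouts[k]; // delay between step k-1 and step k
    setTimeout(function(letter) {
        div.append('<span class="letter">' + letter + '</span>');
    }, when, letters[k]); // pass the letter as an argument to avoid the closure-in-loop trap
}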
Here's a solution based on a geometric series. It's a bit goofy but it works. It generates an array with your timeout values.
Steps = size of your array.
Target = the total time.
var steps = 50;
var target = 5000;
var fraction = 1.5 + steps / 7;
var ratio = (fraction - 1) / fraction;
var n = target / fraction;
var sum = 0;
var arrayOfTimeouts = new Array(steps);
for (var i = 0; i < steps; i++) {
    sum += n;
    arrayOfTimeouts[i] = n;
    n *= ratio;
}
console.log(arrayOfTimeouts, sum);
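One caveat (my observation, not part of the original answer): a finite geometric series never sums exactly to target, so sum falls slightly short of the requested total. If that matters, the array can be rescaled after the loop:

// Rescale the delays so they sum to exactly `target`.
var scale = target / sum;
for (var i = 0; i < steps; i++) {
    arrayOfTimeouts[i] *= scale;
}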

Using extra var improves performance in javascript / under Chrome 50.0.2661.102 (64-bit) Mac

Disclaimer: I'm aware that it might look like premature optimization; however, the question is interesting by itself, so thank you for your patience.
I have two jsfiddles used to measure performance of different kinds of scenarios, and I've stumbled upon one very interesting thing:
https://jsfiddle.net/lu4x/k74oq0de/51/
function timeout() {
    var x = 0; // <------------------------------ Using this variable
    for (let iteration = 0; iteration < 1; iteration++) {
        let count = 0;
        let timeA = performance.now();
        for (let i = 0; i < map.length; i++) {
            x = map[i]; // <------------------------------ and this assign
            count += x;
        }
        let timeB = performance.now();
        let delta = timeB - timeA;
        average = avg(average, run, delta);
        minimum = Math.min(minimum, delta);
        if (++run % 100 === 0) log([run, minimum, average]);
    }
    if (iterate) setTimeout(timeout, 0);
}
Yields this output on my machine:
1000,19.204999999999927,22.00017499999995
900,19.204999999999927,22.074283333333327
800,19.204999999999927,22.13556875000003
700,19.204999999999927,22.222578571428635
600,19.204999999999927,22.271575000000077
500,19.24000000000069,22.486130000000085
400,19.24000000000069,22.41451250000006
300,19.24000000000069,22.209600000000002
200,19.280000000000655,22.322875000000046
100,19.38000000000011,22.719250000000034
While the following fiddle:
https://jsfiddle.net/lu4x/k74oq0de/52/
function timeout() {
    for (let iteration = 0; iteration < 1; iteration++) {
        let count = 0;
        let timeA = performance.now();
        for (let i = 0; i < map.length; i++) {
            count += map[i]; // <--------------------- Not using anything
        }
        let timeB = performance.now();
        let delta = timeB - timeA;
        average = avg(average, run, delta);
        minimum = Math.min(minimum, delta);
        if (++run % 100 === 0) log([run, minimum, average]);
    }
    if (iterate) setTimeout(timeout, 0);
}
Yields the following output:
1000,21.125,24.086139999999997
900,21.125,24.086711111111097
800,21.125,24.068375000000003
700,21.125,24.05558571428569
600,21.165000000000873,24.102566666666636
500,21.165000000000873,24.129959999999983
400,21.165000000000873,24.10507499999996
300,21.170000000000073,24.021016666666657
200,21.170000000000073,24.03779999999997
100,21.300000000000182,24.037549999999992
I did some other testing; the difference, 21.125 - 19.204999999999927 ≈ 1.92 milliseconds, is a dramatic change for this kind of test. The penalty is close to the cost of one dereferencing operation (i.e. x.y).
Why do things happen this way?
P.S.
If you put var x inside the loop, it is as slow as not using var x at all. I'm baffled...
==============
Edit
Here is the same performance test using benchmark.js
https://jsfiddle.net/lu4x/k74oq0de/60/
Benchmark.js confirms what the earlier tests showed; the difference is significant.
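For reference, a minimal sketch of what such a Benchmark.js comparison might look like (illustrative only; the fiddle's actual setup may differ, and the map test data here is an assumption):

var map = new Array(100000).fill(0).map(Math.random); // assumed test data

new Benchmark.Suite()
    .add('with extra var', function() {
        var x = 0, count = 0;
        for (var i = 0; i < map.length; i++) {
            x = map[i];
            count += x;
        }
    })
    .add('without extra var', function() {
        var count = 0;
        for (var i = 0; i < map.length; i++) {
            count += map[i];
        }
    })
    .on('cycle', function(event) {
        console.log(String(event.target)); // e.g. "with extra var x 1,234 ops/sec ..."
    })
    .run({ async: true });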

Increment from zero to number in a set time

I am trying to increment from 0 to a number (can be any number from 2000 to 12345600000) within a certain duration (1000 ms, 5000 ms, etc). I have created the following:
http://jsfiddle.net/fmpeyton/c9u2sky8/
var counterElement = $(".lg-number");
var counterTotal = parseInt(counterElement.text()/*.replace(/,/g, "")*/);
var duration = 1000;
var animationInterval = duration / counterTotal;

counterElement.text("0");
var numberIncrementer = setInterval(function() {
    var currentCounterNumber = parseInt(counterElement.text()/*.replace(/,/g, "")*/);
    if (currentCounterNumber < counterTotal) {
        currentCounterNumber += Math.ceil(counterTotal / duration);
        // convert number back to comma format
        // currentCounterNumber = addCommas(currentCounterNumber);
        counterElement.text(currentCounterNumber);
    } else {
        counterElement.text(counterTotal);
        clearInterval(numberIncrementer);
    }
    console.log("run incrementer");
}, animationInterval);

function addCommas(number) {
    for (var i = number.length - 3; i > 0; i -= 3)
        number = number.slice(0, i) + ',' + number.slice(i);
    return number;
}
And this somewhat works, but it does not respect the duration; e.g. a target of 1,000 and a target of 1,000,000,000 take different amounts of time to reach the destination number.
How can I increment from zero to a number in a specific time frame?
As #Mouser pointed out, the issue is that animationInterval can't go below the browser's minimum timer resolution (the actual threshold varies by browser and platform). Instead of varying the interval, vary the increment to the counter:
var counterElement = $(".lg-number");
var counterTotal = parseInt(counterElement.text()/*.replace(/,/g, "")*/);
var duration = 1000;
var animationInterval = 10;
var startTime = Date.now();

counterElement.text("0");
var numberIncrementer = setInterval(function() {
    var elapsed = Date.now() - startTime;
    var currentCounterNumber = Math.ceil(elapsed / duration * counterTotal);
    if (currentCounterNumber < counterTotal) {
        counterElement.text(currentCounterNumber);
    } else {
        counterElement.text(counterTotal);
        clearInterval(numberIncrementer);
    }
    console.log("run incrementer");
}, animationInterval);
I played around with your fiddle and found that the delay needs to be higher. At 8 ms or 16 ms, it is accurate enough to handle a second, but not accurate enough to handle half a second. From experimenting, a delay of 64 ms seems small enough to look like smooth incrementing, yet big enough to stay accurate.
The difference is that the current number is calculated from the elapsed progress rather than directly incremented.
var counterElement = $(".lg-number");
var counterTotal = parseInt(counterElement.data('total'));
var interval = 0;
var duration = parseInt(counterElement.data('duration'));
var delay = 64;

var numberIncrementer = setInterval(function() {
    var currentCounterNumber = 0;
    interval += delay;
    if (interval <= duration) {
        var progress = interval / duration;
        currentCounterNumber = Math.round(progress * counterTotal);
    } else {
        currentCounterNumber = counterTotal;
        clearInterval(numberIncrementer);
    }
    counterElement.text(currentCounterNumber);
}, delay);
http://jsfiddle.net/c9u2sky8/5/
Also: Javascript timers are not perfectly accurate. But this should be accurate enough for UI use cases.
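As a further illustration of the same progress-based idea (my sketch, not from either answer), requestAnimationFrame can stand in for setInterval, letting the browser choose the tick while the counter still lands exactly on the target at the deadline:

// Progress-based counter driven by requestAnimationFrame.
function animateCount(element, target, duration) {
    var start = performance.now();
    function frame(now) {
        var progress = Math.min((now - start) / duration, 1);
        element.textContent = Math.round(progress * target);
        if (progress < 1) requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
}

// e.g. animateCount(document.querySelector(".lg-number"), 12345600000, 5000);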
