Accessing local variable doesn't improve performance - javascript
**Clarification**: I'm not looking for the fastest code or for micro-optimization tips. I would like to understand why some code that does not seem optimized or optimal in fact runs consistently faster in general.
The short version
Why is this code:
var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
More performant than this one?
var index = Math.floor(ref_index) * 4;
The long version
This week, the author of Impact JS published an article about a rendering issue:
http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard
In the article there was the source of a function that scales an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would take less time at load. But after testing, my results were most of the time worse than the original function.
Guessing that the JavaScript engine was doing some smart optimization, I tried to understand a bit more about what was going on, so I ran a bunch of tests. But my results are quite confusing and I need some help to understand what's going on.
I have a test page here:
http://www.mx981.com/stuff/resize_bench/test.html
jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup
To start the test, click the picture and the results will appear in the console.
There are three different versions:
The original code:
for( var y = 0; y < heightScaled; y++ ) {
    for( var x = 0; x < widthScaled; x++ ) {
        var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
        var indexScaled = (y * widthScaled + x) * 4;
        scaledPixels.data[ indexScaled ] = origPixels.data[ index ];
        scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ];
        scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ];
        scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ];
    }
}
jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance
One of my attempts to optimize it:
var ref_index = 0;
var ref_indexScaled = 0;
var ref_step = 1 / scale;
for( var y = 0; y < heightScaled; y++ ) {
    for( var x = 0; x < widthScaled; x++ ) {
        var index = Math.floor(ref_index) * 4;
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
        ref_index += ref_step;
    }
}
jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance
The same optimized code, but recalculating the index variable each time (Hybrid):
var ref_index = 0;
var ref_indexScaled = 0;
var ref_step = 1 / scale;
for( var y = 0; y < heightScaled; y++ ) {
    for( var x = 0; x < widthScaled; x++ ) {
        var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
        scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
        ref_index += ref_step;
    }
}
jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance
The only difference between the last two versions is the calculation of the 'index' variable.
And to my surprise, the optimized version is slower in most browsers (except Opera).
Results of personal testing (not the jsPerf tests):
Opera
Original: 8668ms
Optimized: 932ms
Hybrid: 8696ms
Chrome
Original: 139ms
Optimized: 145ms
Hybrid: 136ms
Safari
Original: 433ms
Optimized: 853ms
Hybrid: 451ms
Firefox
Original: 343ms
Optimized: 422ms
Hybrid: 350ms
After digging around, it seems the usual good practice is to access mainly local variables, because of the cost of scope lookups. Since the optimized version only reads one local variable, it should be faster than the Hybrid code, which accesses multiple variables and objects in addition to the various operations involved.
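(For context, this is the kind of local-variable caching that advice usually refers to; the snippet is purely illustrative and is not part of the tested functions. The names n, values and out are assumptions.)

// Repeated lookups: `Math` is resolved and its `floor` property is read on
// every single iteration.
for (var i = 0; i < n; i++) {
    out[i] = Math.floor(values[i]) * 4;
}

// Cached lookup: the function reference is hoisted into a local once.
var floor = Math.floor;
for (var j = 0; j < n; j++) {
    out[j] = floor(values[j]) * 4;
}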
So why is the "optimized" version slower?
I thought it might be because some JavaScript engines don't optimize the Optimized version because it is not hot enough, but after using --trace-opt in Chrome, it seems all versions are properly compiled by V8.
At this point I am a bit clueless and wonder if somebody would know what is going on?
I did also some more test cases in this page:
http://www.mx981.com/stuff/resize_bench/index.html
As silly as it sounds, the Math.whatever() calls might be tricky for the JS engines to optimize and inline. Whenever possible, prefer an arithmetic operation (not a function call) to achieve the same result.
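For instance (a quick illustration of that swap, not taken from the test page): for a non-negative index whose scaled value fits in a 32-bit integer, the floor-and-multiply can be replaced by a truncating shift.

var x = 123.75;                        // fractional source index
var viaMath  = Math.floor(x) * 4;      // 492: property lookup plus a function call
var viaShift = x << 2;                 // 492: ToInt32 truncation, then shift by 2
console.log(viaMath === viaShift);     // true for 0 <= x < 2^29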
Adding the following 4th test to http://www.mx981.com/stuff/resize_bench/test.html
// Test 4
console.log('- p01 -');
start = new Date().getTime();
for (i = 0; i < nbloop; i++) {
    var index = 0;
    var ref_indexScaled = 0;
    var ref_step = 1 / scale;
    for( var y = 0; y < heightScaled; y++ ) {
        for( var x = 0; x < widthScaled; x++ ) {
            var z = index << 2;
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ z++ ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ z++ ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ z++ ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ z++ ];
            index += ref_step;
        }
    }
}
end = new Date().getTime();
console.log((end - start) + 'ms');
Yields the following numbers in Opera Next:
Original - 2311ms
refactor - 112ms
hybrid - 2371ms
p01 - 112ms
Using some basic techniques you can highly optimize performance:
When running nested loops, use:
while (i--) {
/* some code here */
}
... where i is a value greater than 0.
Caching / localizing variables appropriately to minimize repeated calculations. For larger calculations this means placing part of the calculation at the right layer of abstraction.
Re-using variables (re-initialization overhead can become a problem for large amounts of data processing). NOTE: This IS a bad programming design principle but a great performance principle!
Reduce property depth. Repeatedly accessing object.property kills performance compared with a plain var that already holds the value (see the sketch right after this list).
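To make the caching and property-depth points concrete, here is a minimal sketch applied to the pixel-copy loop from the question; it only illustrates the hoisting, not the full scaling logic.

// Hoist the repeated property lookups (scaledPixels.data, origPixels.data)
// into locals once, outside the hot loop.
var src = origPixels.data;
var dst = scaledPixels.data;
var i = dst.length;
while (i--) {
    dst[i] = src[i]; // placeholder body; the real loop maps scaled source indices
}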
Using those principles you can achieve better performance. Now, looking from a high level at the article you derived this function from, it was flawed in a few ways. So, to really optimize the full function instead of just the one line you stated:
function resize_Test5( img, scale ) {
    // Takes an image and a scaling factor and returns the scaled image.
    // The original image is drawn into an offscreen canvas of the same size
    // and copied, pixel by pixel, into another offscreen canvas with the
    // new size.
    var widthScaled = img.width * scale;
    var heightScaled = img.height * scale;

    var orig = document.createElement('canvas');
    orig.width = img.width;
    orig.height = img.height;
    var origCtx = orig.getContext('2d');
    origCtx.drawImage(img, 0, 0);
    var origPixels = origCtx.getImageData(0, 0, img.width, img.height);

    var scaled = document.createElement('canvas');
    scaled.width = widthScaled;
    scaled.height = heightScaled;
    var scaledCtx = scaled.getContext('2d');
    var scaledPixels = scaledCtx.getImageData( 0, 0, widthScaled, heightScaled );

    // optimization start
    var old_list = origPixels.data;
    // Write directly into the ImageData buffer: ImageData.data is read-only,
    // so assigning a plain array to it afterwards would be silently ignored.
    var new_list = scaledPixels.data;
    var image_width = img.width;
    var h = heightScaled;
    var w;
    var index_old;
    var index_new;
    var h_scale;
    var pre_index_new;
    while(h--){
        h_scale = Math.floor(h / scale) * image_width;
        pre_index_new = h * widthScaled;
        w = widthScaled; // reset the column counter for every row
        while(w--){
            index_old = (h_scale + Math.floor(w / scale)) * 4;
            index_new = (pre_index_new + w) * 4;
            new_list[ index_new ]     = old_list[ index_old ];
            new_list[ index_new + 1 ] = old_list[ index_old + 1 ];
            new_list[ index_new + 2 ] = old_list[ index_old + 2 ];
            new_list[ index_new + 3 ] = old_list[ index_old + 3 ];
        }
    }
    // optimization stop

    scaledCtx.putImageData( scaledPixels, 0, 0 );
    return scaled;
}
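For reference, a minimal usage sketch (the element id is hypothetical and not from the article; the image must already be loaded, otherwise drawImage inside the function copies nothing):

var img = document.getElementById('sprite');   // hypothetical id of a loaded <img>
var scaledCanvas = resize_Test5(img, 4);       // scale up by 4x
document.body.appendChild(scaledCanvas);       // or draw it onto another canvas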
Related
javascript check if element rect hitting another with same class
I want to put like 30 <img class="anger"> elements with random sizes and random positions inside the <div> container, but with none of the .anger elements hitting one another. Is it possible? This is my code so far:

function loadAngers() {
    var wrp = '#angerContainer'; // the container
    var rectAvatar = $('#picAvatar')[0].getBoundingClientRect(); // rect of user avatar
    var rectWrapper = $(wrp)[0].getBoundingClientRect(); // rect of container
    listCoorditaes = [[ rectAvatar.width, (rectAvatar.left+rectAvatar.right)/2, (rectAvatar.top+rectAvatar.bottom)/2 ]];
    $(wrp).find('.anger').remove();
    for (var i=0; i<listAnger.length; i++) {
        var verb = listAnger[i].replace('assets/img/verb/','').replace('.png','').replace('-',' ');
        var anger = $('<img src="'+listAnger[i]+'" class="anger hvr-'+getRandom(listAnim)+'" data-verb="'+verb+'" style="position:absolute">');
        var paddingX = 100;
        var paddingY = 200;
        var wideX = rectWrapper.width - paddingX;
        var wideY = rectWrapper.height - paddingY - rectAvatar.top;
        var width = Math.round(30 + Math.random() * 70);
        var left;
        var top;
        var x;
        var y;
        var tubrukan;
        var coba = 0;
        do { // find the best coordinate
            tubrukan = false;
            coba++;
            x = Math.round(Math.random() * wideX) + paddingX/2;
            y = Math.round(Math.random() * wideY) + paddingY/2 + rectAvatar.top;
            left = x - width/2;
            top = y - width/2;
            for (var j=0; j<=i; j++) {
                var cekW = listCoorditaes[j][0];
                var cekX = listCoorditaes[j][1];
                var cekY = listCoorditaes[j][2];
                var difX = Math.abs( x - cekX );
                var difY = Math.abs( y - cekY );
                if (difX < cekW && difY < cekW) {
                    tubrukan = true;
                    break;
                }
            }
        } while(tubrukan && coba<3); // as I gave up on the eternal loop, I limit it to 3 tries
        listCoorditaes.push([width,x,y]);
        anger.css('width',width+'px');
        anger.css('left',left);
        anger.css('top',top);
        anger.appendTo(wrp);
    }
}

This is the current result (screenshot in the original post): as we can see, the elements still overlap each other because I limit the loop. If I remove the limit, the browser gets stuck in an endless loop and stops responding. Do you have a better way to achieve this?
UPDATE: My bad, I had set the container height to only 800px, which is why it couldn't contain all those <img> without overlapping, hence the eternal loop. I made it 2000px to see if it works. But the problem is that it still takes an uncertain number of loops to find the best coordinate, so I still limit the loop, and thus overlap still happens several times.
Can't get Lotka-Volterra equations to oscillate stable with math.js
I'm trying to implement a simple Lotka-Volterra system in JavaScript, but get a different result from what I see in academic papers and slides. These are my equations:

sim2.eval("dxdt(x, y) = (2 * x) - (x * y)");
sim2.eval("dydt(x, y) = (-0.25 * y) + (x * y)");

using coefficients a = 2, b = 1, c = 0.25 and d = 1. Yet, my result looks like this (plot in the original post), when I expected a stable oscillation as seen in these PDF slides. Could it be the implementation of ndsolve that causes this? Or a machine error in JavaScript due to floating-point arithmetic?
Disregard, the error was simply using too large an evaluation step (dt = 0.1; it needs to be 0.01 or smaller). The numerical method used is known to have this problem.
For serious purposes use a higher-order method; the minimum is the fixed-step classical Runge-Kutta method. Then you can also use dt = 0.1; it is stable for multiple periods, I tried tfinal = 300 without problems. However, you will see the step size in the graph, as it is visibly piecewise linear. This is much reduced with half the step size, dt = 0.05.

function odesolveRK4(f, x0, dt, tmax) {
    var n = f.size()[0];                        // Number of variables
    var x = x0.clone(), xh = [];                // Current values of variables
    var dxdt = [], k1=[], k2=[], k3=[], k4=[];  // Temporary variables to hold time-derivatives
    var result = [];                            // Contains entire solution
    var nsteps = math.divide(tmax, dt);         // Number of time steps
    dt2 = math.divide(dt, 2);
    dt6 = math.divide(dt, 6);
    for(var i=0; i<nsteps; i++) {
        // compute the 4 stages of the classical order-4 Runge-Kutta method
        k1 = f.map(function(fj) { return fj.apply(null, x.toArray()); });
        xh = math.add(x, math.multiply(k1, dt2));
        k2 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        xh = math.add(x, math.multiply(k2, dt2));
        k3 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        xh = math.add(x, math.multiply(k3, dt));
        k4 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        x = math.add(x, math.multiply(math.add(math.add(k1, k4), math.multiply(math.add(k2, k3), 2)), dt6));
        if( 0 == i % 50 ) console.log("%3d %o %o", i, dt, x.toString());
        result.push(x.clone());
    }
    return math.matrix(result);
}
math.import({odesolveRK4: odesolveRK4});
How to reduce a data graph but keeping the extremes
I have a database that holds a month's worth of datasets in 10-minute intervals (so one dataset every 10 minutes). Now I want to show that data on three graphs: last 24 hours, last 7 days and last 30 days. The data looks like this:

{ "data" : 278, "date" : ISODate("2016-08-31T01:51:05.315Z") }
{ "data" : 627, "date" : ISODate("2016-08-31T01:51:06.361Z") }
{ "data" : 146, "date" : ISODate("2016-08-31T01:51:07.938Z") }
// etc

For the 24h graph I simply output the data for the last 24h, that's easy. For the other graphs I thin the data:

const data = {}; // data from database
let newData = [];
const interval = 7; // for 7 days the interval is 7, for 30 days it's 30
for( let i = 0; i < data.length; i += interval ) {
    newData.push( data[ i ] );
};

This works fine, but extreme events, where the data is 0 or differs greatly from the average of the other values, can be lost depending on what time you query the data. Not thinning out the data, however, results in a large number of data points that are sent over the pipe and have to be processed on the front end. I'd like to avoid that. Now to my question: how can I reduce the data for a 7-day period while keeping extremes in it? What's the most efficient way here?
Additions: in essence I think I'm trying to simplify a graph to reduce points but keep the overall shape (if you look at it from a pure image perspective). Something like an implementation of the Douglas-Peucker algorithm in Node?
As you mention in the comments, the Ramer-Douglas-Peucker (RDP) algorithm is used to process data points in 2D figures, but you want to use it for graph data where X values are fixed. I modified this JavaScript implementation of the algorithm provided by M Oehm to consider only the vertical (Y) distance in the calculations. On the other hand, data smoothing is often suggested to reduce the number of data points in a graph (see this post by csgillespie).
In order to compare the two methods, I made a small test program. The Reset button creates new test data. An algorithm can be selected and applied to obtain a reduced number of points, separated by the specified interval. In the case of the RDP algorithm however, the resulting points are not evenly spaced. To get the same number of points as for the specified interval, I run the calculations iteratively, adjusting the epsilon value each time until the correct number of points is reached.
From my tests, the RDP algorithm gives much better results. The only downside is that the spacing between points varies. I don't think that this can be avoided, given that we want to keep the extreme points which are not evenly distributed in the original data.
Here is the code snippet, which is better seen in Full Page mode:

var svgns = 'http://www.w3.org/2000/svg';
var graph = document.getElementById('graph1');
var grpRawData = document.getElementById('grpRawData');
var grpCalculatedData = document.getElementById('grpCalculatedData');
var btnReset = document.getElementById('btnReset');
var cmbMethod = document.getElementById('cmbMethod');
var btnAddCalculated = document.getElementById('btnAddCalculated');
var btnClearCalculated = document.getElementById('btnClearCalculated');
var data = [];
var calculatedCount = 0;
var colors = ['black', 'red', 'green', 'blue', 'orange', 'purple'];

var getPeriod = function () {
    return parseInt(document.getElementById('txtPeriod').value, 10);
};

var clearGroup = function (grp) {
    while (grp.lastChild) {
        grp.removeChild(grp.lastChild);
    }
};

var showPoints = function (grp, pts, markerSize, color) {
    var i, point;
    for (i = 0; i < pts.length; i++) {
        point = pts[i];
        var marker = document.createElementNS(svgns, 'circle');
        marker.setAttributeNS(null, 'cx', point.x);
        marker.setAttributeNS(null, 'cy', point.y);
        marker.setAttributeNS(null, 'r', markerSize);
        marker.setAttributeNS(null, 'fill', color);
        grp.appendChild(marker);
    }
};

// Create and display test data
var showRawData = function () {
    var i, x, y;
    var r = 0;
    data = [];
    for (i = 1; i < 500; i++) {
        x = i;
        r += 15.0 * (Math.random() * Math.random() - 0.25);
        y = 150 + 30 * Math.sin(x / 200) * Math.sin((x - 37) / 61) + 2 * Math.sin((x - 7) / 11) + r;
        data.push({ x: x, y: y });
    }
    showPoints(grpRawData, data, 1, '#888');
};

// Gaussian kernel smoother
var createGaussianKernelData = function () {
    var i, x, y;
    var r = 0;
    var result = [];
    var period = getPeriod();
    for (i = Math.floor(period / 2); i < data.length; i += period) {
        x = data[i].x;
        y = gaussianKernel(i);
        result.push({ x: x, y: y });
    }
    return result;
};

var gaussianKernel = function (index) {
    var halfRange = Math.floor(getPeriod() / 2);
    var distance, factor;
    var totalValue = 0;
    var totalFactor = 0;
    for (i = index - halfRange; i <= index + halfRange; i++) {
        if (0 <= i && i < data.length) {
            distance = Math.abs(i - index);
            factor = Math.exp(-Math.pow(distance, 2));
            totalFactor += factor;
            totalValue += data[i].y * factor;
        }
    }
    return totalValue / totalFactor;
};

// Ramer-Douglas-Peucker algorithm
var ramerDouglasPeuckerRecursive = function (pts, first, last, eps) {
    if (first >= last - 1) {
        return [pts[first]];
    }
    var slope = (pts[last].y - pts[first].y) / (pts[last].x - pts[first].x);
    var x0 = pts[first].x;
    var y0 = pts[first].y;
    var iMax = first;
    var max = -1;
    var p, dy;
    // Calculate vertical distance
    for (var i = first + 1; i < last; i++) {
        p = pts[i];
        y = y0 + slope * (p.x - x0);
        dy = Math.abs(p.y - y);
        if (dy > max) {
            max = dy;
            iMax = i;
        }
    }
    if (max < eps) {
        return [pts[first]];
    }
    var p1 = ramerDouglasPeuckerRecursive(pts, first, iMax, eps);
    var p2 = ramerDouglasPeuckerRecursive(pts, iMax, last, eps);
    return p1.concat(p2);
}

var internalRamerDouglasPeucker = function (pts, eps) {
    var p = ramerDouglasPeuckerRecursive(data, 0, pts.length - 1, eps);
    return p.concat([pts[pts.length - 1]]);
}

var createRamerDouglasPeuckerData = function () {
    var finalPointCount = Math.round(data.length / getPeriod());
    var epsilon = getPeriod();
    var pts = internalRamerDouglasPeucker(data, epsilon);
    var iteration = 0;
    // Iterate until the correct number of points is obtained
    while (pts.length != finalPointCount && iteration++ < 20) {
        epsilon *= Math.sqrt(pts.length / finalPointCount);
        pts = internalRamerDouglasPeucker(data, epsilon);
    }
    return pts;
};

// Event handlers
btnReset.addEventListener('click', function () {
    calculatedCount = 0;
    clearGroup(grpRawData);
    clearGroup(grpCalculatedData);
    showRawData();
});

btnClearCalculated.addEventListener('click', function () {
    calculatedCount = 0;
    clearGroup(grpCalculatedData);
});

btnAddCalculated.addEventListener('click', function () {
    switch (cmbMethod.value) {
        case "Gaussian":
            showPoints(grpCalculatedData, createGaussianKernelData(), 2, colors[calculatedCount++]);
            break;
        case "RDP":
            showPoints(grpCalculatedData, createRamerDouglasPeuckerData(), 2, colors[calculatedCount++]);
            return;
    }
});

showRawData();

div { margin-bottom: 6px; }

<div>
    <button id="btnReset">Reset</button>
    <select id="cmbMethod">
        <option value="RDP">Ramer-Douglas-Peucker</option>
        <option value="Gaussian">Gaussian kernel</option>
    </select>
    <label for="txtPeriod">Interval: </label>
    <input id="txtPeriod" type="text" style="width: 36px;" value="7" />
</div>
<div>
    <button id="btnAddCalculated">Add calculated points</button>
    <button id="btnClearCalculated">Clear calculated points</button>
</div>
<svg id="svg1" width="765" height="450" viewBox="0 0 510 300">
    <g id="graph1" transform="translate(0,300) scale(1,-1)">
        <rect width="500" height="300" stroke="black" fill="#eee"></rect>
        <g id="grpRawData"></g>
        <g id="grpCalculatedData"></g>
    </g>
</svg>
Generate random coordinates (excluding some specific ones)
I have a multidimensional array, which I'm using as a very simple coordinate system. To generate random coordinates, I came up with this very simple function:

var coords = [
    [1,0,0,1,0,0,0,0,1,0,0,0,1,1,0,1,1,1,1,1,1,1,0,1],
    [0,0,0,1,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,1],
    [1,0,1,1,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,1],
    [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,1,1],
    [1,1,1,0,1,1,0,0,1,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1],
    [1,1,1,0,1,1,0,0,1,1,0,1,1,1,1,0,0,0,0,1,1,0,1,1],
    [0,0,0,0,1,1,0,0,1,1,0,1,1,1,1,0,0,1,0,1,1,0,1,1],
    [1,0,1,0,1,1,1,1,0,0,0,1,1,1,0,0,0,1,0,1,1,0,1,1]
];

function getRandomInt( min, max ) {
    return Math.floor( Math.random() * (max - min + 1) ) + min;
}

function randomCoords() {
    var x, y;
    do {
        x = getRandomInt( 0, coords[ 0 ].length - 1 );
        y = getRandomInt( 0, coords.length - 1 );
    } while ( coords[ y ][ x ] !== 1 );
    return [ x, y ];
}

As you might see, I only want to get random coordinates that are 1 in my array. Although this is working, I was wondering if there's a better / more effective way to do it? Sometimes (especially if there are lots of 0s in my coordinate system) it takes a bit to return a value. In that time (as far as I know) JavaScript can't do anything else... so everything will just pause...
If you are looking to get a random coordinate only once or twice, then your solution is the best. If you use it often, you can put the coordinates of the 1's in an array. That way you only have to use random() once, on the array:

coordPairs1[Math.floor(Math.random() * coordPairs1.length)]

var coords = [
    [1,0,0,1,0,0,0,0,1,0,0,0,1,1,0,1,1,1,1,1,1,1,0,1],
    [0,0,0,1,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,1],
    [1,0,1,1,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,1],
    [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,1,1],
    [1,1,1,0,1,1,0,0,1,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1],
    [1,1,1,0,1,1,0,0,1,1,0,1,1,1,1,0,0,0,0,1,1,0,1,1],
    [0,0,0,0,1,1,0,0,1,1,0,1,1,1,1,0,0,1,0,1,1,0,1,1],
    [1,0,1,0,1,1,1,1,0,0,0,1,1,1,0,0,0,1,0,1,1,0,1,1]
];

// make coord-pairs:
var coordPairs1 = [];
for (var x = 0; x < coords[0].length; ++x) {
    for (var y = 0; y < coords.length; ++y) {
        if (coords[y][x] == 1) coordPairs1.push([x, y]);
    }
}

function randomCoords() {
    return coordPairs1[Math.floor(Math.random() * coordPairs1.length)];
}

// Example:
document.body.innerHTML = randomCoords();
is there a JavaScript implementation of the Inverse Error Function, akin to MATLAB erfinv()?
is there a JavaScript implementation of the Inverse Error Function? This would implement the Gauss inverse error function. Approximations are ok.
Why yes. There is. The following code uses built-in JavaScript functions and implements Abramowitz and Stegun's algorithm as described here:

function erfinv(x) {
    var z;
    var a = 0.147;
    var the_sign_of_x;
    if (0 == x) {
        the_sign_of_x = 0;
    } else if (x > 0) {
        the_sign_of_x = 1;
    } else {
        the_sign_of_x = -1;
    }

    if (0 != x) {
        var ln_1minus_x_sqrd = Math.log(1 - x * x);
        var ln_1minusxx_by_a = ln_1minus_x_sqrd / a;
        var ln_1minusxx_by_2 = ln_1minus_x_sqrd / 2;
        var ln_etc_by2_plus2 = ln_1minusxx_by_2 + (2 / (Math.PI * a));
        var first_sqrt = Math.sqrt((ln_etc_by2_plus2 * ln_etc_by2_plus2) - ln_1minusxx_by_a);
        var second_sqrt = Math.sqrt(first_sqrt - ln_etc_by2_plus2);
        z = second_sqrt * the_sign_of_x;
    } else { // x is zero
        z = 0;
    }
    return z;
}
The function provided earlier in this post did not work for me... NaN result on a 33-meter circle with confidence 65% represented as 65.0 ... I wrote the following based on an equation listed here https://en.wikipedia.org/wiki/Error_function#Inverse_functions and it worked fine:

var _a = ((8 * (Math.PI - 3)) / ((3 * Math.PI) * (4 - Math.PI)));

function erfINV( inputX ) {
    var _x = parseFloat(inputX);
    var signX = ((_x < 0) ? -1.0 : 1.0);
    var oneMinusXsquared = 1.0 - (_x * _x);
    var LNof1minusXsqrd = Math.log( oneMinusXsquared );
    var PI_times_a = Math.PI * _a;
    var firstTerm = Math.pow(((2.0 / PI_times_a) + (LNof1minusXsqrd / 2.0)), 2);
    var secondTerm = (LNof1minusXsqrd / _a);
    var thirdTerm = ((2 / PI_times_a) + (LNof1minusXsqrd / 2.0));
    var primaryComp = Math.sqrt( Math.sqrt( firstTerm - secondTerm ) - thirdTerm );
    var scaled_R = signX * primaryComp;
    return scaled_R;
}
Here's an alternative implementation of Abramowitz and Stegun's algorithm (equivalent to ptmalcolm's answer, but more succinct and twice as fast):

function erfinv(x) {
    // maximum relative error = .00013
    const a = 0.147;
    //if (0 == x) { return 0 }
    const b = 2 / (Math.PI * a) + Math.log(1 - x**2) / 2;
    const sqrt1 = Math.sqrt( b**2 - Math.log(1 - x**2) / a );
    const sqrt2 = Math.sqrt( sqrt1 - b );
    return sqrt2 * Math.sign(x);
}

You can test the speed with:

console.time("erfinv"); for (let i=0; i<1000000000; i++) {erfinv(i/1000000000)}; console.timeEnd("erfinv")

The if statement optimization is commented out as it doesn't seem to make a difference - presumably the interpreter recognizes that this is all one equation. If you need a more accurate approximation, check out Wikipedia.