I went through http://www.youtube.com/watch?v=mHtdZgou0qU (Speed Up Your JavaScript).
So I did this personal speed test:
var count = 50000000;
var testDummy;

// test 1
testDummy = 0;
var test1Start = new Date().getTime();
var i;
for (i = 0; i < count; i++) {
    testDummy++;
}
var test1End = new Date().getTime();
var test1Total = (test1End - test1Start);

// test 2
testDummy = 0;
var test2Start = new Date().getTime();
for (i = count; i--;) {
    testDummy++;
}
var test2End = new Date().getTime();
var test2Total = (test2End - test2Start);

debug(
    "test1\n" +
    "total: " + test1Total + "\n" +
    "test2\n" +
    "total: " + test2Total
);
I don't get significant results: sometimes the two are even, and sometimes they aren't.
My question is: if I write the for loop like this, "for(i=count;i--;)", is it really faster?
Am I doing something wrong in my tests?
Thanks for your help!
(I'd write this as a comment, but it'd be too long.)
First: Worrying about the efficiency of a for loop is almost always a waste of (your own) time. What's inside of the loop usually has much more impact on performance than the details of how the loop is specified.
Second: What browser(s) did you test with? Different browsers will show different performance profiles; even different versions of the same browser will differ.
Third: It's not out of the question that the JavaScript engine optimized your loops out of the picture. A JavaScript compiler could simply look at the loop and decide to set testDummy to 50000000 and be done with it.
Fourth: If you really want to split hairs on performance, I'd try for(i=count; --i != 0;) as well as for(i=count;i--;). The former may save a machine instruction or two, because executing the subtraction (in the predecrement step) may automatically set a hardware flag indicating that the result was 0. That flag is potentially wasted when you're using the postdecrement operator, because it wouldn't be examined until the start of the next iteration. (The chances that you'd be able to notice the difference are slim to none.)
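If you do want to try that yourself, here is a minimal sketch (not a rigorous benchmark) that times both decrementing forms and logs the results, so the engine can't throw the loops away as dead code. Note that --i != 0 runs the body one fewer time when started from the same count:

var count = 50000000;

function postDecrement() {
    var sum = 0;
    for (var i = count; i--;) { sum++; }        // body runs count times
    return sum;
}

function preDecrement() {
    var sum = 0;
    for (var i = count; --i != 0;) { sum++; }   // body runs count - 1 times
    return sum;
}

var t0 = Date.now();
var a = postDecrement();
var t1 = Date.now();
var b = preDecrement();
var t2 = Date.now();

// Logging the sums keeps the loops from being optimized away entirely.
console.log(a, b, "post-decrement:", (t1 - t0) + "ms", "pre-decrement:", (t2 - t1) + "ms");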
Well...
for( i=0 ; i < len ; i++ )
is practically the same as
for( i = len ; i-- ; )
Let's describe it:
case 1:
let i be 0
evaluate a boolean expression
let i be i + 1
case 2:
let i be len
let i be i - 1
cast i to boolean (type coercion) and interpret it.
The difference should be minute and depends entirely on how efficient type coercion is compared to a normal boolean expression.
Incidentally, test this:
var i = count;
while( i-- ) {}
There's nothing wrong with your tests.
The blocks that you are testing are very nearly identical, meaning the difference in execution speed is going to be trivial. In both examples, a variable (i) is set to a fixed value and looped until it reaches a fixed value (count). The only thing that differs is i++ versus i--, which in terms of speed are practically the same.
The thing you have to be careful of (i.e. avoid) is calculating the "loop until" value inside the loop definition, as in the sketch below.
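For example (a minimal sketch; arr is a made-up array, and arr.length itself is cheap in modern engines, but the same point applies to genuinely expensive expressions):

var arr = new Array(1000000).fill(1);

// Re-evaluated on every iteration:
for (var i = 0; i < arr.length; i++) {
    // ...
}

// Evaluated once, before the loop starts:
for (var i = 0, len = arr.length; i < len; i++) {
    // ...
}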
I have made some tests too, here are the results.
In many articles and books, authors propose that "optimized" loops are faster.
It seems that modern browsers have some optimizations for "normal" loops.
Firefox 13.0.1
Normal Loop: 0.887
Opt1: 1.025
Opt2: 1.098
Opt3: 1.399
Chrome 19.0.1
Normal Loop: 3.349
Opt1: 3.12
Opt2: 3.109
Opt3: 3.095
IE8
Over 12sec...
Repeatedly crashed during tests.
<script type="text/javascript">
function p(p) { console.log(p); }
// function p(p) { document.write(p); }
var testFn = function(num, niz, fn) {
    var start = new Date().getTime();
    fn(num, niz);
    var result = (new Date().getTime() - start) / 1000;
    return result;
}

function normalLoop(num, niz) {
    for (var i = 0; i < niz.length; i++) {
        niz[i] = 'a' + i;
    }
}

function opt1(num, niz) {
    var len = niz.length;
    for (var i = 0; i < len; i++) {
        niz[i] = 'a' + i;
    }
}

function opt2(num, niz) {
    for (var i = niz.length; i--;) {
        niz[i] = 'a' + i;
    }
}

function opt3(num, niz) {
    var i = niz.length;
    while (i--) {
        niz[i] = 'a' + i;
    }
}
var niz = [];
var num = 10000000;
for (var i = 0; i < num; i++) { niz.push(i); };
p('Normal Loop: ' + testFn(num, niz, normalLoop));
p('Opt1: ' + testFn(num, niz, opt1));
p('Opt2: ' + testFn(num, niz, opt2));
p('Opt3: ' + testFn(num, niz, opt3));
</script>
I wonder if there's any performance difference when calling the function foo() between foo(123) and window["foo"](123).
Well, 10 loops of 100 million operations each gave this (Chrome):
You can notice that from the 2nd line of output it applies some optimizations and starts running faster. The other differences can be neglected.
for (let j = 0; j < 10; j++) {
    let start1 = performance.now();
    let x1 = 0;
    for (let i = 0; i < 1e8; i++) {
        x1 += foo();
    }
    let end1 = (performance.now() - start1).toFixed(10);

    /***/

    let start2 = performance.now();
    let x2 = 0;
    for (let i = 0; i < 1e8; i++) {
        x2 += window["foo"]();
    }
    let end2 = (performance.now() - start2).toFixed(10);

    console.log("foo():", end1, " // window: ", end2);
}

/***/

function foo() {
    return 1;
}
I genuinely expected there to be no difference, but I am seeing the NO_WINDOW test take 2x as long as WINDOW_STRING or WINDOW_PROPERTY in this simple addition test:
add = (a, b) => a + b;

console.time('NO_WINDOW');
for (var i = 0; i < 1000; i++)
    add(i, i + 1);
console.timeEnd('NO_WINDOW');

console.time('WINDOW_STRING');
for (var i = 0; i < 1000; i++)
    window['add'](i, i + 1);
console.timeEnd('WINDOW_STRING');

console.time('WINDOW_PROPERTY');
for (var i = 0; i < 1000; i++)
    window.add(i, i + 1);
console.timeEnd('WINDOW_PROPERTY');
EDIT: Phil pointed out in the comments that this seems to be a weird issue with console.timeEnd, where the first call always takes longer than the subsequent ones.
Reproducing with performance.now() and proper benchmarking logic shows no meaningful difference in performance:
add = (a, b) => a + b;
avg = (arr) => arr.reduce(add) / arr.length;

let noWindow = [],
    windowProp = [],
    windowString = [],
    n = 1e6,
    start;

start = performance.now();
for (var i = 0; i < n; i++)
    window.add(i, i + 1);
windowProp.push(performance.now() - start);

start = performance.now();
for (var i = 0; i < n; i++)
    add(i, i + 1);
noWindow.push(performance.now() - start);

start = performance.now();
for (var i = 0; i < n; i++)
    window['add'](i, i + 1);
windowString.push(performance.now() - start);

let avgs = [
    avg(noWindow),
    avg(windowString),
    avg(windowProp)
];

console.log(avgs);
Assuming they're equivalent references, it would seem the only difference is in how the variable is read. At various points in history, browsers have shown performance differences when reading globals or object properties via square brackets.
If there is such a difference, it would be negligible, and almost never an issue.
The only thing I would wonder is whether some browsers lose optimizations for dynamically accessed keys and/or for globals in general. Since it's a global, accessed with syntax that looks the property up dynamically, there is a chance that the function would not get full optimization from the engine.
Again, probably not something that should concern you, unless it's a very heavy function, and quite expensive to run in general.
Depending on how you use it, yes or no. Essentially, there's no difference when it's used once in a program. Repeated millions of times there is a difference, but it depends on where, how, and how many times that code is executed. It's quite rare that you'd be looking for pure performance, however.
Depending on the context in which you call it, it is the same thing. In a browser, essentially you can access any of the keys in the window object without directly referencing it. So any of these are the same:
foo(123)
window["foo"](123)
window.foo(123)
Read more about the window object and global functions here.
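A quick way to convince yourself of that (assuming a browser and a classic, non-module script, where top-level function and var declarations become properties of window):

function foo(x) { return x + 1; }

console.log(foo === window.foo);     // true – the very same function object
console.log(foo === window["foo"]);  // true
console.log(window.foo(123));        // 124, same result as foo(123)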
Regardless of functional differences, does using the new keywords 'let' and 'const' have any generalized or specific impact on performance relative to 'var'?
After running the program:
function timeit(f, N, S) {
    var start, timeTaken;
    var stats = {min: 1e50, max: 0, N: 0, sum: 0, sqsum: 0};
    var i;
    for (i = 0; i < S; ++i) {
        start = Date.now();
        f(N);
        timeTaken = Date.now() - start;
        stats.min = Math.min(timeTaken, stats.min);
        stats.max = Math.max(timeTaken, stats.max);
        stats.sum += timeTaken;
        stats.sqsum += timeTaken * timeTaken;
        stats.N++;
    }
    var mean = stats.sum / stats.N;
    var sqmean = stats.sqsum / stats.N;
    return {min: stats.min, max: stats.max, mean: mean, spread: Math.sqrt(sqmean - mean * mean)};
}
var variable1 = 10;
var variable2 = 10;
var variable3 = 10;
var variable4 = 10;
var variable5 = 10;
var variable6 = 10;
var variable7 = 10;
var variable8 = 10;
var variable9 = 10;
var variable10 = 10;
function varAccess(N) {
    var i, sum = 0;
    for (i = 0; i < N; ++i) {
        sum += variable1;
        sum += variable2;
        sum += variable3;
        sum += variable4;
        sum += variable5;
        sum += variable6;
        sum += variable7;
        sum += variable8;
        sum += variable9;
        sum += variable10;
    }
    return sum;
}
const constant1 = 10;
const constant2 = 10;
const constant3 = 10;
const constant4 = 10;
const constant5 = 10;
const constant6 = 10;
const constant7 = 10;
const constant8 = 10;
const constant9 = 10;
const constant10 = 10;
function constAccess(N) {
    var i, sum = 0;
    for (i = 0; i < N; ++i) {
        sum += constant1;
        sum += constant2;
        sum += constant3;
        sum += constant4;
        sum += constant5;
        sum += constant6;
        sum += constant7;
        sum += constant8;
        sum += constant9;
        sum += constant10;
    }
    return sum;
}
function control(N) {
    var i, sum = 0;
    for (i = 0; i < N; ++i) {
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
    }
    return sum;
}
console.log("ctl = " + JSON.stringify(timeit(control, 10000000, 50)));
console.log("con = " + JSON.stringify(timeit(constAccess, 10000000, 50)));
console.log("var = " + JSON.stringify(timeit(varAccess, 10000000, 50)));
My results were the following:
ctl = {"min":101,"max":117,"mean":108.34,"spread":4.145407097016924}
con = {"min":107,"max":572,"mean":435.7,"spread":169.4998820058587}
var = {"min":103,"max":608,"mean":439.82,"spread":176.44417700791374}
However, the discussion noted here seems to indicate a real potential for performance differences under certain scenarios: https://esdiscuss.org/topic/performance-concern-with-let-const
TL;DR
In theory, an unoptimized version of this loop:
for (let i = 0; i < 500; ++i) {
doSomethingWith(i);
}
might be slower than an unoptimized version of the same loop with var:
for (var i = 0; i < 500; ++i) {
doSomethingWith(i);
}
because a different i variable is created for each loop iteration with let, whereas there's only one i with var.
Arguing against that is the fact that var is hoisted, so it's declared outside the loop, whereas let is only declared within the loop, which may offer an optimization advantage.
In practice, here in 2018, modern JavaScript engines do enough introspection of the loop to know when it can optimize that difference away. (Even before then, odds are your loop was doing enough work that the additional let-related overhead was washed out anyway. But now you don't even have to worry about it.)
Beware synthetic benchmarks as they are extremely easy to get wrong, and trigger JavaScript engine optimizers in ways that real code doesn't (both good and bad ways). However, if you want a synthetic benchmark, here's one:
const now = typeof performance === "object" && performance.now
    ? performance.now.bind(performance)
    : Date.now.bind(Date);

const btn = document.getElementById("btn");
btn.addEventListener("click", function() {
    btn.disabled = true;
    runTest();
});

const maxTests = 100;
const loopLimit = 50000000;
const expectedX = 1249999975000000;

function runTest(index = 1, results = {usingVar: 0, usingLet: 0}) {
    console.log(`Running Test #${index} of ${maxTests}`);
    setTimeout(() => {
        const varTime = usingVar();
        const letTime = usingLet();
        results.usingVar += varTime;
        results.usingLet += letTime;
        console.log(`Test ${index}: var = ${varTime}ms, let = ${letTime}ms`);
        ++index;
        if (index <= maxTests) {
            setTimeout(() => runTest(index, results), 0);
        } else {
            console.log(`Average time with var: ${(results.usingVar / maxTests).toFixed(2)}ms`);
            console.log(`Average time with let: ${(results.usingLet / maxTests).toFixed(2)}ms`);
            btn.disabled = false;
        }
    }, 0);
}

function usingVar() {
    const start = now();
    let x = 0;
    for (var i = 0; i < loopLimit; i++) {
        x += i;
    }
    if (x !== expectedX) {
        throw new Error("Error in test");
    }
    return now() - start;
}

function usingLet() {
    const start = now();
    let x = 0;
    for (let i = 0; i < loopLimit; i++) {
        x += i;
    }
    if (x !== expectedX) {
        throw new Error("Error in test");
    }
    return now() - start;
}
<input id="btn" type="button" value="Start">
It says that there's no significant difference in that synthetic test on either V8/Chrome or SpiderMonkey/Firefox. (Repeated tests in both browsers have one winning, or the other winning, and in both cases within a margin of error.) But again, it's a synthetic benchmark, not your code. Worry about the performance of your code when and if your code has a performance problem.
As a style matter, I prefer let for the scoping benefit and the closure-in-loops benefit if I use the loop variable in a closure.
Details
The important difference between var and let in a for loop is that a different i is created for each iteration; it addresses the classic "closures in loop" problem:
function usingVar() {
    for (var i = 0; i < 3; ++i) {
        setTimeout(function() {
            console.log("var's i: " + i);
        }, 0);
    }
}

function usingLet() {
    for (let i = 0; i < 3; ++i) {
        setTimeout(function() {
            console.log("let's i: " + i);
        }, 0);
    }
}
usingVar();
setTimeout(usingLet, 20);
Creating the new EnvironmentRecord for each loop body (spec link) is work, and work takes time, which is why in theory the let version is slower than the var version.
But the difference only matters if you create a function (closure) within the loop that uses i, as I did in that runnable snippet example above. Otherwise, the distinction can't be observed and can be optimized away.
Here in 2018, it looks like V8 (and SpiderMonkey in Firefox) is doing sufficient introspection that there's no performance cost in a loop that doesn't make use of let's variable-per-iteration semantics. See this test.
In some cases, const may well provide an opportunity for optimization that var wouldn't, especially for global variables.
The problem with a global variable is that it's, well, global; any code anywhere could access it. So if you declare a variable with var that you never intend to change (and never do change in your code), the engine can't assume it's never going to change as the result of code loaded later or similar.
With const, though, you're explicitly telling the engine that the value cannot change¹. So it's free to do any optimization it wants, including emitting a literal instead of a variable reference to code using it, knowing that the values cannot be changed.
¹ Remember that with objects, the value is a reference to the object, not the object itself. So with const o = {}, you could change the state of the object (o.answer = 42), but you can't make o point to a new object (because that would require changing the object reference it contains).
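For example:

const o = { answer: 41 };

o.answer = 42;      // fine – the object's own state can change
console.log(o);     // { answer: 42 }

// o = {};          // TypeError: Assignment to constant variable.
//                  // the binding cannot be pointed at a different object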
When using let or const in other var-like situations, they're not likely to have different performance. This function should have exactly the same performance whether you use var or let, for instance:
function foo() {
    var i = 0;
    while (Math.random() < 0.5) {
        ++i;
    }
    return i;
}
It's all, of course, unlikely to matter and something to worry about only if and when there's a real problem to solve.
"LET" IS BETTER IN LOOP DECLARATIONS
With a simple test (run 5 times) in the browser, like this:
// WITH VAR
console.time("var-time")
for(var i = 0; i < 500000; i++){}
console.timeEnd("var-time")
The mean time to execute is more than 2.5ms
// WITH LET
console.time("let-time")
for(let i = 0; i < 500000; i++){}
console.timeEnd("let-time")
The mean time to execute is more than 1.5ms
I found that loop time with let is better.
T.J. Crowder's answer is excellent.
Here is an addition: "When would I get the most bang for my buck from editing existing var declarations to const?"
I've found that the biggest performance boost had to do with "exported" functions.
So if files A, B, R, and Z are calling a "utility" function in file U that is commonly used throughout your app, then switching that utility function over to const (and the parent file's reference to a const) can eke out some improved performance. For me it wasn't measurably faster, but the overall memory consumption was reduced by about 1-3% for my grossly monolithic Frankenstein-ed app. If you're spending bags of cash on the cloud or your bare-metal server, that could be a good reason to spend 30 minutes combing through and updating some of those var declarations to const.
I realize that if you read into how const, var, and let work under the covers you probably already concluded the above... but in case you "glanced" over it :D.
From what I remember of the benchmarking on node v8.12.0 when I was making the update, my app went from idle consumption of ~240MB RAM to ~233MB RAM.
T.J. Crowder's answer is very good, but:
'let' is made to make code more readable, not more powerful.
In theory, let will be slower than var.
In practice, the compiler cannot completely analyze (via static analysis) an incomplete program, so sometimes it will miss the optimization.
In any case, using 'let' will require more CPU for introspection; the benchmark must be started when Google V8 starts to parse.
If introspection fails, 'let' will push hard on the V8 garbage collector; it will require more iterations to free/reuse memory, and it will also consume more RAM. The benchmark must take these points into account.
Google Closure will transform let into var...
The effect of the performance gap between var and let can be seen in a real-life, complete program and not in a single basic loop.
Anyway, using let where you don't have to makes your code less readable.
Just did some more tests. Initially I concluded that there was a substantial difference in favor of var: my results showed a Const / Let / Var ratio of roughly 4 / 4 / 1 to 3 / 3 / 1 in execution time.
After an edit on 29/01/2022 (following jmrk's remark to remove global variables from the let and const tests), the results now seem similar, about 1 / 1 / 1.
I give the code used below. Let me just mention that I started from AMN's code and did lots of tweaking and editing.
I did the tests both in the w3schools_tryit editor and in Google Scripts.
My notes:
In Google Scripts the 1st test ALWAYS seems to take longer, no matter which one it is, especially for reps < 5,000,000 and before the tests were separated into individual functions.
For reps < 5,000,000, JS engine optimizations are all that matters; results go up and down without safe conclusions.
Google Scripts consistently takes ~1.5x longer, which I think is expected.
There was a BIG difference once all tests were separated into individual functions: execution speed at least doubled and the 1st test's delay almost vanished!
Please don't judge the code, I did try but don't pretend to be any expert in JS.
I would be delighted to see your tests and opinions.
function mytests() {
    var start = 0;
    var tm1 = " Const: ", tm2 = " Let: ", tm3 = " Var: ";

    start = Date.now();
    tstLet();
    tm2 += Date.now() - start;

    start = Date.now();
    tstVar();
    tm3 += Date.now() - start;

    start = Date.now();
    tstConst();
    tm1 += (Date.now() - start);

    var result = "TIMERS:" + tm1 + tm2 + tm3;
    console.log(result);
    return result;
}

// with VAR
function tstVar() {
    var lmtUp = 50000000;
    var i = 0;
    var item = 2;
    var sum = 0;
    for (i = 0; i < lmtUp; i++) { sum += item; }
    item = sum / 1000;
}

// with LET
function tstLet() {
    let lmtUp = 50000000;
    let j = 0;
    let item = 2;
    let sum = 0;
    for (j = 0; j < lmtUp; j++) { sum += item; }
    item = sum / 1000;
}

// with CONST
function tstConst() {
    const lmtUp = 50000000;
    var k = 0;
    const item = 2;
    var sum = 0;
    for (k = 0; k < lmtUp; k++) { sum += item; }
    k = sum / 1000;
}
Code with 'let' will be better optimized than 'var', as variables declared with var do not get cleared when the scope expires, but variables declared with let do. So var uses more space, as it makes different versions when used in a loop.
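The scoping difference itself is easy to see (a tiny illustration; the performance side of the claim above is engine-dependent):

{
    var keptAround = 1;
    let blockOnly = 2;
}
console.log(typeof keptAround); // "number"    – the var binding survives the block
console.log(typeof blockOnly);  // "undefined" – the let binding was scoped to the block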
I am trying to swap the data within these arrays.
My data will look something like this. In production this array can and will be several times bigger.
var data = [
[13.418946862220764, 52.50055852688439],
[13.419011235237122, 52.50113000479732],
[13.419756889343262, 52.50171780290061],
[13.419885635375975, 52.50237416816131],
[13.420631289482117, 52.50294888790448]
]
Currently my switching code looks like this:
var temp;
for (var i = 0; i < data.length; i++) {
    temp = data[i][0];
    data[i][0] = data[i][1];
    data[i][1] = temp;
}
What I am trying to figure out is if this the most efficient way to do this and/or if any improvements are possible.
Please understand that even the slightest improvement will matter.
I would use a more functional approach:
var switched = data.map(function (arr) {
    return [arr[1], arr[0]];
});
If you use ES2015, you can even do that in one line:
const switched = data.map((arr) => [arr[1], arr[0]]);
If you want to stick with a loop:
for (var i = 0; i < data.length; i++) {
    data[i] = [data[i][1], data[i][0]];
}
Your code looks perfectly fine, and you don't need any further "optimization".
As always, a benchmark is the best way to find out which is faster:
var arr = (function() {
    var res = [];
    for (var i = 0; i < 100000; ++i) {
        res[i] = [Math.random(), Math.random()];
    }
    return res;
}());

var swap_in_place = function() {
    for (var i = 0; i < arr.length; ++i) {
        var tmp = arr[i][0];
        arr[i][0] = arr[i][1];
        arr[i][1] = tmp;
    }
};

var swap_map = function() {
    arr = arr.map(function(elem) { return [elem[1], elem[0]]; });
};

var runBench = function(name, f) {
    var start = new Date().getTime();
    for (var i = 0; i < 50; ++i) {
        f();
    }
    var stop = new Date().getTime();
    console.log(name + " took: " + (stop - start));
};

runBench("in_place", swap_in_place);
runBench("map", swap_map);
In my latest Firefox (Windows 10 x64), I get (quite consistently) 16 for the in-place version vs 350 for the map version, meaning you get a 20x slowdown by using map instead of your own version.
You might think this is because the snippet is embedded in an iframe and so on, so I ran it in Node (4.5.0), which is built on top of V8, and I get the same results:
I think the JIT can't be smart enough to properly inline the function in the map version, or to deduce that it operates on the same memory without side effects. Therefore, it has to allocate a full new array to store the intermediate results, then loop over it with a function call (meaning register save/restore stalls at each iteration), and then either:
copy back the entire data to arr
move the reference (probably what happens), but the garbage collector has to collect the entire temporary array.
The map function might also trigger reallocation of the temporary, which is extremely expensive.
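If you like the functional style but want to avoid that allocation, one option (just a sketch, with a stand-in for the question's data array) is to mutate each pair in place with forEach, so no second array has to be built and collected:

var data = [[13.41, 52.50], [13.42, 52.51]]; // stand-in for the question's array

data.forEach(function (pair) {
    var tmp = pair[0];
    pair[0] = pair[1];
    pair[1] = tmp;
});

console.log(data); // [[52.50, 13.41], [52.51, 13.42]]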
Sorry that I didn't know how to properly phrase the question, but here's the issue:
var o = {
    my: {
        very: {
            deep: {
                sub: {
                    fn(x) {
                        return x + 1;
                    }
                }
            }
        }
    }
};

var n = 0;

// without short-hand function
var timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += o.my.very.deep.sub.fn(i);
}
var timeEnd = performance.now();
console.log(timeEnd - timeStart);

n = 0;

// with short-hand function
var fn = o.my.very.deep.sub.fn;
timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += fn(i);
}
timeEnd = performance.now();
console.log(timeEnd - timeStart);
The question is: why is the code in the first loop 5-7% faster (when executed in global scope), even though it has to repeatedly walk down the sub-objects, while the second loop uses the cached fn instead?
PS: If you find a better question title, feel free to edit.
var o = {
    my: {
        very: {
            deep: {
                sub: {
                    fn(x) {
                        return x + 1;
                    }
                }
            }
        }
    }
};

var n = 0;

// without short-hand function
var timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += o.my.very.deep.sub.fn(i);
}
var timeEnd = performance.now();
console.log(timeEnd - timeStart);

n = 0;

// with short-hand function
var fn = o.my.very.deep.sub.fn;
timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += fn(i);
}
timeEnd = performance.now();
console.log(timeEnd - timeStart);
Running this (note the usage of performance.now() instead of Date), there seems to be a very small margin of difference between the two. On some runs the first implementation is (very, very slightly) ahead, and on others the latter. Without an isolated test environment this is about as good as you can get, but in essence, this level of testing shows that there is very little, if any, difference between the two approaches.
The second one is faster because you are not doing a property lookup on each iteration.
In the first one, you are saying: go to o, then my, then very, then deep, then sub, then execute the function, i.e. o.my.very.deep.sub.fn(i), on every iteration.
In the second one, you are caching fn up top with var fn = o.my.very.deep.sub.fn; now it does not have to do the lookup on each iteration, avoiding the extra work.
Some engines like V8 do inline caching as an optimization; they might not even do dynamic lookups for each iteration, but it all depends on the implementation. Check this to understand how V8 implements it: fast property access, and also how V8 optimizes property access. But the takeaway is that:
V8 is generally pretty optimistic about your code and tries to speed it up as much as it can. But sometimes the assumptions that it makes are not valid (a hidden class wasn't the one expected). In this case, V8 will replace the Inline Cache fast path code with full non-optimized code.
A simple implementation for reversing an array is twice as fast as the built-in function in JavaScript, when tested in Chrome. What's V8 doing? Here is the test:
var newArr = [];
var newArrDefault = [];
for (var i = 0; i < 10000000; i++) {
    newArr[i] = i;
    newArrDefault[i] = i;
}

var startDefault = new Date();
newArrDefault.reverse();
console.log("Built in method took " + (new Date().getTime() - startDefault.getTime()));

var start = new Date();
for (var i = 0; i < newArr.length / 2; i++) {
    var tmp = newArr[i];
    newArr[i] = newArr[newArr.length - i - 1];
    newArr[newArr.length - i - 1] = tmp;
}
console.log("Custom method took " + (new Date().getTime() - start.getTime()));
Results on Version 20.0.1132.47 Ubuntu 12.04 (144678):
Built in method took 149
Custom method took 71
For the fun of it, I implemented the specification like so:
var upper, upperExists, lowerExists, lowerValue, upperValue;
for (var lower = 0, len = newArr.length >>> 0, middle = Math.floor(len / 2); lower != middle; ++lower) {
    upper = len - lower - 1;
    lowerValue = newArr[lower];
    upperValue = newArr[upper];
    lowerExists = newArr.hasOwnProperty(lower);
    upperExists = newArr.hasOwnProperty(upper);
    if (lowerExists && upperExists) {
        newArr[lower] = upperValue;
        newArr[upper] = lowerValue;
    } else if (upperExists) {
        newArr[lower] = upperValue;
        delete newArr[upper];
    } else if (lowerExists) {
        newArr[upper] = lowerValue;
        delete newArr[lower];
    }
}
The jsperf can be found here.
It includes a whole bunch of code to deal with missing entries, which is why it's so much slower than both the native and your code (some optimizations may be possible, but it won't affect the performance enough). The performance difference between your code and the native implementation wasn't very conclusive though.
Under most circumstances arrays are a contiguous block of values with no gaps in between, so you should be safe with that kind of code; as long as you know the difference :)
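For completeness, here is a contrived case where that spec behavior (the handling of holes) actually becomes visible:

var a = [1, , , 4];   // holes at indexes 1 and 2
var b = [1, , , 4];

a.reverse();          // the spec-compliant reverse keeps the holes as holes

for (var i = 0; i < b.length / 2; i++) {   // naive swap
    var tmp = b[i];
    b[i] = b[b.length - i - 1];
    b[b.length - i - 1] = tmp;
}

console.log(1 in a);  // false – still a hole after the built-in reverse
console.log(1 in b);  // true  – the naive swap filled the hole with undefined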