Does JavaScript cache/optimize code if repeated multiple times? - javascript

I'm currently writing a small framework to test the speed of JavaScript functions. When I repeatedly call the same methods with the same parameter, it gives me strange results:
Function          Execution Time
isEvenBitwise     38.00000000046566
isEvenModulo      38.00000000046566
isEvenPointless   38.00000000046566
Here are my functions:
var myFunctions =
{
    isEvenBitwise: function(number)
    {
        return !(number & 1);
    },

    isEvenModulo: function(number)
    {
        return (number % 2 == 0);
    },

    isEvenPointless: function(number)
    {
        return 1;
    }
}
The code that runs the functions:
PerformanceTest.prototype.measureTime = function()
{
    for (var indexTests = 0; indexTests < this.testCount; indexTests++)
    {
        var results = [];
        for (var currentFunction in this.functions) {
            var contextFunction = this.functions[currentFunction];
            var startTime = performance.now();
            for (var i = 0; i < this.iterationsPerTest; i++)
            {
                var heh = contextFunction.apply(this, arguments);
            }
            var executionTime = performance.now() - startTime;

            var result = {};
            result.testName = currentFunction;
            result.executionTime = executionTime;
            results.push(result);
        }
        this.testResults.push(results);
    }
}
Does the JavaScript interpreter cache/optimize my code? If so, how does it work? Or is there anything else happening I'm not aware of?
Edit: This seems to occur only in Chrome; Firefox works just fine with these results:
Function          Execution Time
isEvenBitwise     9.652258097220447
isEvenModulo      37.546061799704376
isEvenPointless   8.512472488871936

After looking at your code I am going to make a guess that Chrome is being smart about what you are doing. It is seeing this:
var startTime = performance.now();
for (var i = 0; i < this.iterationsPerTest; i++)
{
    var heh = contextFunction.apply(this, arguments)
}
var executionTime = performance.now() - startTime;
It correctly assesses that contextFunction has no side effects, recognises that the heh variable is assigned but never read, and then optimises the entire loop away because it doesn't do anything.
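A common way to defeat this kind of dead-code elimination is to make the loop's result observable, for example by accumulating the return values and using the total after timing. A minimal sketch of that idea (the sink accumulator and the final console.log are illustrative additions, not part of the original framework):

var startTime = performance.now();
var sink = 0; // accumulate results so the engine cannot prove the loop is dead
for (var i = 0; i < this.iterationsPerTest; i++)
{
    // vary the argument and consume the return value
    sink += contextFunction.call(this, i) ? 1 : 0;
}
var executionTime = performance.now() - startTime;
console.log("sink = " + sink); // using the value keeps the calls observable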

Related

Are there any performance advantages to using "const" instead of "let" or "var" in JavaScript? [duplicate]

Regardless of functional differences, does using the new keywords 'let' and 'const' have any generalized or specific impact on performance relative to 'var'?
After running the program:
function timeit(f, N, S) {
    var start, timeTaken;
    var stats = {min: 1e50, max: 0, N: 0, sum: 0, sqsum: 0};
    var i;
    for (i = 0; i < S; ++i) {
        start = Date.now();
        f(N);
        timeTaken = Date.now() - start;

        stats.min = Math.min(timeTaken, stats.min);
        stats.max = Math.max(timeTaken, stats.max);
        stats.sum += timeTaken;
        stats.sqsum += timeTaken * timeTaken;
        stats.N++;
    }

    var mean = stats.sum / stats.N;
    var sqmean = stats.sqsum / stats.N;

    return {min: stats.min, max: stats.max, mean: mean, spread: Math.sqrt(sqmean - mean * mean)};
}
var variable1 = 10;
var variable2 = 10;
var variable3 = 10;
var variable4 = 10;
var variable5 = 10;
var variable6 = 10;
var variable7 = 10;
var variable8 = 10;
var variable9 = 10;
var variable10 = 10;
function varAccess(N) {
    var i, sum;
    for (i = 0; i < N; ++i) {
        sum += variable1;
        sum += variable2;
        sum += variable3;
        sum += variable4;
        sum += variable5;
        sum += variable6;
        sum += variable7;
        sum += variable8;
        sum += variable9;
        sum += variable10;
    }
    return sum;
}
const constant1 = 10;
const constant2 = 10;
const constant3 = 10;
const constant4 = 10;
const constant5 = 10;
const constant6 = 10;
const constant7 = 10;
const constant8 = 10;
const constant9 = 10;
const constant10 = 10;
function constAccess(N) {
    var i, sum;
    for (i = 0; i < N; ++i) {
        sum += constant1;
        sum += constant2;
        sum += constant3;
        sum += constant4;
        sum += constant5;
        sum += constant6;
        sum += constant7;
        sum += constant8;
        sum += constant9;
        sum += constant10;
    }
    return sum;
}
function control(N) {
    var i, sum;
    for (i = 0; i < N; ++i) {
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
        sum += 10;
    }
    return sum;
}
console.log("ctl = " + JSON.stringify(timeit(control, 10000000, 50)));
console.log("con = " + JSON.stringify(timeit(constAccess, 10000000, 50)));
console.log("var = " + JSON.stringify(timeit(varAccess, 10000000, 50)));
My results were the following:
ctl = {"min":101,"max":117,"mean":108.34,"spread":4.145407097016924}
con = {"min":107,"max":572,"mean":435.7,"spread":169.4998820058587}
var = {"min":103,"max":608,"mean":439.82,"spread":176.44417700791374}
However, the discussion noted here seems to indicate a real potential for performance differences under certain scenarios: https://esdiscuss.org/topic/performance-concern-with-let-const
TL;DR
In theory, an unoptimized version of this loop:
for (let i = 0; i < 500; ++i) {
    doSomethingWith(i);
}
might be slower than an unoptimized version of the same loop with var:
for (var i = 0; i < 500; ++i) {
    doSomethingWith(i);
}
because a different i variable is created for each loop iteration with let, whereas there's only one i with var.
Arguing against that is the fact that var is hoisted, so it's declared outside the loop, whereas the let is only declared within the loop, which may offer an optimization advantage.
In practice, here in 2018, modern JavaScript engines do enough introspection of the loop to know when it can optimize that difference away. (Even before then, odds are your loop was doing enough work that the additional let-related overhead was washed out anyway. But now you don't even have to worry about it.)
Beware synthetic benchmarks as they are extremely easy to get wrong, and trigger JavaScript engine optimizers in ways that real code doesn't (both good and bad ways). However, if you want a synthetic benchmark, here's one:
const now = typeof performance === "object" && performance.now
    ? performance.now.bind(performance)
    : Date.now.bind(Date);

const btn = document.getElementById("btn");
btn.addEventListener("click", function() {
    btn.disabled = true;
    runTest();
});

const maxTests = 100;
const loopLimit = 50000000;
const expectedX = 1249999975000000;

function runTest(index = 1, results = {usingVar: 0, usingLet: 0}) {
    console.log(`Running Test #${index} of ${maxTests}`);
    setTimeout(() => {
        const varTime = usingVar();
        const letTime = usingLet();
        results.usingVar += varTime;
        results.usingLet += letTime;
        console.log(`Test ${index}: var = ${varTime}ms, let = ${letTime}ms`);
        ++index;
        if (index <= maxTests) {
            setTimeout(() => runTest(index, results), 0);
        } else {
            console.log(`Average time with var: ${(results.usingVar / maxTests).toFixed(2)}ms`);
            console.log(`Average time with let: ${(results.usingLet / maxTests).toFixed(2)}ms`);
            btn.disabled = false;
        }
    }, 0);
}

function usingVar() {
    const start = now();
    let x = 0;
    for (var i = 0; i < loopLimit; i++) {
        x += i;
    }
    if (x !== expectedX) {
        throw new Error("Error in test");
    }
    return now() - start;
}

function usingLet() {
    const start = now();
    let x = 0;
    for (let i = 0; i < loopLimit; i++) {
        x += i;
    }
    if (x !== expectedX) {
        throw new Error("Error in test");
    }
    return now() - start;
}
<input id="btn" type="button" value="Start">
It says that there's no significant difference in that synthetic test on either V8/Chrome or SpiderMonkey/Firefox. (Repeated tests in both browsers have one winning, or the other winning, and in both cases within a margin of error.) But again, it's a synthetic benchmark, not your code. Worry about the performance of your code when and if your code has a performance problem.
As a style matter, I prefer let for the scoping benefit and the closure-in-loops benefit if I use the loop variable in a closure.
Details
The important difference between var and let in a for loop is that a different i is created for each iteration; it addresses the classic "closures in loop" problem:
function usingVar() {
    for (var i = 0; i < 3; ++i) {
        setTimeout(function() {
            console.log("var's i: " + i);
        }, 0);
    }
}

function usingLet() {
    for (let i = 0; i < 3; ++i) {
        setTimeout(function() {
            console.log("let's i: " + i);
        }, 0);
    }
}

usingVar();
setTimeout(usingLet, 20);
Creating the new EnvironmentRecord for each loop body (spec link) is work, and work takes time, which is why in theory the let version is slower than the var version.
But the difference only matters if you create a function (closure) within the loop that uses i, as I did in that runnable snippet example above. Otherwise, the distinction can't be observed and can be optimized away.
Here in 2018, it looks like V8 (and SpiderMonkey in Firefox) is doing sufficient introspection that there's no performance cost in a loop that doesn't make use of let's variable-per-iteration semantics. See this test.
In some cases, const may well provide an opportunity for optimization that var wouldn't, especially for global variables.
The problem with a global variable is that it's, well, global; any code anywhere could access it. So if you declare a variable with var that you never intend to change (and never do change in your code), the engine can't assume it's never going to change as the result of code loaded later or similar.
With const, though, you're explicitly telling the engine that the value cannot change¹. So it's free to do any optimization it wants, including emitting a literal instead of a variable reference to code using it, knowing that the values cannot be changed.
¹ Remember that with objects, the value is a reference to the object, not the object itself. So with const o = {}, you could change the state of the object (o.answer = 42), but you can't make o point to a new object (because that would require changing the object reference it contains).
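A short illustration of that footnote (the variable name is just for demonstration):

const o = {};          // the binding o is constant; the object itself is not frozen
o.answer = 42;         // fine: mutating the object's state
console.log(o.answer); // 42

// o = { answer: 43 }; // TypeError: Assignment to constant variable.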
When using let or const in other var-like situations, they're not likely to have different performance. This function should have exactly the same performance whether you use var or let, for instance:
function foo() {
    var i = 0;
    while (Math.random() < 0.5) {
        ++i;
    }
    return i;
}
It's all, of course, unlikely to matter and something to worry about only if and when there's a real problem to solve.
"LET" IS BETTER IN LOOP DECLARATIONS
With a simple test (run 5 times) in the browser, like this:
// WITH VAR
console.time("var-time")
for(var i = 0; i < 500000; i++){}
console.timeEnd("var-time")
The mean time to execute is more than 2.5ms
// WITH LET
console.time("let-time")
for(let i = 0; i < 500000; i++){}
console.timeEnd("let-time")
The mean time to execute is more than 1.5ms
I found that loop time with let is better.
T.J. Crowder's answer is excellent.
Here is an addition on: "When would I get the most bang for my buck on editing existing var declarations to const?"
I've found that the biggest performance boost had to do with "exported" functions.
So if files A, B, R, and Z are calling a "utility" function in file U that is commonly used throughout your app, then switching that utility function over to const (and the parent-file reference to a const) can eke out some improved performance. For me it wasn't measurably faster, but the overall memory consumption was reduced by about 1-3% for my grossly monolithic Frankenstein-ed app. Which, if you're spending bags of cash on the cloud or on your bare-metal server, could be a good reason to spend 30 minutes combing through and updating some of those var declarations to const.
I realize that if you read into how const, var, and let work under the covers you probably already concluded the above... but in case you "glanced" over it :D
From what I remember of the benchmarking on node v8.12.0 when I was making the update, my app went from idle consumption of ~240MB RAM to ~233MB RAM.
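As a rough sketch of the kind of change being described (the file name, helper, and CommonJS module style are assumptions for illustration, since the answer mentions Node):

// util.js ("file U"): a commonly used helper, declared with const instead of var
// so the binding can never be reassigned.
const formatId = function (id) {
    return "id-" + id;
};
module.exports = { formatId };

// a.js ("file A"): the parent-file reference is also a const
const { formatId } = require("./util");
console.log(formatId(7)); // "id-7"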
T.J. Crowder's answer is very good, but:
'let' is made to make code more readable, not more powerful.
In theory, let will be slower than var.
In practice, the compiler cannot completely resolve an incomplete program through static analysis, so sometimes it will miss the optimization.
In any case, using 'let' will require more CPU for introspection; the benchmark must be started when Google V8 starts to parse.
If introspection fails, 'let' will push harder on the V8 garbage collector: it will require more iterations to free/reuse memory, and it will also consume more RAM. The benchmark must take these points into account.
Google Closure will transform let into var...
The effect of the performance gap between var and let can be seen in a real-life, complete program and not in a single basic loop.
Anyway, using 'let' where you don't have to makes your code less readable.
I just did some more tests. Initially I concluded that there is a substantial difference in favor of var: my results showed a ratio in execution time between const / let / var of anywhere from 4 / 4 / 1 to 3 / 3 / 1.
After an edit on 29/01/2022 (following jmrk's remark to remove the global variables in the let and const tests), the results now seem similar: 1 / 1 / 1.
I give the code used below. Let me just mention that I started from the code of AMN and did lots of tweaking and editing.
I did the tests both in the w3schools Tryit editor and in Google Scripts.
My notes:
In Google Scripts the 1st test ALWAYS seems to take longer, no matter which one it is, especially for reps < 5,000,000 and before the tests were separated into individual functions.
For reps < 5,000,000, JS engine optimizations are all that matters; results go up and down without allowing safe conclusions.
Google Scripts consistently takes ~1.5x longer, which I think is expected.
There was a BIG difference when all tests were separated into individual functions: execution speed at least doubled and the 1st test's delay almost vanished!
Please don't judge the code; I did try, but I don't pretend to be any expert in JS.
I would be delighted to see your tests and opinions.
function mytests(){
    var start = 0;
    var tm1=" Const: ", tm2=" Let: ", tm3=" Var: ";

    start = Date.now();
    tstLet();
    tm2 += Date.now() - start;

    start = Date.now();
    tstVar();
    tm3 += Date.now() - start;

    start = Date.now();
    tstConst();
    tm1 += (Date.now() - start);

    var result = "TIMERS:" + tm1 + tm2 + tm3;
    console.log(result);
    return result;
}

// with VAR
function tstVar(){
    var lmtUp = 50000000;
    var i = 0;
    var item = 2;
    var sum = 0;
    for(i = 0; i < lmtUp; i++){ sum += item; }
    item = sum / 1000;
}

// with LET
function tstLet(){
    let lmtUp = 50000000;
    let j = 0;
    let item = 2;
    let sum = 0;
    for(j = 0; j < lmtUp; j++){ sum += item; }
    item = sum / 1000;
}

// with CONST
function tstConst(){
    const lmtUp = 50000000;
    var k = 0;
    const item = 2;
    var sum = 0;
    for(k = 0; k < lmtUp; k++){ sum += item; }
    k = sum / 1000;
}
Code with 'let' can be optimized better than 'var' because variables declared with var are not cleared when the scope expires, whereas variables declared with let are; so var uses more space, as it keeps different versions around when used in a loop.
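For what it's worth, the scoping difference being alluded to can be seen directly (a tiny illustration, not from the original answer):

function scopes() {
    for (var a = 0; a < 3; a++) {}
    for (let b = 0; b < 3; b++) {}

    console.log(a);    // 3 - var is function-scoped, so a is still alive here
    // console.log(b); // ReferenceError: b is not defined - let was block-scoped
}
scopes();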

Why is repeated sub-object navigation faster than using a short-hand variable?

Sorry that I didn't know how to properly phrase the question, but here's the issue:
var o = {
    my: {
        very: {
            deep: {
                sub: {
                    fn(x) {
                        return x + 1;
                    }
                }
            }
        }
    }
};

var n = 0;

// without short-hand function
var timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += o.my.very.deep.sub.fn(i);
}
var timeEnd = performance.now();
console.log(timeEnd - timeStart);

n = 0;

// with short-hand function
var fn = o.my.very.deep.sub.fn;
timeStart = performance.now();
for (var i = 0; i < 5000000; i++) {
    n += fn(i);
}
timeEnd = performance.now();
console.log(timeEnd - timeStart);
The question is: why is the code in the first loop 5-7% faster (when executed in global scope), even though it has to repeatedly walk down the sub-objects, while the second loop uses the short-hand fn instead?
PS: If you find a better question title, feel free to edit.
Running this (note the usage of performance.now() instead of Date), there seems to be a very small margin of difference between the two. On some runs the first implementation will be (very, very slightly) ahead, on some the latter. Without an isolated test environment this is about as good as you can get, but in essence this level of testing shows that there is very little, if any, difference between the two approaches.
The second one is faster because you are not doing a property lookup on each iteration.
In the first one, you are saying: go to o, then my, then very, then deep, then sub, then execute the function (o.my.very.deep.sub.fn(i)) for every iteration.
In the second one you are caching fn up top with var fn = o.my.very.deep.sub.fn; now it does not have to do the lookup for fn in each iteration, avoiding extra work.
Some engines like V8 do inline caching as an optimization, so they might not even do dynamic lookups for each iteration, but it all depends on the implementation. Check this to understand how V8 implements it: fast property access, and also how V8 optimizes property access. The takeaway is that:
V8 is generally pretty optimistic about your code and tries to speed it up as much as it can. But sometimes the assumptions that it makes are not valid (a hidden class wasn't the one expected). In this case, V8 will replace Inline Cache fast path code with full non-optimized code.
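To make the hidden-class point a little more concrete, here is a small illustrative sketch (my own example, not from the linked articles): the property load stays on the inline-cache fast path while every object passed in has the same shape, and falls back to a slower generic lookup when a differently shaped object shows up.

function getX(point) {
    return point.x; // this property load gets an inline cache
}

// same hidden class every time: the cache stays monomorphic and fast
for (var i = 0; i < 1000; i++) {
    getX({ x: i, y: i });
}

// a differently shaped object (properties added in a different order) no longer
// matches the cached hidden class, so the lookup takes a slower path
getX({ y: 1, x: 2 });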

indexOf() : is there a better way to implement this?

EDIT
Thank you guys, and I apologize for not being more specific in my question.
This code was written to check whether the characters in the second string are all present in the first string. If so, it returns true, otherwise false.
So my code works, I know that much, but I am positive there's got to be a better way to implement this.
Keep in mind this is a coding challenge from freeCodeCamp's JavaScript tree.
Here's my code:
function mutation(arr) {
    var stringOne = arr[0].toLowerCase();
    var stringTwo = arr[1].toLowerCase().split("");
    var i = 0;
    var truthyFalsy = true;
    while (i < arr[1].length && truthyFalsy) {
        truthyFalsy = stringOne.indexOf(stringTwo[i]) > -1;
        i++;
    }
    console.log(truthyFalsy);
}

mutation(["hello", "hey"]);
//mutation(["hello", "yep"]);
There's gotta be a better way to do this. I recently learned about the map function, but I'm not sure how to use it to implement this, and I also just recently learned of the Array.prototype.every() function, which I am going to read about tonight.
Suggestions? Thoughts?
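Since the question mentions Array.prototype.every(), here is a minimal sketch of how it could express the same check (same argument shape as mutation above; names are illustrative):

function mutation(arr) {
    var target = arr[0].toLowerCase();
    // true only if every character of the second string appears in the first
    return arr[1].toLowerCase().split("").every(function(ch) {
        return target.indexOf(ch) > -1;
    });
}

console.log(mutation(["hello", "hey"]));   // false ("y" is not in "hello")
console.log(mutation(["hello", "Hello"])); // true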
The question is very vague; however, what I understood from the code is that you need to check for a string match between two strings.
Since you know it's two strings, I'd just pass them as two parameters. Additionally, I'd change the while into a for statement and add a break/continue to avoid extra variable gets and sets.
Notice that in the worst case it's almost the same, but in the best case it's half the computation time.
mutation bestCase 14.84499999999997
mutation worstCase 7.694999999999993
bestCase: 5.595000000000027
worstCase: 7.199999999999989
// your function (to check performance difference)
function mutation(arr) {
    var stringOne = arr[0].toLowerCase();
    var stringTwo = arr[1].toLowerCase().split("");
    var i = 0;
    var truthyFalsy = true;
    while (i < arr[1].length && truthyFalsy) {
        truthyFalsy = stringOne.indexOf(stringTwo[i]) > -1;
        i++;
    }
    return truthyFalsy;
}

function hasMatch(base, check) {
    var strOne = base.toLowerCase();
    var strTwo = check.toLowerCase().split("");
    var truthyFalsy = false;
    // define both variables (i and l) before the loop condition in order to
    // avoid getting the length property of the string multiple times.
    for (var i = 0, l = strTwo.length; i < l; i++) {
        var hasChar = strOne.indexOf(strTwo[i]) > -1;
        if (hasChar) {
            // if it has the char, set true and break
            truthyFalsy = true;
            break;
        }
    }
    return truthyFalsy;
}

var baseCase = "hello";
var bestCaseStr = "hey";
var worstCaseStr = "yap";

// bestCase: find a match in the first iteration
var bestCase = hasMatch("hello", bestCaseStr);
console.log(bestCase);

// worstCase: loop over all of them
var worstCase = hasMatch("hello", worstCaseStr);
console.log(worstCase);

// on your function
console.log('mutation bestCase', checkPerf(mutation, [baseCase, bestCaseStr]));
console.log('mutation worstCase', checkPerf(mutation, [baseCase, worstCaseStr]));

// simple performance check
console.log('bestCase:', checkPerf(hasMatch, baseCase, bestCaseStr));
console.log('worstCase:', checkPerf(hasMatch, baseCase, worstCaseStr));

function checkPerf(fn) {
    var t1 = performance.now();
    for (var i = 0; i < 10000; i++) {
        fn(arguments[1], arguments[2]);
    }
    var t2 = performance.now();
    return t2 - t1;
}

Why is a custom array reverse implementation twice as fast versus .reverse() in V8

A simple implementation for reversing an array is twice as fast compared to the built-in function in JavaScript, when tested in Chrome. What's V8 doing? Here is the test:
var newArr = [];
var newArrDefault = [];

for(var i = 0; i < 10000000; i++){
    newArr[i] = i;
    newArrDefault[i] = i;
}

var startDefault = new Date();
newArrDefault.reverse();
console.log("Built in method took " + (new Date().getTime() - startDefault.getTime()));

var start = new Date();
for(var i = 0; i < newArr.length / 2; i++){
    var tmp = newArr[i];
    newArr[i] = newArr[newArr.length-i-1];
    newArr[newArr.length-i-1] = tmp;
}
console.log("Custom method took " + (new Date().getTime() - start.getTime()));
Results on Version 20.0.1132.47 Ubuntu 12.04 (144678):
Built in method took 149
Custom method took 71
For the fun of it, I implemented the specification like so:
var upper, upperExists, lowerExists, lowerValue, upperValue;
for(var lower = 0, len = newArr.length >>> 0, middle = Math.floor(len / 2); lower != middle; ++lower) {
    upper = len - lower - 1;

    lowerValue = newArr[lower];
    upperValue = newArr[upper];

    lowerExists = newArr.hasOwnProperty(lower);
    upperExists = newArr.hasOwnProperty(upper);

    if (lowerExists && upperExists) {
        newArr[lower] = upperValue;
        newArr[upper] = lowerValue;
    } else if (upperExists) {
        newArr[lower] = upperValue;
        delete newArr[upper];
    } else if (lowerExists) {
        newArr[upper] = lowerValue;
        delete newArr[lower];
    }
}
The jsperf can be found here.
It includes a whole bunch of code to deal with missing entries, which is why it's so much slower than both the native and your code (some optimizations may be possible, but it won't affect the performance enough). The performance difference between your code and the native implementation wasn't very conclusive though.
Under most circumstances arrays are a contiguous block of values with no gaps in between, so you should be safe with that kind of code; as long as you know the difference :)
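A quick illustration of the gaps being referred to (my own example): the built-in reverse preserves holes in a sparse array, while the simple swap loop turns them into real undefined entries.

var sparse = [1, , 3]; // index 1 is a hole (no own property)
var copy = [1, , 3];

sparse.reverse();
console.log(1 in sparse); // false - the built-in kept the hole

// the naive swap reads the hole as undefined and writes it back as a real value
for (var i = 0; i < copy.length / 2; i++) {
    var tmp = copy[i];
    copy[i] = copy[copy.length - i - 1];
    copy[copy.length - i - 1] = tmp;
}
console.log(1 in copy); // true - index 1 now holds undefined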

Javascript variable stays undefined

I have a problem with this script; something is going wrong.
Rnumber stays undefined. This script should return and write all uneven digits from the random number list. Can someone tell me what I'm doing wrong? Thanks in advance.
var Rnumber = new Array();

for (i = 0; i <= 100; i++)
{
    Rnumber[i] = Math.ceil(Math.random()*101);
    // document.write(Rnumber[i] + "<br/>");
}

function unevenAndDivisible(Rnumber)
{
    var remainder = new Array();
    for (i = 0; i <= 100; i++)
    {
        remainder = parseInt(Rnumber[i])%2;
    }
    return remainder;
}

document.write(unevenAndDivisible());
Changed to
var Rnumber = new Array();

for (i = 0; i <= 100; i++)
{
    Rnumber[i] = Math.ceil(Math.random()*101);
    // document.write(Rnumber[i] + "<br/>");
}

function unevenAndDivisible(Rnumber)
{
    var remainder = new Array();
    for (i = 0; i <= 100; i++)
    {
        remainder[i] = Rnumber[i]%2;
    }
    return remainder;
}

document.write(unevenAndDivisible(Rnumber));
but now I get the result:
0,1,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1....
Maybe I asked it wrong the first time: I simply want to write all the uneven numbers from the random list of Rnumbers.
Then I need to check which of those are divisible by 7 and return that.
EDIT
Almost all problems are clear now, thanks everyone for that.
There is still one question left:
In the code below it only takes the first uneven value from remainder, and I want it to pass all the uneven values on to the next if statement to check %7.
Maybe you'll see the problem better if you run it for yourself:
var Rnumber = new Array();

for (i = 0; i <= 100; i++)
{
    Rnumber[i] = Math.ceil(Math.random()*101);
}

function unevenAndDivisible()
{
    var remainder = [];
    var answer = [];
    for (i = 0; i <= 100; i++)
    {
        if (Rnumber[i]%2 !== 0)
        {
            remainder.push(Rnumber[i]);
            for (c = 0; c <= remainder.length; c++)
            {
                if (remainder[c]%7 == 0)
                {
                    answer.push(remainder[c]);
                }
            }
        }
    }
    return answer;
}

answer = unevenAndDivisible();
document.write(answer);
Problem solved, thanks everyone.
You don't need to pass Rnumber to the function, as it's already available in scope:
function unevenAndDivisible()
{
    var remainder = [];
    for (i = 0; i <= 100; i++)
    {
        if (Rnumber[i]%2 !== 0) {
            remainder.push(Rnumber[i]);
        }
    }
    return remainder;
}

remainder = unevenAndDivisible();
console.log(remainder);
JS Fiddle demo.
Edited in response to question from OP (in comments to question, above):
...can someone explain what this means: var remainder = [];
Sure, it's array-literal notation, which is equivalent to var remainder = new Array(); it's just a little more concise, and I prefer to save myself the typing. I get the impression from JS Lint, whenever I use var x = new Array(), that the literal form is generally preferred (since it complains otherwise), but I don't know why.
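One concrete reason the literal form is usually recommended (my addition, not the original answerer's): the Array constructor changes meaning depending on its arguments, while a literal always means exactly what it shows.

var a = [3];          // an array with one element: [3]
var b = new Array(3); // an array with *length* 3 and no elements

console.log(a.length, a[0]); // 1 3
console.log(b.length, b[0]); // 3 undefined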
Either pass Rnumber to the function unevenAndDivisible or omit it from the parameter list. Since it is declared as a parameter, it shadows the outer declaration of Rnumber.
Your problem is the line
function unevenAndDivisible(Rnumber)
You are declaring Rnumber as a parameter, but when you call unevenAndDivisible()
you are not passing anything in.
Consequently, inside the body of the function Rnumber is undefined (because you passed nothing in).
The following snippet is equivalent to what you wrote and might explain it better:
function unevenAndDivisible(xyz)
{
    var remainder = new Array();
    for (i = 0; i <= 100; i++)
    {
        remainder = parseInt(xyz[i])%2;
    }
    return remainder;
}
then called as
unevenAndDivisible(undefined)
To fix it, remove the parameter from the function definition, i.e. define it as
function unevenAndDivisible()
1 - You are not passing a value for the function argument Rnumber.
2 - In the loop, you're assigning the divided value of Rnumber directly to remainder instead of saving it in the array; try:
change:
remainder = parseInt(Rnumber[i])%2;
to
remainder[i] = parseInt(Rnumber[i])%2;
var array = [],
    i = 100;

while (i--) {
    var random = Math.random()*i|0;
    if(random % 2)
        array.unshift(random);
}

alert(array);
