How do Javascript Math.max and Math.min actually work?

I am really curious how these functions actually work. There are plenty of questions about how to use them, and I already know how, but I couldn't find anything about how to implement this functionality yourself, for example over an array, if the functions didn't exist. How would you code such a function without any helpers?

Here is the Math.max code in the Chrome V8 engine:
function MathMax(arg1, arg2) {  // length == 2
  var length = %_ArgumentsLength();
  if (length == 2) {
    arg1 = TO_NUMBER(arg1);
    arg2 = TO_NUMBER(arg2);
    if (arg2 > arg1) return arg2;
    if (arg1 > arg2) return arg1;
    if (arg1 == arg2) {
      // Make sure -0 is considered less than +0.
      return (arg1 === 0 && %_IsMinusZero(arg1)) ? arg2 : arg1;
    }
    // All comparisons failed, one of the arguments must be NaN.
    return NaN;
  }
  var r = -INFINITY;
  for (var i = 0; i < length; i++) {
    var n = %_Arguments(i);
    n = TO_NUMBER(n);
    // Make sure +0 is considered greater than -0.
    if (NUMBER_IS_NAN(n) || n > r || (r === 0 && n === 0 && %_IsMinusZero(r))) {
      r = n;
    }
  }
  return r;
}
Here is the repository.

Below is how to implement the functions if Math.min() and Math.max() did not exist.
Functions have an arguments object, which you can iterate through to get its values.
It's important to note that Math.min() with no arguments returns Infinity, and Math.max() with no arguments returns -Infinity.
function min() {
  var result = Infinity;
  for (var i in arguments) {
    if (arguments[i] < result) {
      result = arguments[i];
    }
  }
  return result;
}
function max() {
  var result = -Infinity;
  for (var i in arguments) {
    if (arguments[i] > result) {
      result = arguments[i];
    }
  }
  return result;
}
//Tests
console.log(min(5,3,-2,4,14)); //-2
console.log(Math.min(5,3,-2,4,14)); //-2
console.log(max(5,3,-2,4,14)); //14
console.log(Math.max(5,3,-2,4,14)); //14
console.log(min()); //Infinity
console.log(Math.min()); //Infinity
console.log(max()); //-Infinity
console.log(Math.max()); //-Infinity
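Note that these simple versions differ from the built-ins in two ways: they never convert their arguments to numbers, and they ignore NaN, whereas Math.min returns NaN if any argument is NaN. A closer sketch (still omitting the -0 handling the spec requires):

```javascript
function minLikeBuiltin() {
  var result = Infinity;
  for (var i = 0; i < arguments.length; i++) {
    var n = Number(arguments[i]); // ToNumber, as the built-in does
    if (isNaN(n)) return NaN;     // any NaN poisons the result
    if (n < result) result = n;
  }
  return result;
}
console.log(minLikeBuiltin('3', 5, -2)); // -2
console.log(minLikeBuiltin(1, NaN));     // NaN
console.log(minLikeBuiltin());           // Infinity
```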

Let's take a look at the specifications (which could/should help you in implementation!)
In ECMAScript 1st Edition (ECMA-262) (the initial definitions for both Math.max/min), we see the following:
15.8.2.11 max(x, y)
Returns the larger of the two arguments.
• If either argument is NaN, the result is NaN.
• If x>y, the result is x.
• If y>x, the result is y.
• If x is +0 and y is +0, the result is +0.
• If x is +0 and y is −0, the result is +0.
• If x is −0 and y is +0, the result is +0.
• If x is −0 and y is −0, the result is −0.
15.8.2.12 min(x, y)
Returns the smaller of the two arguments.
• If either argument is NaN, the result is NaN.
• If x<y, the result is x.
• If y<x, the result is y.
• If x is +0 and y is +0, the result is +0.
• If x is +0 and y is −0, the result is −0.
• If x is −0 and y is +0, the result is −0.
• If x is −0 and y is −0, the result is −0.
Later versions of the specification give us:
ECMAScript 5.1
15.8.2.11 max ( [ value1 [ , value2 [ , … ] ] ] )
Given zero or more arguments, calls ToNumber on each of the arguments and returns the largest of the resulting values.
• If no arguments are given, the result is −∞.
• If any value is NaN, the result is NaN.
• The comparison of values to determine the largest value is done as in 11.8.5 except that +0 is considered to be larger than −0.
The length property of the max method is 2.
15.8.2.12 min ( [ value1 [ , value2 [ , … ] ] ] )
Given zero or more arguments, calls ToNumber on each of the arguments and returns the smallest of the resulting values.
• If no arguments are given, the result is +∞.
• If any value is NaN, the result is NaN.
• The comparison of values to determine the smallest value is done as in 11.8.5 except that +0 is considered to be larger than −0.
The length property of the min method is 2.
The reference to 11.8.5 can be found here: The Abstract Relational Comparison Algorithm
ECMAScript 2015
20.2.2.24 Math.max ( value1, value2 , …values )
Given zero or more arguments, calls ToNumber on each of the arguments and returns the largest of the resulting values.
• If no arguments are given, the result is −∞.
• If any value is NaN, the result is NaN.
• The comparison of values to determine the largest value is done using the Abstract Relational Comparison algorithm (7.2.11) except that +0 is considered to be larger than −0.
The length property of the max method is 2.
20.2.2.25 Math.min ( value1, value2 , …values )
Given zero or more arguments, calls ToNumber on each of the arguments and returns the smallest of the resulting values.
• If no arguments are given, the result is +∞.
• If any value is NaN, the result is NaN.
• The comparison of values to determine the smallest value is done using the Abstract Relational Comparison algorithm (7.2.11) except that +0 is considered to be larger than −0.
The length property of the min method is 2.
And again, 7.2.11 can be found here: Abstract Relational Comparison
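Following those steps literally gives an implementation like this (the name specMax is mine); Object.is is used to tell -0 from +0, since === cannot:

```javascript
function specMax() {
  var highest = -Infinity;
  for (var i = 0; i < arguments.length; i++) {
    var n = Number(arguments[i]); // "calls ToNumber on each of the arguments"
    if (isNaN(n)) return NaN;     // "If any value is NaN, the result is NaN"
    // "+0 is considered to be larger than -0":
    if (n > highest || (n === 0 && Object.is(highest, -0))) {
      highest = n;
    }
  }
  return highest;                 // -Infinity when no arguments are given
}
console.log(specMax());                    // -Infinity
console.log(specMax('5', 3));              // 5
console.log(Object.is(specMax(-0, 0), 0)); // true (+0 wins over -0)
console.log(specMax(1, NaN, 2));           // NaN
```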

Basic functionality:
Math.max() and Math.min() operate on numbers (or values they can coerce into numbers); you cannot directly pass an array as a parameter.
Ex:
Math.max(1, 52, 28)
You can pass any number of comma-delimited values.
Arrays:
This example shows how one could apply them to arrays:
JavaScript: min & max Array values?
Basically the following works:
Math.max.apply(null, [1,5,2,3]);
Why does that work?
It works because apply is a method available on every function; it calls the function with the elements of an array as individual arguments.
Math.max.apply(null, [1,5,2,3]) is the same as Math.max(1,5,2,3)
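Since ES2015 you can write the same thing with spread syntax, which expands the array into individual arguments:

```javascript
var values = [1, 5, 2, 3];
console.log(Math.max.apply(null, values)); // 5
console.log(Math.max(...values));          // 5, equivalent to the apply call above
```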

Well, here's min without Math.min (code is in ES6).
function min() {
  return Array.from(arguments).reduce(
    (minSoFar, next) => minSoFar < next ? minSoFar : next,
    Infinity
  )
}
The same logic could be implemented with a simple loop. You would just need to keep track of one variable through your iteration: the lowest value you've seen so far. The initial value of minSoFar is Infinity, because any Number except Infinity is less than Infinity, and in the case of no arguments we want to return Infinity itself, since that's what Math.min() with no arguments evaluates to.
function min() {
  let minSoFar = Infinity
  for (let i = 0, l = arguments.length; i < l; i++) {
    const next = arguments[i]
    minSoFar = minSoFar < next ? minSoFar : next
  }
  return minSoFar
}
Max can be implemented with pretty much the same logic, only you're keeping track of the highest value you've seen so far, and the initial value is -Infinity.
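For completeness, here is that max as code, mirroring the reduce version of min above:

```javascript
function max() {
  return Array.from(arguments).reduce(
    (maxSoFar, next) => maxSoFar > next ? maxSoFar : next,
    -Infinity
  )
}
console.log(max(5, 3, -2, 4, 14)); // 14
console.log(max());                // -Infinity
```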

This is easy to implement with Array.prototype.reduce:
function min() {
  var args = Array.prototype.slice.call(arguments);
  var minValue = args.reduce(function(currentMin, nextNum) {
    if (nextNum < currentMin) {
      // nextNum is less than currentMin, so we return nextNum,
      // which becomes the new currentMin
      return nextNum;
    }
    else {
      return currentMin;
    }
  }, Infinity);
  return minValue;
}

Here is the implementation of Math.min and Math.max from a real Javascript engine
Math.max : https://github.com/v8/v8/blob/cd81dd6d740ff82a1abbc68615e8769bd467f91e/src/js/math.js#L78-L102
Math.min : https://github.com/v8/v8/blob/cd81dd6d740ff82a1abbc68615e8769bd467f91e/src/js/math.js#L105-L129

Related

why the small number string is greater than big number string in javascript [duplicate]

Comparison operators like > and < return a Boolean value even when both operands are string values.
I tried a few examples:
/* String vs String */
console.log('firstName' < 'lastname'); // true
console.log('firstName' < 'Firstname'); // false
console.log('!firstName' < 'lastname'); // true
console.log('!firstName' < 'Firstname'); // true
console.log('!firstName' < '!Firstname'); // false
console.log('!firstName' < '_!Firstname'); // true
console.log('#!firstName' < '_!Firstname'); // true
console.log('#!firstName' < '2_!Firstname'); // false
/* String vs Number */
console.log('#!firstName' < 2); // false
console.log('#!firstName' < -1); // false
/* String vs Special Numbers */
console.log('#!firstName' < Infinity); // false
console.log('#!firstName' < -Infinity); // false
console.log('#!firstName' < -Infinity + Infinity); // false
/* String vs NaN */
console.log('#!firstName' < NaN); // false
console.log(NaN.toString()); // "NaN"
console.log('#!firstName' < "NaN"); // true
/* String vs Arrays */
console.log('firstName' < [Infinity, -Infinity]); // false
console.log('firstName' < ['Firstname', Infinity, -Infinity]); // false
console.log('firstName' < ['2_Firstname', Infinity, -Infinity]); // false
I'm really curious to know how JavaScript actually evaluates such expressions. Of the above examples, I find this one the most fascinating: console.log('#!firstName' < Infinity); // false.
So, the question I have is:
How is the comparison done using the "is greater than" and "is less than" operators in JavaScript in these scenarios (from the above examples):
String vs String,
String vs Number,
String vs Special Numbers,
String vs NaN,
String vs Arrays
As said above, the formal specification is in the standard: http://www.ecma-international.org/ecma-262/7.0/#sec-abstract-relational-comparison . In layman's terms, the logic is like this:
1) String vs String
Split both strings into 16-bit code units and compare them numerically. Note that code units != characters, e.g. "cafè" < "cafè" is true (really).
2) String vs other primitive
Convert both to numbers. If one of them is NaN, return false, otherwise compare numerically. +0 and -0 are considered equal, +/-Infinity is bigger/smaller than anything else.
3) String vs Object
Try to convert the object to a primitive, attempting, in order, [Symbol.toPrimitive]("number"), valueOf and toString. If we've got string, proceed to 1), otherwise proceed to 2). For arrays specifically, this will invoke toString which is the same as join.
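The code-units-vs-characters point can be seen with the two Unicode forms of "cafè": precomposed (a single è code point) and decomposed (an e followed by a combining accent). They render identically but compare as different code-unit sequences:

```javascript
var nfc = 'caf\u00E8';   // "cafè" with a single precomposed code point
var nfd = 'cafe\u0300';  // "cafè" as "e" + combining grave accent
console.log(nfc === nfd);  // false: different code units
console.log(nfd < nfc);    // true: 'e' (0x65) sorts before 'è' (0xE8)
console.log(nfc.localeCompare(nfd)); // 0 in engines with full Unicode support
```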
String, String comparison is based on Unicode ordering (a is greater than A).
String, Number comparison first converts the string into a number before comparing (same with infinity).
String, Array comparison first converts the array into a string and then compares as above.
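The array cases from the question follow directly: the array is stringified (joined with commas) and then compared as a string:

```javascript
var arr = ['Firstname', Infinity, -Infinity];
console.log(String(arr));               // "Firstname,Infinity,-Infinity"
console.log('firstName' < arr);         // false: 'f' (102) > 'F' (70)
console.log('firstName' < String(arr)); // false: the same comparison
```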
The precise steps to take are described in the specification, which specifically describes what to do in the case that one (or both) sides of the comparison are NaN or +Infinity or -Infinity. For px < py, for example, the less-than operator calls the Abstract Relational Comparison Algorithm:
11.8.5 The Abstract Relational Comparison Algorithm
(If both items being compared are not strings, then:)
Let nx be the result of calling ToNumber(px). Because px and py are primitive values evaluation order is not important.
Let ny be the result of calling ToNumber(py).
If nx is NaN, return undefined.
If ny is NaN, return undefined.
If nx and ny are the same Number value, return false.
If nx is +0 and ny is −0, return false.
If nx is −0 and ny is +0, return false.
If nx is +∞, return false.
If ny is +∞, return true.
If ny is −∞, return false.
If nx is −∞, return true.
If the mathematical value of nx is less than the mathematical value of ny —note that these mathematical values are both finite and not both zero—return true. Otherwise, return false.
Else, both px and py are Strings
If py is a prefix of px, return false. (A String value p is a prefix of String value q if q can be the result of concatenating p and some other String r. Note that any String is a prefix of itself, because r may be the empty String.)
If px is a prefix of py, return true.
Let k be the smallest nonnegative integer such that the character at position k within px is different from the character at position k within py. (There must be such a k, for neither String is a prefix of the other.)
Let m be the integer that is the code unit value for the character at position k within px.
Let n be the integer that is the code unit value for the character at position k within py.
If m < n, return true. Otherwise, return false.
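For primitives, those steps can be transcribed almost mechanically (a sketch; abstractLessThan is my own name, and undefined stands in for the spec's undefined result, which the < operator then turns into false):

```javascript
function abstractLessThan(px, py) {
  if (typeof px === 'string' && typeof py === 'string') {
    // The prefix/code-unit steps; deferring to the engine's own string <
    return px < py;
  }
  var nx = Number(px);
  var ny = Number(py);
  if (isNaN(nx) || isNaN(ny)) return undefined; // NaN: no answer
  if (nx === ny) return false;                  // also covers +0 vs -0
  if (nx === Infinity) return false;
  if (ny === Infinity) return true;
  if (ny === -Infinity) return false;
  if (nx === -Infinity) return true;
  return nx < ny;
}
console.log(abstractLessThan('2', 10));            // true: '2' converts to 2
console.log(abstractLessThan('2', '10'));          // false: string comparison
console.log(abstractLessThan(0, NaN));             // undefined
console.log(abstractLessThan(Infinity, Infinity)); // false
```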
When both items being compared are strings, it effectively results in the code points of each character being compared. For example, 'firstName' < 'lastname' because the character code of f (102) is smaller than the character code of l (108). For '!firstName' < 'Firstname', the character code of ! (33) is smaller than the character code of F (70), so that evaluates to true as well. See the following snippet for an example of the implementation:
function compare(left, right) {
  for (let i = 0; i < left.length; i++) {
    const c1 = left[i].charCodeAt();
    const c2 = right[i].charCodeAt();
    if (c1 !== c2) {
      console.log('Char code comparison:', c1 < c2, '< comparison:', left < right);
      break;
    }
  }
}
/* String vs String */
compare('firstName', 'lastname'); // true
compare('firstName', 'Firstname'); // false
compare('!firstName', 'lastname'); // true
compare('!firstName', 'Firstname'); // true
compare('!firstName', '!Firstname'); // false
compare('!firstName', '_!Firstname'); // true
compare('#!firstName', '_!Firstname'); // true
compare('#!firstName', '2_!Firstname'); // false

Why does Javascript not follow ECMA specs on Infinity/NaN comparisons?

I have tested in Chrome, Firefox, Safari. They all give the same results on these comparisons.
0 < NaN returns false.
Infinity < Infinity returns false.
-Infinity < -Infinity returns false.
While according to the Abstract Relational Comparison algorithm, in the 4h and 4i steps, the above expressions should return undefined, true, true.
What am I missing here?
lval < rval, when evaluated, does:
Let r be the result of performing Abstract Relational Comparison lval < rval.
ReturnIfAbrupt(r).
If r is undefined, return false. Otherwise, return r.
Although "Abstract Relational Comparison" (ARC) may return undefined, the final result of the evaluation of the < operator is always true or false.
The actual comparison of numbers to other numbers is shown in 6.1.6.1.12 Number::lessThan ( x, y ); see how ARC says:
f. If Type(nx) is the same as Type(ny), return Type(nx)::lessThan(nx, ny).
So nothing below step F in ARC is relevant for these expressions you're checking, because in each of the expressions, you're comparing a number to another number.
0 < NaN fulfills step 2 of lessThan:
If y is NaN, return undefined.
resulting in ARC returning undefined, resulting in a final value of false ("If r is undefined, return false").
Infinity < Infinity first fulfills step 6, which is:
If x is +∞, return false.
-Infinity < -Infinity first fulfills step 8, which is:
If y is -∞, return false.
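One observable consequence of the undefined-to-false mapping: <= is specified via the reversed comparison, so a comparison that returns false behaves differently from one that returns undefined:

```javascript
// NaN: the comparison itself is undefined, so both directions are false
console.log(0 < NaN, NaN < 0);    // false false
console.log(0 <= NaN, NaN <= 0);  // false false
// Infinity < Infinity is a definite false, so the negated reversed form is true
console.log(Infinity < Infinity);  // false
console.log(Infinity <= Infinity); // true
```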

Why `Number(new Boolean(false)) === 0`

Boolean(new Boolean(...)) === true because new Boolean(...) is an object.
But why Number(new Boolean(false)) === 0 (+new Boolean(false) === 0) and Number(new Boolean(true)) === 1? Why not NaN*?
Why is there no unboxing in the first example, but there is in the second?
*isNaN(Number({})) === true
As @ASDFGerte mentioned, this is because ToNumber(), which Number() performs on its argument, first calls ToPrimitive() when the argument is an object. That is why the value is treated as a primitive Boolean rather than as an object.
isNaN(Number({})) === true
While this is correct, I think you're equating an object with a Boolean object and the two are not equivalent.
Let's start from the most important thing: Number converts the argument it's given into a numeric value. However, it doesn't do that arbitrarily; there are rules about numeric conversion, and when it comes to objects, it's not as simple as "all objects are NaN". Consider this:
const obj0 = {}
const obj1 = {
  toString() {
    return 1;
  }
}
const obj2 = {
  toString() {
    return 1;
  },
  valueOf() {
    return 2;
  }
}
const obj3 = {
  toString() {
    return 1;
  },
  valueOf() {
    return 2;
  },
  [Symbol.toPrimitive]() {
    return 3;
  }
}
const obj4 = Object.create(null);
console.log(Number(obj0)); //NaN
console.log(Number(obj1)); //1
console.log(Number(obj2)); //2
console.log(Number(obj3)); //3
console.log(Number(obj4)); //Error
Not all objects are equal when converting to a number. Some happen to be even more unequal than others.
When Number is given an object, it goes through the process of converting it to a primitive with a preference (hint) for a number. To do this, it goes through the following steps:
Determine the hint to be "number".
Check if the object implements the Symbol.toPrimitive method.
if so, call it with the hint ("number")
If that doesn't exist, look for a valueOf method.
this is done because the hint is "number", so valueOf is checked first.
If that doesn't exist, check for a toString method
again, this is based on the hint being "number". If the hint was "string", the last two steps would be reversed.
If that doesn't exist, raise an error.
Once an appropriate method has been found, it is executed and the value it returns is converted to a number.
We haven't touched Boolean yet - this is just how the generic Number does the conversion. So, in summary - an object can be converted to a primitive number, if it implements the correct functionality to do so.
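Those steps can be sketched in plain JavaScript (toPrimitiveNumber is my own name, not a spec internal, and this simplifies the real OrdinaryToPrimitive slightly):

```javascript
function toPrimitiveNumber(obj) {
  // Steps 1-2: prefer Symbol.toPrimitive, called with the "number" hint
  var exotic = obj[Symbol.toPrimitive];
  if (typeof exotic === 'function') return exotic.call(obj, 'number');
  // Steps 3-4: with a "number" hint, valueOf is tried before toString
  var names = ['valueOf', 'toString'];
  for (var i = 0; i < names.length; i++) {
    var method = obj[names[i]];
    if (typeof method === 'function') {
      var result = method.call(obj);
      if (result === null || typeof result !== 'object') return result;
    }
  }
  // Step 5: nothing produced a primitive
  throw new TypeError('Cannot convert object to primitive value');
}
console.log(Number(toPrimitiveNumber(new Boolean(false)))); // 0
console.log(toPrimitiveNumber({ valueOf: function() { return 2; } })); // 2
```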
Boolean objects do implement the correct functionality - they have a valueOf method that returns the primitive boolean they hold:
const T1 = new Boolean(true);
const T2 = new Boolean(true);
console.log("T1.valueOf()", T1.valueOf());
console.log("typeof T1.valueOf()", typeof T1.valueOf());
console.log("T1 === T2", T1 === T2);
console.log("T1.valueOf() === T2.valueOf()", T1.valueOf() === T2.valueOf());
So, in that case:
Number(new Boolean(true)) = Number(new Boolean(true).valueOf()) = Number(true)
And if we generalise it a bit, then: Number(new Boolean(bool)) = Number(bool)
From the ToNumber conversion we know that true is turned into 1 while false is turned into 0. Thus the equality Number(new Boolean(false)) === 0 makes perfect sense, since Number(false) is indeed 0. Same with Number(new Boolean(true)) === 1.
Boolean objects have a valueOf method, and it can be used to customize the primitive value of the object in type conversions.
Boolean#valueOf() returns true for new Boolean(true) and false for new Boolean(false).
This method is internally called by both the Number function and the unary plus (+) operator, so the code effectively becomes:
Number(true)
which is equal to 1, as true has the numeric value of 1.
You can also implement a valueOf function on any object, to make it have a custom value, for example:
const object = {
  valueOf() {
    return 10
  }
}
console.log(Number(object)) //10
Because true represents 1
and false represents 0
0 is false because they're both zero elements in common semirings (see Semiring on Wikipedia: http://en.wikipedia.org/wiki/Semiring). Even though they are distinct data types, it makes intuitive sense to convert between them because they belong to isomorphic algebraic structures.
0 is the identity for addition and zero for multiplication. This is true for integers and rationals, but not IEEE-754 floating-point numbers: 0.0 * NaN = NaN and 0.0 * Infinity = NaN.
false is the identity for Boolean xor (⊻) and zero for Boolean and (∧). If Booleans are represented as {0, 1}, the set of integers modulo 2, you can think of ⊻ as addition without carry and ∧ as multiplication.
"" and [] are identity for concatenation, but there are several operations for which they make sense as zero. Repetition is one, but repetition and concatenation do not distribute, so these operations don't form a semiring.
Such implicit conversions are helpful in small programs, but in the large can make programs more difficult to reason about. Just one of the many tradeoffs in language design.
Quoted from an answer to "1 = false and 0 = true?".

Negation operator (!) used on a recursive call?

I can't figure out how this recursive call works. Using the not operator in the recursive call somehow makes this function determine if the argument given is odd or even. When the '!' is left out fn(2) and fn(5) both return true.
This example is taken out of JavaScript Allonge free e-book, which, so far has been excellent.
var fn = function even(n) {
  if (n === 0) {
    return true;
  }
  else return !even(n - 1);
}
fn(2); //=> true
fn(5); //=> false
If n === 0 the result is true.
If n > 0 it returns the inverse of n - 1.
If n === 1 it will return !even(0), or false.
If n === 2 it will return !even(1), or !!even(0), or true.
If n === 3 it will return !even(2), or !!even(1), or !!!even(0), or false.
And so on...
In general:
If n is even, the result is inverted an even number of times, meaning it will return true.
If n is odd, the result is inverted an odd number of times, meaning it will return false.
The above function recursively returns the negation of itself. The base case is when the number provided becomes zero, and each time the function calls itself the number is decreased by one. As a result we have n recursive negations, starting with true at the base case (where n is the number provided). For an odd number of negations starting from true you get false as the result, and for an even number you get true.
In summary:
Starting from the given n
recursive reduction of n
Base case: n = 0 returns true
recursive negation of the returned value (starting from true at the base case)
Result:
for an odd number of negations the value returned is false
for an even number of negations the value returned is true
Let's say we have the example n = 5.
Recursive reduction of n; values of n at each level:
5
4
3
2
1
0 (base-case)
returned values at each level:
true (base case)
!true
!!true
!!!true
!!!!true
!!!!!true
A variant of your code could be:
function even(n) {
  if (n === 0)
    return true;
  else
    return odd(n - 1);
}
function odd(n) {
  if (n === 1)
    return true;
  else
    return even(n - 1);
}
We know that the non-negative integers, starting from 0, alternate between even and odd. What your example does is define odd to be !even, which is correct since even and odd are complementary. In your version using !, the negation needs to be done as a continuation: every call still has work to do with the answer after the recursive call returns.
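The parity claim is easy to check against the usual modulo test (a sanity check of my own, not from the book):

```javascript
var fn = function even(n) {
  if (n === 0) return true;
  else return !even(n - 1);
};
// The recursive definition agrees with n % 2 === 0 for every n tried
for (var n = 0; n <= 20; n++) {
  console.assert(fn(n) === (n % 2 === 0), 'mismatch at ' + n);
}
console.log(fn(2), fn(5)); // true false
```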

Javascript string/integer comparisons

I store some parameters client-side in HTML and then need to compare them as integers. Unfortunately I have come across a serious bug that I cannot explain. The bug seems to be that my JS reads parameters as strings rather than integers, causing my integer comparisons to fail.
I have generated a small example of the error, which I also can't explain. The following returns 'true' when run:
console.log("2" > "10")
Parse the string into an integer using parseInt:
javascript:alert(parseInt("2", 10)>parseInt("10", 10))
Checking that strings are integers is separate from comparing whether one is greater or less than another. You should always compare number with number and string with string, as the algorithm for dealing with mixed types is not easy to remember.
'00100' < '1' // true
as they are both strings, so only the first zero of '00100' is compared to '1', and because its char code is lower, it evaluates as lower.
However:
'00100' < 1 // false
as the RHS is a number, so the LHS is converted to a number before the comparison.
A simple integer check is:
function isInt(n) {
  return /^[+-]?\d+$/.test(n);
}
It doesn't matter whether n is a number or a string; it will be converted to a string before the test.
If you really care about performance, then:
var isInt = (function() {
  var re = /^[+-]?\d+$/;
  return function(n) {
    return re.test(n);
  }
}());
Noting that strings like '1.0' will return false. If you want to count such values as integers too, then:
var isInt = (function() {
  var re = /^[+-]?\d+$/;
  var re2 = /\.0+$/;
  return function(n) {
    return re.test(('' + n).replace(re2, ''));
  }
}());
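Restating that second version with a few sample results (my own checks):

```javascript
var isInt = (function() {
  var re = /^[+-]?\d+$/;   // optional sign, digits only
  var re2 = /\.0+$/;       // a trailing ".0", ".00", ... to strip first
  return function(n) {
    return re.test(('' + n).replace(re2, ''));
  };
}());
console.log(isInt('42'));    // true
console.log(isInt(-7));      // true
console.log(isInt('1.0'));   // true: the trailing .0 is stripped first
console.log(isInt('1.5'));   // false
console.log(isInt('2wewe')); // false
```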
Once that test is passed, converting to a number for the comparison can use a number of methods. I don't like parseInt() because it truncates floats to make them look like ints, so all of the following produce the same value (2):
parseInt(2.9)
parseInt('002', 10)
parseInt('2wewe')
and so on.
Once numbers are tested as integers, you can use the unary + operator to convert them to numbers in the comparision:
if (isInt(a) && isInt(b)) {
  if (+a < +b) {
    // a and b are integers and a is less than b
  }
}
Other methods are:
Number(a); // liked by some because it's clear what is happening
a * 1 // Not really obvious but it works, I don't like it
Comparing Numbers to String Equivalents Without Using parseInt
console.log(Number('2') > Number('10'));
console.log( ('2'/1) > ('10'/1) );
var item = { id: 998 }, id = '998';
var isEqual = (item.id.toString() === id.toString()); // true
Use parseInt and compare as below:
javascript:alert(parseInt("2")>parseInt("10"))
Always remember: when we compare two strings, the comparison happens character by character. So '2' > '12' is true, because the comparison is effectively '2' > '1', and in code unit order '2' is greater than '1'. So it comes out true.
I hope this helps.
You can also use the Number() function, since it converts its argument to a number representing that object's value.
Eg: javascript:alert( Number("2") > Number("10"))
The unary + operator will coerce the string to a number.
console.log( +"2" > +"10" )
The answer is simple: just divide the string by 1.
Examples:
"2" > "10" - true
but
"2"/1 > "10"/1 - false
Also you can check if string value really is number:
!isNaN("1"/1) - true (number)
!isNaN("1a"/1) - false (string)
!isNaN("01"/1) - true (number)
!isNaN(" 1"/1) - true (number)
!isNaN(" 1abc"/1) - false (string)
But
!isNaN(""/1) - true (but string)
Solution
number !== "" && !isNaN(number/1)
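Wrapped up as a reusable helper (the name isNumeric is my own):

```javascript
function isNumeric(value) {
  // Empty string divides to 0, so it must be excluded explicitly
  return value !== '' && !isNaN(value / 1);
}
console.log(isNumeric('1'));   // true
console.log(isNumeric('1a'));  // false
console.log(isNumeric(' 1'));  // true
console.log(isNumeric(''));    // false
```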
Both "2" and "10" are strings, so "2" > "10" is compared lexicographically rather than numerically.
Use the following:
var greater = parseInt("2") > parseInt("10");
alert("Is greater than? " + greater);
var less = parseInt("2") < parseInt("10");
alert("Is less than? " + less);
