parseInt method in JavaScript

I am not able to understand what the second parameter of the parseInt method in JavaScript is used for. Below are some of the outputs:
parseInt("9",10) ====> Output: 9
parseInt("9",100) ====> Output: NaN
parseInt("90",10) ====> Output: 9
parseInt("90",100) ====> Output: NaN
Kindly explain what is the use of the second parameter.

The second parameter is known as the radix -- see http://en.wikipedia.org/wiki/Radix -- and it specifies the numbering system. See here for a more detailed explanation: http://mir.aculo.us/2010/05/12/adventures-in-javascript-number-parsing/

It's the base. If it's 10, parseInt works in plain decimal, as you'd expect.
parseInt("9",8) ===> NaN
parseInt("7",8) ===> 7
Docs are here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt

The radix is used to specify the range of allowed numerals, also known as the number system or base; when omitted, it guesses the value by looking at the prefix of your string, e.g. a "0x" prefix means radix of 16 (hexadecimal). It's a good practice to always set this explicitly to 10 for the decimal system.
When a numeral is outside of the specified radix, parseInt() will return NaN.
As for the NaN result you're seeing with a radix of 100: the allowed range of the radix is [2, 36], i.e. the largest base uses the digits 0-9 plus the letters a-z. Any radix outside that range makes parseInt() return NaN.
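A couple of quick console checks illustrate both points (a small sketch; the values shown are the standard results):
parseInt("9", 8)    // NaN - "9" is not a valid octal digit
parseInt("79", 8)   // 7   - parsing stops at the invalid "9"
parseInt("z", 36)   // 35  - base 36 uses 0-9 plus a-z
parseInt("9", 37)   // NaN - the radix is outside [2, 36]
parseInt("9", 100)  // NaN - same reason as above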

Quoting the MDN (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt):
radix
An integer that represents the radix of the above mentioned string. Always specify this parameter to eliminate reader confusion and to guarantee predictable behavior. Different implementations produce different results when a radix is not specified.
Think of it as the base of the number you are trying to parse; parseInt returns the result as a base-10 number:
parseInt(11,10) // -> 11
parseInt(11,2) //-> 3 (binary)
parseInt(1,6) // 1
parseInt(2,6) // 2
parseInt(3,6) // 3
parseInt(4,6) // 4
parseInt(5,6) // 5
parseInt(6,6) // NaN
parseInt(7,6) // NaN
parseInt(8,6) // NaN
parseInt(9,6) // NaN
parseInt(10,6) // 6, since 5 is the highest digit in base 6, so the number after 5 is written 10
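If it helps to see the arithmetic spelled out, here is a minimal sketch of the same digit-by-digit conversion (toDecimal is a made-up helper name used only for illustration, not part of the language):
function toDecimal(str, base) {
  var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
  var result = 0, seen = false;
  str = String(str).toLowerCase();
  for (var i = 0; i < str.length; i++) {
    var value = digits.indexOf(str[i]);
    if (value < 0 || value >= base) break; // stop at the first invalid digit, like parseInt
    result = result * base + value;
    seen = true;
  }
  return seen ? result : NaN;
}
toDecimal(11, 2);  // 3
toDecimal(10, 6);  // 6
toDecimal(6, 6);   // NaN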

The 2nd parameter is the radix (or number base). The default is base 10, so if you omit it or pass 10, you'll get the same decimal number that you're parsing. If you specify base 16 (hex), then the parsed number is calculated like this:
Examples:
parseInt("1",16) = 1
parseInt("2",16) = 2
parseInt("3",16) = 3
...
parseInt("9",16) = 9
parseInt("A",16) = 10
parseInt("B",16) = 11
...
parseInt("F",16) = 15
parseInt("10",16) = 16 (1x16 + 0)
parseInt("11",16) = 17 (1x16 + 1)
parseInt("12",16) = 18 (1x16 + 2)
Does that help?

First, one of your examples is incorrect: parseInt("90", 10) should output 90.
The second parameter is the radix (the base). parseInt("90", 10) means parse "90" as a decimal number.
But why does parseInt(xxx, 100) output NaN? Try parseInt("90", 36) and parseInt("90", 37).
See the difference? There are only 36 available symbols (0-9 and a-z), so there is no way to write digits for base 37 or higher.
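The actual results of that experiment look like this (a quick sketch of what the console prints):
parseInt("90", 36)  // 324 (9x36 + 0)
parseInt("90", 37)  // NaN - 37 is outside the supported radix range of [2, 36]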
Finally, sorry about my English.

Related

I am puzzled when I use parseInt in JavaScript [duplicate]

I'm reading this but I'm confused by what is written in the parseInt with a radix argument chapter
Why is it that parseInt(8, 3) → NaN and parseInt(16, 3) → 1?
AFAIK 8 and 16 are not base-3 numbers, so parseInt(16, 3) should return NaN too
This is something people trip over all the time, even when they know about it. :-) You're seeing this for the same reason parseInt("1abc") returns 1: parseInt stops at the first invalid character and returns whatever it has at that point. If there are no valid characters to parse, it returns NaN.
parseInt(8, 3) means "parse "8" in base 3" (note that it converts the number 8 to a string; details in the spec). But in base 3, the single-digit numbers are just 0, 1, and 2. It's like asking it to parse "9" in octal. Since there were no valid characters, you got NaN.
parseInt(16, 3) is asking it to parse "16" in base 3. Since it can parse the 1, it does, and then it stops at the 6 because it can't parse it. So it returns 1.
Since this question is getting a lot of attention and might rank highly in search results, here's a rundown of options for converting strings to numbers in JavaScript, with their various idiosyncrasies and applications (lifted from another answer of mine here on SO):
parseInt(str[, radix]) - Converts as much of the beginning of the string as it can into a whole (integer) number, ignoring extra characters at the end. So parseInt("10x") is 10; the x is ignored. Supports an optional radix (number base) argument, so parseInt("15", 16) is 21 (15 in hex). If there's no radix, assumes decimal unless the string starts with 0x (or 0X), in which case it skips those and assumes hex. (Some browsers used to treat strings starting with 0 as octal; that behavior was never specified, and was specifically disallowed in the ES5 specification.) Returns NaN if no parseable digits are found.
parseFloat(str) - Like parseInt, but does floating-point numbers and only supports decimal. Again extra characters on the string are ignored, so parseFloat("10.5x") is 10.5 (the x is ignored). As only decimal is supported, parseFloat("0x15") is 0 (because parsing ends at the x). Returns NaN if no parseable digits are found.
Unary +, e.g. +str - (E.g., implicit conversion) Converts the entire string to a number using floating point and JavaScript's standard number notation (just digits and a decimal point = decimal; 0x prefix = hex; 0o prefix = octal [ES2015+]; some implementations extend it to treat a leading 0 as octal, but not in strict mode). +"10x" is NaN because the x is not ignored. +"10" is 10, +"10.5" is 10.5, +"0x15" is 21, +"0o10" is 8 [ES2015+]. Has a gotcha: +"" is 0, not NaN as you might expect.
Number(str) - Exactly like implicit conversion (e.g., like the unary + above), but slower on some implementations. (Not that it's likely to matter.)
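For a quick side-by-side of those four options (a sketch; these are the standard results for the inputs shown):
parseInt("10.5x")    // 10   - stops at the "."
parseFloat("10.5x")  // 10.5 - stops at the "x"
+"10.5x"             // NaN  - the whole string must be numeric
Number("10.5x")      // NaN  - same rule as unary +
+""                  // 0    - the empty-string gotcha
Number("0x15")       // 21   - the hex prefix is honoured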
For the same reason that
>> parseInt('1foobar',3)
<- 1
In the doc, parseInt takes a string. And
If string is not a string, then it is converted to a string
So 16, 8, or '1foobar' is first converted to string.
Then
If parseInt encounters a character that is not a numeral in the specified radix, it ignores it and all succeeding characters
Meaning it converts as much as it can: the 6, the 8, and foobar are ignored, and only what comes before them is converted. If there is nothing before them, NaN is returned.
/***** Radix 3: allowed digits are [0,1,2] *****/
parseInt(4, 3); // NaN - we can't represent 4 using radix 3 [allowed: 0,1,2]
parseInt(3, 3); // NaN - we can't represent 3 using radix 3 [allowed: 0,1,2]
parseInt(2, 3); // 2 - yes we can!
parseInt(8, 3); // NaN - we can't represent 8 using radix 3 [allowed: 0,1,2]
parseInt(16, 3); // 1
// '16' => '1' (the 6 is ignored because it is not in [0,1,2])
/***** Radix 16: allowed digits/characters are [0-9, A-F] *****/
parseInt('FOX9', 16); // 15
// 'FOX9' => 'F' => 15 (the decimal value of 'F')
// everything from 'O' onward is ignored once the out-of-range 'O' is encountered
// ('O' is NOT in [0-9, A-F])
Some more examples:
parseInt('45', 13); // 57
// both 4 and 5 are allowed digits in radix 13 [0-9, A-C]
parseInt('1011', 2); // 11 (decimal, NOT binary)
parseInt(7, 8); // 7
// '7' => 7 in radix 8 [0-7]
parseInt(786, 8); // 7
// '786' => '7' => 7 (the 8 and everything after it is ignored because 8 is NOT in [0-7])
parseInt(76, 8); // 62
// both 7 and 6 are allowed; '76' in base 8 is 62 in base 10

Why does .toString(16) convert rgb, decimal, or other inputs into hexadecimal?

I've tried searching everywhere I could to find the answer as to why .toString(16) converts a number to a hexadecimal value. My first question is, why 16? My second question is, how can this return letters even though I see no letters going into the code? For example, I don't understand how the following code returns ff instead of a number.
var r = 255;
r.toString(16); //returns ff
If anyone has any links or insights as to why this is, please let me know. I'm very curious. Thank you in advance!
Hexadecimal is base 16. Break down the words: hexa, meaning 6; decimal, meaning 10. 10 + 6 = 16. A few major bases are:
Base 2: Binary, 2 digits: [0, 1]
Base 8: Octal, 8 digits: [0, 7]
Base 10: Decimal, 10 digits: [0, 9]
Base 16: Hexadecimal, 16 digits: [0, 9] and [A, F]
Per the MDN documentation:
For Number objects, the toString() method returns a string representation of the object in the specified radix.
Parameters
radix:
Optional. An integer between 2 and 36 specifying the base to use for representing numeric values.
This means it converts the number into a string, using the given radix. The syntax for Number.prototype.toString is:
number.toString([radix])
Where radix is optional. If you specify the radix, it will convert with that base, so 16 is hexadecimal. If radix is not specified, 10 (decimal) is assumed. Here's a snippet:
var num = 16;
console.log(num.toString()) //"16", base 10 is assumed here if no radix given
console.log(num.toString(16)) //"10", base 16 is given
Now, regarding RGB values: take (255, 255, 255) [white] as an example. Each individual value (red, green, or blue) is represented in hex. Since 255 is 0xFF, or simply FF, in hex, the full representation is FFFFFF, which is the ffffff you see.
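If you want to build the full hex string yourself, a minimal sketch could look like this (rgbToHex is a hypothetical helper name used only for illustration):
function rgbToHex(r, g, b) {
  // toString(16) gives "0".."ff"; padStart keeps single-digit channels two characters wide
  return [r, g, b].map(function (v) {
    return v.toString(16).padStart(2, "0");
  }).join("");
}
rgbToHex(255, 255, 255); // "ffffff"
rgbToHex(0, 128, 255);   // "0080ff"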
This is happening because you are using 16 as the radix, as explained here:
http://www.w3schools.com/jsref/jsref_tostring_number.asp
If you are just trying to get back "16" you can just do:
var number = 16;
var numberAsString = number.toString();
It returns a String because toString() always returns a String.
In addition, it is toString(16) because hexadecimal is base 16, so toString(16) converts the given number to a String in hexadecimal form.
http://www.w3schools.com/jsref/jsref_tostring_number.asp
Parameter: radix
Description: Optional. Which base to use for representing a numeric value. Must be an integer between 2 and 36.
2 - The number will show as a binary value
8 - The number will show as an octal value
16 - The number will show as a hexadecimal value
Example: convert a number to a string, using different bases:
var num = 15;
var a = num.toString();
var b = num.toString(2);
var c = num.toString(8);
var d = num.toString(16);
The results of a, b, c, and d will be:
15
1111
17
f
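Going the other way, parseInt with the matching radix reverses each of those conversions (shown here as a quick sketch):
parseInt("1111", 2)  // 15
parseInt("17", 8)    // 15
parseInt("f", 16)    // 15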

JavaScript - A leading zero on a number converts it to a different number. Why is this happening?

A leading zero on a number converts it to some unexpected value.
For example:
017 is getting converted to 15
037 is getting converted to 31
I also found that numbers ending in 8 or 9 stay the same.
For example:
018 is 18
038 is 38
059 is 59
One more thing I found: for each next range of ten, the difference between the converted value and the actual value increases by 2.
For example:
for the range 00-09 the difference is 0, i.e. 07 will be 7 and 04 will be 4
for the range 010-019 the difference is 2: 017 will be 15, 013 will be 11
for the range 020-029 the difference is 4: 027 will be 23, 021 will be 17
and so on...
Here is a snippet for testing: http://jsfiddle.net/rajubera/BxQHF/
I don't understand why this is happening.
How can I get the correct decimal number from a number that has a leading zero?
If there is a leading 0, the number is interpreted as octal (base 8), as long as it is a valid base-8 number (no digits greater than 7).
For example:
017 in base 8 is 1 * 8 + 7 = 15
037 in base 8 is 3 * 8 + 7 = 31
018 is left as 18 because 018 isn't a valid number in base 8
Note that which base is assumed by default can be browser-specific, so it's important to always specify the base/radix when using parseInt:
parseInt("017", 10) === 17
UPDATE based on comments:
parseInt expects a string as the first argument, so
parseInt("012",10) === 12
One of the reasons to "use strict";
(function() {"use strict"; 017})()
// Firefox => SyntaxError: "0"-prefixed octal literals and octal escape sequences are deprecated; for octal literals use the \"0o\" prefix instead
// Chrome, Node => SyntaxError: Octal literals are not allowed in strict mode.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Deprecated_octal

parseInt(null, 24) === 23... wait, what?

Alright, so I was messing around with parseInt to see how it handles values not yet initialized and I stumbled upon this gem. The below happens for any radix 24 or above.
parseInt(null, 24) === 23 // evaluates to true
I tested it in IE, Chrome and Firefox and they all alert true, so I'm thinking it must be in the specification somewhere. A quick Google search didn't give me any results so here I am, hoping someone can explain.
I remember listening to a Crockford speech where he was saying typeof null === "object" because of an oversight causing Object and Null to have a near identical type identifier in memory or something along those lines, but I can't find that video now.
Try it: http://jsfiddle.net/robert/txjwP/
Edit Correction: a higher radix returns different results, 32 returns 785077
Edit 2 From zzzzBov: [24...30]:23, 31:714695, 32:785077, 33:859935, 34:939407, 35:1023631, 36:1112745
tl;dr
Explain why parseInt(null, 24) === 23 is a true statement.
It's converting null to the string "null" and trying to convert that. For radixes 0 through 23, there are no numerals in it that can be converted, so it returns NaN. At 24, "n", the 14th letter of the alphabet, becomes part of the numeral system. At 31, "u", the 21st letter, is added and the entire string can be decoded. From 37 on there is no valid numeral system at all, so NaN is returned.
js> parseInt(null, 36)
1112745
>>> reduce(lambda x, y: x * 36 + y, [(string.digits + string.lowercase).index(x) for x in 'null'])
1112745
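If you want to reproduce the table from the question's second edit yourself, a simple loop does it (a sketch; the output matches the values listed above):
for (var radix = 24; radix <= 36; radix++) {
  console.log(radix, parseInt(null, radix));
}
// 24..30 -> 23, 31 -> 714695, 32 -> 785077, 33 -> 859935, 34 -> 939407, 35 -> 1023631, 36 -> 1112745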
Mozilla tells us:
The parseInt function converts its first argument to a string, parses it, and returns an integer or NaN. If not NaN, the returned value will be the decimal integer representation of the first argument taken as a number in the specified radix (base). For example, a radix of 10 indicates to convert from a decimal number, 8 octal, 16 hexadecimal, and so on. For radices above 10, the letters of the alphabet indicate numerals greater than 9. For example, for hexadecimal numbers (base 16), A through F are used.
In the spec, 15.1.2.2/1 tells us that the conversion to string is performed using the built-in ToString, which (as per 9.8) yields "null" (not to be confused with toString, which would yield "[object Window]"!).
So, let's consider parseInt("null", 24).
Of course, this isn't a base-24 numeric string in entirety, but "n" is: it's decimal 23.
Now, parsing stops after the decimal 23 is pulled out, because "u" isn't found in the base-24 system:
If S contains any character that is
not a radix-R digit, then let Z be the
substring of S consisting of all
characters before the first such
character; otherwise, let Z be S. [15.1.2.2/11]
(And this is why parseInt(null, 23) (and lower radices) gives you NaN rather than 23: "n" is not in the base-23 system.)
Ignacio Vazquez-Abrams is correct, but let's see exactly how it works...
From 15.1.2.2 parseInt (string, radix):
When the parseInt function is called, the following steps are taken:
1. Let inputString be ToString(string).
2. Let S be a newly created substring of inputString consisting of the first character that is not a StrWhiteSpaceChar and all characters following that character. (In other words, remove leading white space.)
3. Let sign be 1.
4. If S is not empty and the first character of S is a minus sign -, let sign be −1.
5. If S is not empty and the first character of S is a plus sign + or a minus sign -, then remove the first character from S.
6. Let R = ToInt32(radix).
7. Let stripPrefix be true.
8. If R ≠ 0, then a. If R < 2 or R > 36, then return NaN. b. If R ≠ 16, let stripPrefix be false.
9. Else, R = 0: a. Let R = 10.
10. If stripPrefix is true, then a. If the length of S is at least 2 and the first two characters of S are either "0x" or "0X", then remove the first two characters from S and let R = 16.
11. If S contains any character that is not a radix-R digit, then let Z be the substring of S consisting of all characters before the first such character; otherwise, let Z be S.
12. If Z is empty, return NaN.
13. Let mathInt be the mathematical integer value that is represented by Z in radix-R notation, using the letters A-Z and a-z for digits with values 10 through 35. (However, if R is 10 and Z contains more than 20 significant digits, every significant digit after the 20th may be replaced by a 0 digit, at the option of the implementation; and if R is not 2, 4, 8, 10, 16, or 32, then mathInt may be an implementation-dependent approximation to the mathematical integer value that is represented by Z in radix-R notation.)
14. Let number be the Number value for mathInt.
15. Return sign × number.
NOTE: parseInt may interpret only a leading portion of string as an integer value; it ignores any characters that cannot be interpreted as part of the notation of an integer, and no indication is given that any such characters were ignored.
There are two important parts here: step 1 (the ToString conversion of the first argument) and step 11 (keeping only the leading radix-R digits). So first of all, we have to find out what the ToString representation of null is. We need to look at Table 13 (ToString Conversions) in section 9.8 for that information.
Great, so now we know that ToString(null) internally yields the string 'null'. But how exactly does parseInt handle digits (characters) that aren't valid in the provided radix?
We look above at 15.1.2.2 and we see the following remark:
If S contains any character that is not a radix-R digit, then let Z be the substring of S consisting of all characters before the first such character; otherwise, let Z be S.
That means we parse all characters up to the first one that is not a valid digit in the specified radix and ignore everything from that point on.
Basically, doing parseInt(null, 24) is the same thing as parseInt('null', 24). The u causes the two l's to be ignored (even though they ARE valid digits in radix 24). Therefore, we can only parse the n, making the entire statement synonymous with parseInt('n', 24). :)
Either way, great question!
parseInt( null, 24 ) === 23
Is equivalent to
parseInt( String(null), 24 ) === 23
which is equivalent to
parseInt( "null", 24 ) === 23
The digits for base 24 are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f, ..., n.
The language spec says
If S contains any character that is not a radix-R digit, then let Z be the substring of S consisting of all characters before the first such character; otherwise, let Z be S.
which is the part that ensures that C-style integer literals like 15L parse properly,
so the above is equivalent to
parseInt( "n", 24 ) === 23
"n" is the 23-rd letter of the digit list above.
Q.E.D.
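The whole chain can be checked directly in a console (standard results):
parseInt(null, 24)    // 23
parseInt("null", 24)  // 23
parseInt("n", 24)     // 23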
I guess null gets converted to the string "null". So n is actually 23 in base 24 (and in base 25 and up), while u is invalid in base 24, so the rest of the string "null" is ignored. That's why it outputs 23 until u becomes a valid digit at base 31.
parseInt uses an alphanumeric representation of digits; in base 24 "n" is a valid digit but "u" is not, so parseInt only parses the leading "n":
parseInt("n", 24) // -> 23
As an example, try this:
alert(parseInt("3x", 24))
The result will be 3: "x" is not a valid digit in base 24, so parsing stops after the 3.
