Less than and greater than producing unexpected results - JavaScript

I'm new to Javascript, and I'm writing a Chrome extension which manipulates the Chrome Omnibox.
I have the following code implemented:
chrome.omnibox.onInputEntered.addListener(
  function(text) {
    console.log('inputEntered: ' + text);
    if (text < 07000000){
      chrome.tabs.create({url:"PRIVATEURL1"+ text});
    }
    if (text > 07000000){
      chrome.tabs.create({url:"PRIVATEURL2"+ text});
    }
  });
Currently, it behaves like this:
Enter 07000001 & be brought to PRIVATEURL2.
Enter 00600000 & be brought to PRIVATEURL1.
Enter 1 & be brought to PRIVATEURL1.
All as expected.
However, enter:
04542226 & be brought to PRIVATEURL2.
06000001 & be brought to PRIVATEURL2.
I don't understand: isn't 04542226 < 07000000, and therefore shouldn't I be brought to PRIVATEURL1?

The answer is rather easy.
Your text is, evidently, a string, say "07000001". When used in an arithmetical comparison, it's cast to a number. Which number? Let's see:
> text
"07000001"
> Number(text)
7000001
Now, that's correct. What is not correct is your number literal, 07000000.
It is more or less known that you can write a hexadecimal number as a literal using the 0x notation:
> 0xFF
255
What is less known is that a 0 prefix is a literal notation for octal numbers. 7000000 in base 8 is the number 1835008:
> 07000000
1835008
So, you should use the literal 7000000 instead:
> "04542226" < 07000000
false
> "04542226" < 7000000
true
Curiously, parseInt/Number will still process the hexadecimal notation in strings, but as we've seen they ignore leading zeroes rather than treating the string as octal. This is specified by the ECMAScript 5 standard.
Note that you can (and should) pass a second argument, the radix, to parseInt if you expect a certain base.
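A minimal sketch of the corrected listener (same placeholder URLs as in the question, comparison logic otherwise unchanged) could look like this:
chrome.omnibox.onInputEntered.addListener(
  function(text) {
    // Convert the input explicitly; the radix 10 avoids any octal surprises.
    var value = parseInt(text, 10);
    if (value < 7000000) {   // plain decimal literal, not 07000000
      chrome.tabs.create({url: "PRIVATEURL1" + text});
    }
    if (value > 7000000) {
      chrome.tabs.create({url: "PRIVATEURL2" + text});
    }
  });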

Related

Why is parseFloat('1/2') == 1 in Javascript

I am unable to understand this:
parseFloat('1/2') == 1    // not expected
parseFloat(1/2)   == 0.5  // expected
parseFloat('0.5') == 0.5  // expected
parseFloat(0.5)   == 0.5  // expected
Is this a bug, or am I doing something wrong? Also, how do I get
parseFloat('1/2') == 0.5
As mentioned in the documentation for parseFloat:
parseFloat parses its argument, and returns a floating point number. If it encounters a character other than a sign (+ or -), numeral (0-9), a decimal point, or an exponent, it returns the value up to that point and ignores that character and all succeeding characters. Leading and trailing spaces are allowed.
so '1/2' is treated as a string.
Not only that: this string does not contain a valid number representation in JavaScript.
A numeric string in JavaScript may include the characters +, -, 0-9, . and e.
/ is not one of them. Therefore parseFloat parses as many leading characters as form a legal number, which in your case is just the 1, and ignores the rest.
1/2 in JavaScript is not a number, but an expression consisting of two numbers and an operator (1 = number, / = operator, 2 = number). What can evaluate expressions?
You can use eval to evaluate the fractional form.
console.log(eval('2/3'))
Mind that eval is a dangerous function: using eval on user input can lead to exploits.
parseFloat does not understand the / character as a division nor does it do an eval of the string input.
It simply stops looking when it encounters the character it doesn't understand and returns the correctly parsed first part:
console.log(
  parseFloat("1/2"),           // 1
  parseFloat("3/2"),           // 3
  parseFloat("1kahsdjfjhksd2") // 1
)
If you do want to evaluate the string "1/2" to the number 0.5, you can use eval. Be careful, because using eval can be a security risk, slow and hard to debug.
console.log(
eval("1/2")
);
Not 100% sure, but if you play around with parseFloat a bit you will see that it tries to convert each number it finds to a float, but stops as soon as it hits an unexpected character, so:
parseFloat('1/asdf') == 1
but
parseFloat('0.5') == 0.5
So parseFloat does not calculate for you; it just parses the number it finds, until it hits something non-numerical.
You're parsing a string, which will be converted to 1. If your string contained only a number (e.g. "0.5") it would be converted correctly, but as it includes the '/', parsing stops there. When you pass an actual number, the expected behavior occurs, that is:
parseFloat(1/2) === 0.5 // true
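If all you need is simple "a/b" fractions, a small hand-rolled parser avoids eval entirely. This is just a sketch; parseFraction is a made-up helper name:
function parseFraction(str) {
  var parts = str.split('/');
  if (parts.length === 1) return parseFloat(parts[0]);                        // plain number, e.g. "0.5"
  if (parts.length === 2) return parseFloat(parts[0]) / parseFloat(parts[1]); // "a/b" fraction
  return NaN;                                                                 // anything else is invalid
}

console.log(parseFraction('1/2')); // 0.5
console.log(parseFraction('0.5')); // 0.5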

Number base conversion with exception handling

I have a function that converts a string representation of a number in any valid base, given the string and its radix. How do I correctly handle invalid numbers (like using A-K characters in bases < 11)? In invalid cases, I would like to return -1.
So far, I was able to achieve some degree of success with an isNaN() check, but it breaks on decimal base (convert("5A6E", 10)).
My code so far:
function convert(strNumber, radix) {
  var a = parseInt(strNumber, radix);
  if (isNaN(a)) {
    return -1;
  } else {
    return a;
  }
}
In your breakage example "5A6E" you get back 5 because that's how parseInt works - see the examples in the documentation:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt
At the bottom of the above page, you will find a section titled "A stricter parse function"; it appears to do what you are looking for using a regular expression.
Update: In thinking about this further, the "stricter parse function" is only going to work for base 10. To be more flexible you should add a function that looks at the radix and, based on that, checks for invalid characters in strNumber, returning -1 if any are found and calling parseInt if not.
For instance, if radix = 2, all characters except 0 and 1 are invalid. If it's 11, all characters but 0-9 and 'a' are invalid.
Tedious, but it will do what you want.
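One way to do that check, as a rough sketch (it builds the allowed digit set from 0-9a-z, which is the digit alphabet parseInt itself uses):
function convert(strNumber, radix) {
  // Digits that are valid for this radix, e.g. "01" for base 2, "0123456789a" for base 11.
  var digits = '0123456789abcdefghijklmnopqrstuvwxyz'.slice(0, radix);
  var valid = new RegExp('^[+-]?[' + digits + ']+$', 'i');
  if (!valid.test(strNumber)) {
    return -1;                       // contains a character that is invalid in this base
  }
  return parseInt(strNumber, radix);
}

convert('5A6E', 10); // -1
convert('5A6E', 16); // 23150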

JavaScript - Preventing octal conversion

I'm taking a numerical input as an argument and was just trying to account for leading zeroes, but it seems JavaScript converts the number to octal before I can do anything with it. The only workaround so far is to pass the number as a string initially, but I was hoping there'd be another way to convert it after it is passed. So far I've tried (using 017, which alerted me to the octal behaviour):
017.toString(10)    // 15
parseInt(017, 10)   // 15
017 + ""            // 15
new Number(017)     // 15
new Number('017')   // 17
parseInt('017', 10) // 17
So given
function(numb) {
  if (typeof numb === 'number') {
    // remove leading zeroes and convert to decimal
  }
  else {
    // use parseInt
  }
}
'use strict' also doesn't seem to solve this as some older posts have suggested. Any ideas?
If you take "numerical input", you will always have a string; there's no input method in this context that I know of that returns a Number. Since you receive a string, parseInt(.., 10) will always be sufficient. 017 is only interpreted as octal if written literally as such in source code (or, in older engines, when the radix parameter to parseInt is missing).
If for whatever bizarre reason you do end up with a decimal interpreted as octal and you want to reverse-convert the value back to a decimal, it's pretty simple: express the value in octal and re-interpret that as decimal:
var oct = 017; // 15
parseInt(oct.toString(8), 10) // 17
Though because you probably won't know whether the input was or wasn't interpreted as octal originally, this isn't something you should have to do ever.
JavaScript interprets a number literal beginning with a 0 and containing only octal numerals as octal, e.g. 017 would be octal but 019 wouldn't be. If you want your number as a decimal then either:
1. Omit the leading 0.
2. Carry on using parseInt().
The reason is that JavaScript uses a few implicit conversions and picks the most likely case based on the number. It was decided in JavaScript that a leading 0 signals that a literal is octal. If you need that leading 0 then you have to accept that rule and use parseInt().
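For example, in a console:
parseInt('017', 10) // 17 - explicit radix, the leading zero is harmless
017                 // 15 - the literal itself is octal in sloppy mode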
If you type numbers by hand into the script, don't use leading zeros (a leading zero implicitly makes the literal octal if it is a valid octal number, and decimal otherwise). If you have the number as a string, just use the + operator to cast it to a (decimal) number.
console.log(+"017")
if (021 < 019) console.log('Paradox');
Strict mode will not allow the legacy zero prefix at all (it is a syntax error):
'use strict'
if (021 < 019) console.log('Paradox');
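Putting the answers together, a sketch of the asker's function (handleInput is just a placeholder name) that handles both the string and the number case:
function handleInput(numb) {
  if (typeof numb === 'number') {
    return numb;               // already a number; an octal literal was converted at parse time
  }
  return parseInt(numb, 10);   // string: explicit radix 10 ignores leading zeros
}

handleInput('017'); // 17
handleInput(017);   // 15 - the literal was already octal before the function ever saw it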

Is there a way to distinguish integers from very near decimals in Javascript?

Look at these evaluations (an actual dump from node 0.10.33):
> parseFloat(2.1e-17) === parseInt(2.1e-17)
false
> parseFloat(2.1e-17 + 2) === parseInt(2.1e-17 + 2)
true
> parseFloat(2.000000000000000000000000000000000009) === parseInt(2.00000000000000000000000000000000000009)
true
How can I tell integers from decimals very near to integers?
It seems that JS (or at least V8) doesn't care about digits smaller than 10^-16 when doing calculations, even though the 64-bit representation used by the language should handle it.
Your examples are pretty straightforward to explain. The first thing to note is that parseInt() and parseFloat() take a string as input, so your inputs first get converted to strings before actually getting parsed.
The first is easy to see:
> parseFloat(2.1e-17) === parseInt(2.1e-17)
false
// look at the result of each side
parseFloat(2.1e-17) == 2.1e-17
parseInt(2.1e-17) == 2
When parsing the string "2.1e-17" as an integer, the parse stops at the dot, as that is not a valid digit, and returns everything found until then, which is just 2.
> parseFloat(2.1e-17 + 2) === parseInt(2.1e-17 + 2)
true
// look at the result of each side
parseFloat(2.1e-17 + 2) == 2
parseInt(2.1e-17 + 2) == 2
Here the expression in the parameter is evaluated first. Due to the limitations of floating point math (there are just 52 bits for the mantissa, so something like 2.000000000000000021 can't be represented), this results in just 2. So both parseX() functions get the same integer parameter, which results in the same parsed number.
> parseFloat(2.000000000000000000000000000000000009) === parseInt(2.00000000000000000000000000000000000009)
true
Same argument as for the second case. The only difference is that, instead of an expression being evaluated, this time it is the JavaScript parser itself that converts your literals to just 2.
So to sum up: From JavaScript's point of view, your numbers are just the same. If you need more precision, you will have to use some library for arbitrary precision.
This is something I learned from ReSharper
instead of using expressions like
if (2.00001 == 2) {}
try
if (Math.abs(2.00001 - 2) < tolerance) {}
where tolerance should be a value acceptable to you, for example .001,
so all values whose difference is less than .001 will be treated as equal.
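A sketch of that idea as a reusable helper (approximatelyEqual and the default tolerance are arbitrary choices):
function approximatelyEqual(a, b, tolerance) {
  tolerance = tolerance || 0.001;      // default tolerance
  return Math.abs(a - b) < tolerance;
}

approximatelyEqual(2.00001, 2); // true
approximatelyEqual(2.1, 2);     // false
Note that this still can't distinguish 2 + 2.1e-17 from 2, since, as explained above, they are already the same floating point value.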
Do you really need 10^-16 precision? I mean, that is why 1000 meters = 1 kilometer: just change the unit of the output so you don't have to work with all those decimals.

Preventing concatenation

I've been writing JavaScript on and off for 13 years, but I sort of rediscovered it in the past few months as a way of writing programs that can be used by anyone visiting a web page without installing anything.
The showstopper I've recently discovered is that because JavaScript is loosely typed by design, it keeps concatenating strings when I want it to add numbers. And it's unpredictable. One routine worked fine for several days then when I fed different data into it the problem hit and I ended up with an impossibly big number.
Sometimes I've had luck preventing this by putting ( ) around one term, sometimes I've had to resort to parseInt() or parseFloat() on one term. It reminds me a little of trying to force a float result in C by putting a .00 on one (constant) term. I just had it happen when trying to += something from an array that I was already loading by doing parseFloat() on everything.
Does this only happen in addition? If I use parseInt() or parseFloat() on at least one of the terms each time I add, will that prevent it? I'm using Firefox 6 under Linux to write with, but portability across browsers is also a concern.
The specification says about the addition operator:
If Type(lprim) is String or Type(rprim) is String, then
Return the String that is the result of concatenating ToString(lprim) followed by ToString(rprim)
Which means that if at least one operand is a string, string concatenation will take place.
If I use parseInt() or parseFloat() on at least one of the terms each time I add, will that prevent it?
No, all operands have to be numbers.
You can easily convert any numerical string into a number using the unary plus operator (it won't change the value if you are already dealing with a number):
var c = +a + +b;
I normally do this:
var x = 2;
var t = "12";
var q = t+x; // q == "122"
var w = t*1+x; // *1 forces conversion to number w == 14
If t isn't a number then you'll get NaN.
If you multiply a variable by 1, whatever its type, it will be converted to a number. I find this method better than doing int and float casts, because *1 works with every kind of number.
The problem you are having is that the functions which fetch values from the DOM normally return strings. And even if it is a number it will be represented as a string when you fetch it.
You can use the unary + operator to convert a string to a number.
var x = '111';
+x === 111 // true
Rest assured, it is very predictable; you just need to be familiar with the operators and the data types of your input.
In short, evaluation is left-to-right, and concatenation occurs whenever a string is present, no matter which side of the operator it is on.
So for example:
9 + 9 // 18
9 + '9' // '99'
'9' + 9 // '99'
+ '9' + 9 // 18 - unary plus
- '9' + 9 // 0 - unary minus
Some three-term expressions:
9 + '9' + 9 // '999'
9 + 9 + '9' // '189'
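If the values come from the DOM (and are therefore strings), converting each one up front keeps + meaning addition. A small sketch, assuming a hypothetical array of strings read from form fields:
var inputs = ['1.5', '2', '3.25'];       // e.g. values read from input fields
var total = inputs.reduce(function (sum, s) {
  return sum + Number(s);                // Number() (or unary +) forces numeric addition
}, 0);
console.log(total); // 6.75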
