I have a number of seconds in a string, like: '5'.
From that I need to get the number of milliseconds and it has to be of type Number, like: 5000.
I know that you can easily convert strings to numbers by prefixing them with a +
const result = +'5';
console.log(result, typeof(result));
However, playing around I saw that that's not even necessary, because JavaScript automatically does the conversion for you when you use arithmetic between strings and numbers.
const result = '5' * 3;
console.log(result, typeof(result));
console.log('5.3' * 3);
In the docs I only found info about the Number() constructor.
My question is: is it safe to use arithmetic on strings (except for addition)? Can I rely on the behaviour shown above?
Yes, it is safe. All arithmetic operations except a binary + will convert the operands to numbers. That includes bitwise operators as well as unary plus.
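For example, all of these coerce their string operands to numbers; only the binary + behaves differently:
console.log('5' - 2);   // 3
console.log('5' * '2'); // 10
console.log('6' / 2);   // 3
console.log('5' % 2);   // 1
console.log('5' | 0);   // 5 (bitwise OR coerces too)
console.log(+'5');      // 5 (unary plus)
console.log('5' + 2);   // '52' -- binary + concatenates instead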
With that said, it is probably a good idea not to rely on this extensively. Imagine that you have this code:
function calculate(a, b) {
    return a * 2 + b * 3;
}
//elsewhere in the code
console.log(calculate("5", "2"));
This works fine because both a and b are multiplied, so they will be converted to numbers. But in six months' time you come back to the project and realise you want to modify the calculation, so you change the function:
function calculate(a, b) {
    return a + b * 3;
}
//elsewhere in the code
console.log(calculate("5", "2"));
...and suddenly the result is wrong.
It is therefore better to explicitly convert the values to numbers if you want to do arithmetic. It saves the occasional accidental bug, and it is more maintainable.
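For example, a version of calculate that converts its arguments up front stays correct no matter how the formula changes later:
function calculate(a, b) {
    a = Number(a);
    b = Number(b);
    return a + b * 3;
}
console.log(calculate("5", "2")); // 11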
Yes, but you have to be careful...
console.log('5.3' * 3);
console.log('5.3' + 3);
These two very similar expressions coerce the values in different ways:
* can only be applied between two numbers, so '5.3' becomes 5.3
+ can also concatenate strings, and the string comes first, so 3 becomes '3'
If you understand all of this you can do it, but I'd recommend against it. It's very easy to miss, and JS has a lot of weird, unexpected casts.
Related
I want to find the sum of all digits of a large number, for example 99^95.
I applied BigInt to compute the power of the large number and sum its digits using the code below.
BigInt(Math.pow(99, 95)).toString().split("").reduce((a, b) => a * 1 + b * 1)
However, it returns 845, but the correct answer should be 972.
I have checked the integer output of the large number. In JavaScript it is:
3848960788934848488282452569509484590776195611314554049114673132510910096787679715604422673797115451807631980373077374162416714994207463122539142978709403811688831410945323915071533162168320
This is not the same as the correct answer (in C#):
3848960788934848611927795802824596789608451156087366034658627953530148126008534258032267383768627487094610968554286692697374726725853195657679460590239636893953692985541958490801973870359499.
I am wondering what’s wrong in my code causing the differences.
The expression Math.pow(99, 95), when it resolves, has already lost precision - casting it to a BigInt after the fact does not recover the lost precision.
Use BigInts from the beginning instead, and use ** instead of Math.pow so that the exponentiation works on BigInts:
console.log(
    (99n ** 95n)
        .toString()
        .split('')
        .reduce((a, b) => a + Number(b), 0)
);
BigInt(Math.pow(99,95))
This runs Math.pow on two floats, then converts the already-imprecise result to a BigInt.
You want BigInt(99) ** BigInt(95) instead
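For example, both BigInt forms are exact and agree:
console.log(BigInt(99) ** BigInt(95) === 99n ** 95n); // true -- no precision lost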
There are a few expressions that are commonly seen in JavaScript, but which some programming purists will tell you are never a good idea. What these expressions share is their reliance on automatic type conversion — a core feature of JavaScript which is both a strength and a weakness, depending on the circumstances and your point of view.
Type coercion and type conversion are similar, except that type coercion is when JavaScript automatically converts a value from one type to another (such as strings to numbers), deciding how to coerce using its own set of rules. I found this example useful because its output illustrates this coercive behavior:
const value1 = '5';
const value2 = 9;
let sum = value1 + value2;
console.log(sum);
In the above example, JavaScript has coerced the 9 from a number into a string and then concatenated the two values together, resulting in the string 59. JavaScript had a choice between a string and a number and decided to use a string.
The engine could have coerced the '5' into a number and returned a sum of 14, but it did not. To get that result, you'd have to explicitly convert the '5' to a number using the Number() function:
sum = Number(value1) + value2;
From an MDN glossary entry I wrote here: https://developer.mozilla.org/en-US/docs/Glossary/Type_coercion edited by chrisdavidmills
Does JavaScript support automatic type conversion?
Yes. It's usually called type coercion, but conversion is perfectly accurate.
For instance:
console.log("Example " + 42);
"automatically" converts 42 (a number) to string. I put "automatically" in quotes because it's done by the + operator, in a clearly-defined way.
Another example is that various operations expecting numbers will convert from string (or even from object). For instance:
const obj = {
    valueOf() {
        return 2;
    }
};
const str = "10";
console.log(Math.max(obj, str)); // 10
console.log(Math.min(obj, str)); // 2
The rules JavaScript uses are clearly and completely defined in the specification. That doesn't prevent people from frequently being surprised by some of them, such as that +"" is 0.
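A few of the classic surprises, for illustration:
console.log(+"");        // 0
console.log(+"   ");     // 0 (whitespace-only strings also become 0)
console.log(+"0x10");    // 16 (hex notation is recognised)
console.log(+null);      // 0
console.log(+undefined); // NaN
console.log([] + []);    // "" (both arrays are converted to empty strings)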
function Round2DecimalPlaces(l_amt) {
    var l_dblRounded = +(Math.round(l_amt + "e+2") + "e-2");
    return l_dblRounded;
}
Fiddle: http://jsfiddle.net/1jf3ut3v/
I'm mainly confused about how Math.round works with "e+2" and how adding the "+" sign to the beginning of Math.round makes any difference at all.
I understand the basic of the function; the decimal gets moved n places to the right (as specified by e+2), rounded with this new integer, and then moved back. However, I'm not sure what 'e' is doing in this situation.
eX is a valid part of a Number literal and means *10^X, just like in scientific notation:
> 1e1 // 1 * Math.pow(10, 1)
10
> 1e2 // 1 * Math.pow(10, 2)
100
And because of that, converting a string containing such a character sequence results in a valid number:
> var x = 2;
> Number(x + "e1")
20
> Number(x + "e2")
200
For more information, have a look at the MDN JavaScript Guide.
But of course the way this notation is used in your example is horrible. Converting values back and forth between numbers and strings is already bad enough, and it also makes the code more difficult to understand.
Simply multiply or divide by a power of 10.
The single plus operator coerces the string into a number. (See also: Single plus operator in javascript )
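A sketch of the same rounding written without the string round-trip (note the usual floating-point caveat: the exponent trick in the original handles edge cases like 1.005 slightly differently from a plain multiply, because 1.005 * 100 is 100.49999...):
function round2DecimalPlaces(amt) {
    // shift two places, round, shift back
    return Math.round(amt * 100) / 100;
}
console.log(round2DecimalPlaces(3.14159)); // 3.14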
7 and 10 in the expression (7/10) are integers, so the result 0.7 should be integer as well, which is 0, and the result for the entire expression should be 0 too. However, it's giving me the result of 7, why? Is it ignoring the parentheses or converts to double automatically?
JavaScript doesn't distinguish between integers and floating-point numbers; every number is effectively a double, which is why you get this result.
Take a look at the details on the Number property on MDN.
JavaScript doesn't have an integer type, or a double, or a float... it just has one type for all numbers: the helpfully named Number type (try var foo = new Number(7); or var foo = Number('123string');).
Now, I know I said that JS doesn't know of floats, but that's not entirely true. All Number type vars/values are essentially 64-bit floats, as defined by the IEEE 754 standard (which are, indeed, as Jan Dvorak kindly pointed out to me, doubles in most statically typed languages), with all the caveats that brings with it:
(.1 + .2);//0.30000000000000004
But that's not the point. The point is that, in JS, you can perform float + int arithmetic without there ever being a need for internal casts or conversions. That's why 10*(7/10) will always be 7.
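You can check this yourself:
console.log(10 * (7 / 10));             // 7
console.log(typeof (10 * (7 / 10)));    // "number" -- the only numeric type
console.log(Number.isInteger(7 / 10));  // false -- it's simply 0.7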
There is no int or double in JavaScript
In JavaScript, int, float, and double are all normalized to work together. They are treated as one (they're treated as a Number, which is an IEEE 754 float. Thanks #Elias Van Ootegem). Equality, Liberty and Fraternity. And thus;
10*0.7 = 7
JavaScript is not like C.
Javascript doesn't have integers, and even if it did, there's nothing that says / needs to return an integer (just because another language may do that doesn't mean every language has to). The operation results in a float/Number, just like all JavaScript numbers, period.
Try this:
10*parseInt(7/10)
This truncates 7/10 to 0 first, so the result is 0, the integer-division behaviour you expected. Hope this will help you.
If you try to follow the rules, then
10 * (7/10) --> 10 * .7 --> 7
You cannot change the way this evaluates.
so the result 0.7 should be integer as well, which is 0
If you want this, then try using
Math.floor()
This rounds the decimal down to the nearest integer! Or try out parseInt().
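For example:
console.log(Math.floor(7 / 10));      // 0
console.log(10 * Math.floor(7 / 10)); // 0 -- the C-style integer-division result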
JavaScript uses dynamic types. That means that a variable like this:
var str = "hi";
Can later become:
str = 123; //now we have an 'int'
str += 0.35; //now str is 123.35, a 'float'
So JavaScript doesn't cast floats to ints for example.
If you want to force a "cast" then you have to do:
var integer = parseInt( 3.14*9.0291+23, 10 ); //the second parameter (10) is the 'base'
But remember, JavaScript will not take care of types for you; that's your problem.
I'm trying to implement a BigInt type in JavaScript using an array of integers. For now each one has an upper bound of 256. I've finished implementing all the integer operations, but I can't figure out how to convert the BigInt to its string representation. Of course, the simple way is this:
BigInt.prototype.toString = function(base) {
    // conv is indexed by base; indices 0-1 and 11-15 are unused holes
    var s = '', total = 0, i, conv = [
        ,,
        '01',
        '012',
        '0123',
        '01234',
        '012345',
        '0123456',
        '01234567',
        '012345678',
        '0123456789',
        ,
        ,
        ,
        ,
        ,
        '0123456789abcdef'
    ];
    base = base || 10;
    // Collapse the byte array into a single Number (loses precision once the value gets big)
    for (i = this.bytes.length - 1; i >= 0; i--) {
        total += this.bytes[i] * Math.pow(BigInt.ByteMax, this.bytes.length - 1 - i);
    }
    // Peel off digits in the target base, least significant first
    while (total) {
        s = conv[base].charAt(total % base) + s;
        total = Math.floor(total / base);
    }
    return s || '0';
};
But when the BigInts actually get big, accumulating the total in a Number won't work any more. How can I convert an array of base-x digits to an array of base-y digits?
See the example I gave in this answer to a similar question recently (it's for base-10 to base-3, but the principle should be transferable): C Fast base convert from decimal to ternary.
In summary:
Iterate over the input digits, from low to high. For each digit position, first calculate what 1000...000 (base-256) would be in the output representation (it's 256x the previous power of 256). Then multiply that result by the digit, and accumulate into the output representation.
You will need routines that perform multiplication and addition in the output representation; the multiplication routine can be written in terms of the addition routine (see the sketch below).
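Here's a minimal sketch of that approach in JavaScript. The helper names (addDigits, mulDigitsBySmall, fromBase256) and the choice to keep output digits least significant first are my own, for illustration; it assumes the input bytes are stored most significant first, as in the toString above:
// Add two numbers stored as arrays of base-10 digits, least significant digit first.
function addDigits(a, b) {
    var result = [], carry = 0, i, sum;
    for (i = 0; i < Math.max(a.length, b.length) || carry; i++) {
        sum = (a[i] || 0) + (b[i] || 0) + carry;
        result.push(sum % 10);
        carry = Math.floor(sum / 10);
    }
    return result;
}

// Multiply a base-10 digit array by a small integer, using repeated addition.
function mulDigitsBySmall(digits, n) {
    var result = [0], i;
    for (i = 0; i < n; i++) {
        result = addDigits(result, digits);
    }
    return result;
}

// Convert an array of base-256 bytes (most significant first) to a decimal string.
function fromBase256(bytes) {
    var power = [1]; // 256^0, in base-10 digits
    var total = [0];
    for (var i = bytes.length - 1; i >= 0; i--) { // low digits first
        total = addDigits(total, mulDigitsBySmall(power, bytes[i]));
        power = mulDigitsBySmall(power, 256);     // next power of 256
    }
    return total.reverse().join('');
}

console.log(fromBase256([1, 0]));     // "256"
console.log(fromBase256([255, 255])); // "65535"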
Note that I make no claims that this approach is in any way fast (I think it's O(n^2) in the number of digits); I'm sure there are algorithmically faster approaches than this.
If you're prepared to put on your math thinking cap more than I am right now, someone seems to have explained how to convert digit representations using Pascal's triangle:
http://home.ccil.org/~remlaps/DispConWeb/index.html
There are links to the source code near the bottom. They're in Java rather than JavaScript, but if you're putting in the effort to grok the math, you can probably come up with your own implementation or put in the effort to port the code...