I was doing a coding test today; the goal was to catch all edge cases when adding two integer representations of strings in JavaScript. One case I could not get was how to detect overflow/underflow for the sum stored in an IEEE 754 double.
Normally, in C, I'd look at the binary representation of the number, but in JavaScript the bitwise operators only expose 32 bits' worth of the integer value.
Here's the code I had:
function string_add(a, b) {
    if (arguments.length !== 2)
        throw new Error('two arguments expected as input');
    // ensure we have strings
    if (typeof a !== 'string' || typeof b !== 'string')
        throw new Error('bad parameter types');
    // ensure we do not have empty strings
    if (a.length === 0 || b.length === 0)
        throw new Error('an empty string is an invalid argument');
    // ensure we have integer arguments
    if (0 !== (+a % 1) || 0 !== (+b % 1))
        throw new Error('expected numeric integer strings for arguments');
    var sum = +a + +b; // create numeric sum of a and b
    sum += '';         // convert numeric sum to string
    return sum;
}
Thanks in advance.
Edit: JavaScript now has Number.MAX_SAFE_INTEGER.
Actually, integers in JavaScript are limited to 53 bits of information due to the way floating-point math works.
The last time I needed to do something similar I did...
var MAX_INT = Math.pow(2, 53);
var MIN_INT = -MAX_INT;

var value = MAX_INT * 5;
if (value >= MAX_INT) {
    alert("Overflow");
}

// Note: you have to use MAX_INT itself as the overflow mark, because of this:
value = MAX_INT + 1;
if (value > MAX_INT) {
    alert("Overflow test failed");
}
EDIT: After thinking about it, it would be easier to say:
var MAX_INT = Math.pow(2, 53) -1;
var MIN_INT = -MAX_INT;
since that is the largest INT that you know hasn't overflowed.
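Tying this back to the original question, here is a minimal sketch (my own naming, not the asker's code) of how that bound could guard the sum in string_add, using Number.isSafeInteger from ES2015:

```javascript
// Sketch: reject sums that leave the exactly-representable integer range.
// Number.isSafeInteger(x) is true only for integers with |x| <= 2^53 - 1,
// so it flags both overflow and underflow in one test.
function stringAddChecked(a, b) {
    var sum = +a + +b;
    if (!Number.isSafeInteger(sum)) {
        throw new RangeError('sum exceeds the safe integer range');
    }
    return String(sum);
}
```

For example, stringAddChecked('9007199254740991', '1') throws, because the mathematical sum is 2^53, which is no longer a safe integer.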
I was doing the following LeetCode question:
Implement atoi which converts a string to an integer.
The function first discards as many whitespace characters as necessary
until the first non-whitespace character is found. Then, starting from
this character, takes an optional initial plus or minus sign followed
by as many numerical digits as possible, and interprets them as a
numerical value.
The string can contain additional characters after those that form the
integral number, which are ignored and have no effect on the behavior
of this function.
If the first sequence of non-whitespace characters in str is not a
valid integral number, or if no such sequence exists because either
str is empty or it contains only whitespace characters, no conversion
is performed.
If no valid conversion could be performed, a zero value is returned.
Note:
Only the space character ' ' is considered as whitespace character.
Assume we are dealing with an environment which could only store
integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. If
the numerical value is out of the range of representable values,
INT_MAX (2^31 − 1) or INT_MIN (−2^31) is returned.
Question link: https://leetcode.com/problems/string-to-integer-atoi/
For the input "-91283472332", I am not sure why they expect the output -2147483648 instead of -91283472332.
Not sure if this is relevant, but this is my code:
/**
 * @param {string} str
 * @return {number}
 */
var myAtoi = function(str) {
    let i = 0
    let output = ''
    let nonWhiteCharacter = false
    while (i < str.length) {
        const char = str[i]
        if (!char == " ") {
            if (char.toLowerCase() === char.toUpperCase()) {
                if (!nonWhiteCharacter) nonWhiteCharacter = true
                output = output + char
            }
            if (!nonWhiteCharacter) return 0
        }
        i++
    }
    return output === null ? 0 : parseInt(output)
}
I am not sure why do they expect the following output -2147483648 instead of -91283472332
Because:
Assume we are dealing with an environment which could only store integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. If the numerical value is out of the range of representable values, INT_MAX (2^31 − 1) or INT_MIN (−2^31) is returned.
So if the extracted number is larger than 2 ** 31 - 1, the returned number should be 2 ** 31 - 1 instead.
Similarly, if the extracted number is smaller than -(2 ** 31), instead return -(2 ** 31).
This would probably be easier with a regular expression:
const myAtoi = (str) => {
    const match = str.match(/^ *([+-]?\d+)/);
    if (!match) return 0; // the spec says a failed conversion returns 0
    const num = Number(match[1]);
    return Math.max(
        Math.min(2 ** 31 - 1, num),
        -(2 ** 31)
    );
};

console.log(
    myAtoi(' 123'),
    myAtoi('-456'),
    myAtoi('-9999999999999'),
    myAtoi('9999999999999')
);
Have a look at this solution. As we are dealing with 32-bit integers, the value must lie in the range −2^31 <= x <= 2^31 − 1. So here we use a try/catch, as parsing a value outside that range raises a NumberFormatException, which is caught by the catch block.
class Solution {
    public int myAtoi(String str) {
        int flag = 0, sign = 0;
        int n = str.length();
        StringBuilder st = new StringBuilder();
        int i = 0;
        // skip leading white space
        while (i < n && str.charAt(i) == ' ') {
            ++i;
        }
        // string contained only whitespace
        if (i >= n) {
            return 0;
        }
        // check the sign; do not allow more than one (returns 0 for ++, +-, --)
        while (i < n && (str.charAt(i) == '+' || str.charAt(i) == '-')) {
            if (sign >= 1) {
                return 0;
            } else {
                st.append((str.charAt(i++) == '+') ? '+' : '-');
                sign++;
            }
        }
        // collect digit characters
        while (i < n && Character.isDigit(str.charAt(i))) {
            st.append(str.charAt(i++));
            flag = 1;
        }
        // return 0 if no digits
        if (flag == 0)
            return 0;
        // check that the number is within the int range
        try {
            return Integer.parseInt(st.toString());
        } catch (NumberFormatException e) {
            return (st.charAt(0) == '-') ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        }
    }
}
Pretty simple, but I cannot understand why this is happening. If this isn't normal behavior, I assume it must be a bug in my code, but since I checked everything and it seems to be right, I think it must be something parseInt is doing.
My code:
$(array).each(function(key, value) {
    // ignore the if statement; it's about the else if statement
    if (isNaN(value) && poppin === 'h') {
        console.log(reverseArray);
    } else if (typeof parseInt(value) === 'number') {
        console.log(value);
        numbers++;
    } else {
        console.log('this else statement is pretty useless');
    }
});
Now, if the array were to be:
let array = ['f', '1', '2', '3'];
it'll end up in the else if statement, even though (I assume) 'f' is not a number. I read about parseInt(), and if I give the second parameter an argument it could (and would) convert 'F' to a number, as in parseInt('F', 16);.
My question: why is 'f' considered an integer in the context of my code?
F is a hexadecimal digit. The second parameter of parseInt is the radix (base) of the number being parsed. For example, parseInt('101', 2) returns 5, because the string is interpreted as binary (two digit symbols, 0 and 1). The second argument is commonly 2 (binary), 8 (octal) or 16 (hexadecimal). The reason your parseInt('F', 16); returns a number is that hexadecimal has 16 digit symbols: 0-9 plus the six letters A-F, where A = 10, B = 11, C = 12, D = 13, E = 14, F = 15.
var number = 'F';
console.log(parseInt(number, 2));  // NaN -- F isn't part of the sequence (0, 1)
console.log(parseInt(number, 8));  // NaN -- F isn't part of the sequence (0-7)
console.log(parseInt(number, 16)); // 15 -- F is a valid hexadecimal digit
Therefore, F is not considered a number in any base apart from 16. Note also that typeof parseInt(n) is always 'number', even when the result is NaN (typeof NaN is 'number'), so your else if condition can never fail. The isNaN check is what actually catches non-numbers, and the extra condition poppin === 'h' is preventing it from running (that's where your problem is).
var nums = ['F', 20];
for (const n of nums) {
    // Your code being run with two different values
    if (isNaN(n)) {
        console.log("Not a number");
    } else if (typeof parseInt(n) === 'number') {
        console.log("A number");
    }
}
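The detail the whole question hinges on can be shown in a standalone sketch (separate from the asker's code): parseInt('f') returns NaN, yet typeof NaN is still 'number', so a typeof test can never reject a failed parse.

```javascript
// NaN is itself of type 'number', so typeof cannot detect a failed parse.
var parsed = parseInt('f'); // no radix given: 'f' is not a decimal digit
console.log(typeof parsed);        // 'number'
console.log(Number.isNaN(parsed)); // true -- test for NaN explicitly instead
```

This is why checking isNaN (or Number.isNaN) first, rather than typeof, is the reliable way to branch on non-numeric input.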
So I need to write a script that validates a number that is greater than zero and less than 100. The catch is that the number can only be accepted if there is a decimal point in the middle position and at least two decimal places.
Examples: 19.30 would validate, but 9.3, 9.30, and 19.3 would be considered invalid.
I'm thinking a regular expression would be the best way to validate the decimal criteria?
Comments in the code:
function validNumber(string) {
    // parse string into number (parseFloat takes no radix argument)
    let number = parseFloat(string);
    // check if number is in range
    if (number <= 0 || number >= 100) return false;
    // check if number is formatted correctly
    if (string !== number.toFixed(2)) return false;
    // return true if all conditions pass
    return true;
}
console.log(validNumber("19.30")); // true
console.log(validNumber("9.3")); // false
console.log(validNumber("19.3")); // false
console.log(validNumber("100.30")); // false
console.log(validNumber("1.00")); // true
What you could do is split on the decimal point, then test the lengths of the two parts.
function validate(number) {
    // default decimal to '' so inputs without a point don't throw
    let [whole, decimal = ''] = number.toString().split('.', 2)
    let int = parseInt(whole)
    return whole.length == decimal.length && decimal.length >= 2
        && int > 0 && int < 100
}
console.log(validate('19.30'))
console.log(validate('9.3'))
console.log(validate('9.30'))
console.log(validate('-9.30'))
console.log(validate('19.3'))
console.log(validate('99.99'))
console.log(validate('1.111'))
console.log(validate('100.111'))
console.log(validate('1000.111'))
The following regex meets your needs, I think, provided the input is a string and you combine it with the range comparison:
\d{1,3}\.\d{2}
You could use it as follows:
const isValid = (input) => {
    const num = parseFloat(input);
    return (!!input.match(/\d{1,3}\.\d{2}/) && num > 0 && num < 100);
};
isValid('19.3')  // => false
isValid('19.30') // => true
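If the "decimal in the middle position" rule is read strictly (the same digit count on each side of the point), then with the 0–100 range both sides must be exactly two digits, and a fully anchored regex can express the whole format. A sketch under that reading (isValidStrict is my name, not from the question):

```javascript
// Sketch assuming "middle position" means exactly two digits either side,
// which is the only symmetric layout compatible with 0 < n < 100 and
// at least two decimal places.
const isValidStrict = (s) =>
    /^\d{2}\.\d{2}$/.test(s) && parseFloat(s) > 0 && parseFloat(s) < 100;
```

This accepts '19.30' and '99.99' but rejects '9.30', '19.3', and '100.30'.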
I'm trying to pull some strings out of a website (Pinterest, to be specific) and sort them. The problem is that the strings contain both numbers and text ('k' for thousands and 'm' for millions).
list.sort(function(a, b) {
    var compA = Number($(a).find('.repinCountSmall').text().trim().replace(/[^0-9]/g, ''));
    var compB = Number($(b).find('.repinCountSmall').text().trim().replace(/[^0-9]/g, ''));
    return (compA == compB) ? 0 : (compA > compB) ? -1 : 1;
});
With the above code, suppose the .repinCountSmall elements provide the following list of values:
1
250
999
1k
1.7k
17.3k
1.2m
The problem with the current function is that it strips the "k" (and the decimal point) from 1.7k and arrives at 17. Similarly, 17.3k and 1.2m are treated as 173 and 12 respectively. I want the numbers ending with k multiplied by 1,000 first, and the strings ending with m multiplied by 1,000,000. The list should be sorted after this conversion.
Any solutions? Thanks.
You could use a conversion function something like this (obviously you could add some additional else if statements if you had to allow for multipliers other than "k" and "m"):
function toNumber(s) {
    s = s.replace(/[^\dkm.]/g, "");
    var u = s.slice(-1);
    if (u === "k")
        return s.slice(0, -1) * 1000;
    else if (u === "m")
        return s.slice(0, -1) * 1000000;
    return +s;
}
And then reference that from within your .sort() comparator function:
list.sort(function(a, b) {
    var n1 = toNumber($(a).find('.repinCountSmall').text());
    var n2 = toNumber($(b).find('.repinCountSmall').text());
    return n1 - n2;
});
Note also that you don't need nested ternary operators to compare the resulting two numbers: because they are numbers you can just return the result of subtracting one from the other.
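Detached from the jQuery selection, the convert-then-subtract approach can be sanity-checked on a plain array (toNumber is repeated here so the snippet is self-contained):

```javascript
// Same suffix-aware conversion as above, exercised on a plain array.
function toNumber(s) {
    s = s.replace(/[^\dkm.]/g, "");
    var u = s.slice(-1);
    if (u === "k") return s.slice(0, -1) * 1000;
    if (u === "m") return s.slice(0, -1) * 1000000;
    return +s;
}

var counts = ["1.7k", "250", "1.2m", "999", "17.3k"];
counts.sort(function (a, b) { return toNumber(a) - toNumber(b); });
console.log(counts); // ["250", "999", "1.7k", "17.3k", "1.2m"]
```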
I've written a version of Y that automatically caches old values in a closure using memoization.
var Y = function (f, cache) {
    cache = cache || {};
    return function (x) {
        if (x in cache) return cache[x];
        var result = f(function (n) {
            return Y(f, cache)(n);
        })(x);
        return cache[x] = result;
    };
};
Now, when almostFibonacci (defined below) is passed into the above function, it returns the value of a large Fibonacci number comfortably.
var almostFibonacci = function (f) {
    return function (n) {
        return n === '0' || n === '1' ? n : f(n - 1) + f(n - 2);
    };
};
However, after a certain value (Number.MAX_SAFE_INTEGER), integers in JavaScript (owing to their IEEE-754 double precision format) are not accurate. So, considering the fact that the only mathematical operations in the Fibonacci function above are addition and subtraction and since operators cannot be overloaded in JavaScript, I wrote naïve implementations of the sum and difference functions (that both use strings to support big integers) which are as follows.
String.prototype.reverse = function () {
    return this.split('').reverse().join('');
};

var difference = function (first, second) {
    first = first.reverse();
    second = second.reverse();
    var firstDigit,
        secondDigit,
        differenceDigits = [],
        differenceDigit,
        carry = 0,
        index = 0;
    while (index < first.length || index < second.length || carry !== 0) {
        firstDigit = index < first.length ? parseInt(first[index], 10) : 0;
        secondDigit = index < second.length ? parseInt(second[index], 10) : 0;
        differenceDigit = firstDigit - secondDigit - carry;
        differenceDigits.push((differenceDigit + (differenceDigit < 0 ? 10 : 0)).toString());
        carry = differenceDigit < 0 ? 1 : 0;
        index++;
    }
    differenceDigits.reverse();
    while (differenceDigits[0] === '0') differenceDigits.shift();
    return differenceDigits.join('');
};
var sum = function (first, second) {
    first = first.reverse();
    second = second.reverse();
    var firstDigit,
        secondDigit,
        sumDigits = [],
        sumDigit,
        carry = 0,
        index = 0;
    while (index < first.length || index < second.length || carry !== 0) {
        firstDigit = index < first.length ? parseInt(first[index], 10) : 0;
        secondDigit = index < second.length ? parseInt(second[index], 10) : 0;
        sumDigit = firstDigit + secondDigit + carry;
        sumDigits.push((sumDigit % 10).toString());
        carry = sumDigit > 9 ? 1 : 0;
        index++;
    }
    sumDigits.reverse();
    while (sumDigits[0] === '0') sumDigits.shift();
    return sumDigits.join('');
};
Now, by themselves, both these functions work perfectly.1
I have now updated the almostFibonacci function as follows, using the sum function instead of the + operator and the difference function instead of the - operator.
var almostFibonacci = function (f) {
    return function (n) {
        return n === '0' || n === '1' ? n : sum(f(difference(n, '1')), f(difference(n, '2')));
    };
};
As you may have guessed, this does not work. It crashes the fiddle even for a small number like 10.
Question: What could be wrong? All the functions here work perfectly individually. But in tandem, they seem to fail. Can anyone here help me debug this particularly complex scenario?
1Except an edge case for the difference function. It requires the first argument to be larger than the second.
Now, by themselves, both these functions work perfectly - Except an edge case for the difference function. It requires the first argument to be larger than the second.
And that's the problem. In your Fibonacci algorithm you're at some point calculating difference("2", "2"), which needs to yield "0" to work. It does, however, return the empty string "", which is not caught by the guard condition of your recursion. When it then computes difference("", "1"), the function falls into an infinite loop.
Solutions:
Fix that edge case (you still won't need to cope with negative numbers)
Don't use strings for the ordinal number, only for the Fibonacci number itself. You will hardly try to compute the (2^53+1)th Fibonacci number, will you? I would expect this to be a significant speed improvement as well.
var fibonacci = Y(function(fib) {
    return function(n) {
        if (n == 0) return "0";
        if (n == 1) return "1";
        return sum(fib(n - 1), fib(n - 2));
    };
});
Here is how I solved the problem at hand.
Changes:
I removed the while (differenceDigits[0] === '0') differenceDigits.shift(); statement. Even though differences are now output with their leading zeros untruncated, this does output a '0' for an edge case like difference('2', '2').
I edited the return statement in the almostFibonacci function to return n == 0 || n == 1 ? n : sum(f(difference(n, '1')), f(difference(n, '2')));. Notice that I'm checking for 0 and not '0' with a non strict equality operator.1
1The reason I'm doing n == 0 as opposed to n === '0' is because in JavaScript, '00000' == 0 but '00000' !== '0' and in my new updated difference function, without truncated leading zeros, I can't guarantee the number of zeros for a zero output. Well, actually I can. There would be as many zeros as the length of n.
100th Fibonacci - JSFiddle
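The "fix that edge case" route from the answer above can also be sketched in isolation (trimLeadingZeros is an illustrative helper name, not from the original code): strip leading zeros, but never strip the number away entirely.

```javascript
// Sketch of the zero-result fix: difference('2', '2') must yield '0',
// not '' -- so always leave at least one digit when trimming.
function trimLeadingZeros(digits) {
    while (digits.length > 1 && digits[0] === "0") digits.shift();
    return digits.join("");
}
console.log(trimLeadingZeros(["0", "0"]));      // "0"
console.log(trimLeadingZeros(["0", "4", "2"])); // "42"
```

With that guard in place, the recursion's n === '0' base case is reached normally and the infinite loop disappears.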