Save integers as floats [duplicate] - javascript

This question already has answers here:
Save integer as float
(2 answers)
Closed 8 years ago.
function prec(numb){
  var numb_string = numb.toString().split('.')
  return numb_string[(numb_string.length - 1)].length
}
function randy(minimum, maximum) {
  var most_accurate = Math.max( prec(minimum), prec(maximum) );
  return ( ( Math.random() * ( maximum - minimum ) + minimum ).toFixed( most_accurate ) );
}
// returns random numbers between these points. 1 decimal place of precision:
console.log( randy(2.4,4.4) );
// returns random numbers between these points. 3 decimal places of precision:
console.log( randy(2.443,4.445) );
// returns random numbers between these points. Want 3 decimal places of precision. However, get 0:
console.log( randy(2.000,4.000) );
// Why do I get 0 decimal places? Because floats are rounded into integers automatically:
console.log( 4.0 ); // want 4.0 to be logged. Instead I get '4'
You don't need to read how the functions work. Just the console logs.
Basically, I need to return a random number between two points to a degree of precision. The precision is automatically derived from the most precise float passed to the randy function.
This works fine when the number range is 3.5, 3.7 or 34.4322, 800.3233, but not 2.0, 3.0 or 4.0000, 5.0000.
Then the number appears to be automatically saved as an integer:
console.log( 2.0 ) //=> 2
I want to extend the Number prototype so that 2.0 is saved as 2.0 so that this function can find the precision:
function prec(numb){
  var numb_string = numb.toString().split('.')
  return numb_string[(numb_string.length - 1)].length
}
It currently thinks that 3.000000000 has a precision of 0 decimal places, because when 3.000000000 is passed in as the numb parameter it is read as 3. I want it read as 3.000000000.
While I can do this with randy((2).toFixed(3), (3).toFixed(3)), it gets unreadable, and it would be undeniably nicer to do this for smaller precisions: randy(2.000, 3.000).
Is this possible?

There is only one number type in JS.
Aside from shortfalls of the type itself (causing headaches in other languages as well), it's a good thing.
If you want to display precision, then use num.toFixed(n); to store the number as a string, rounded to the precision you requested.
You can parse the string later in your code, operate on it, and then call .toFixed(n); on the result, to perpetuate the precision...
But unless you have specific needs, or are lumping several pieces of code together, do you really need to worry about rounding inaccuracies at every step, versus just operating on full-precision values and then rounding/formatting the end results?
Of course there are plenty of other solutions...
...keep track of the mandated precision with an int stored alongside the value... ...or store the floating value itself as an int, scaled to the preferred precision, so that 1.235 becomes [1, 235]...
...anything is doable.
Subclassing, though, is really not going to be the answer.
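A minimal sketch in the spirit of that advice (the explicit precision parameter is my own addition, since a literal like 2.000 is indistinguishable from 2 at runtime):
function randy(minimum, maximum, precision) {
  var value = Math.random() * (maximum - minimum) + minimum;
  return value.toFixed(precision); // a string such as "2.734", rounded/padded to the requested places
}
console.log(randy(2, 4, 3)); // e.g. "3.217"
console.log(randy(2, 4, 0)); // e.g. "3"
toFixed returns a string, so the trailing zeros survive; call parseFloat on the result only when you need to do further arithmetic.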

You can define a class that helps you solve the problem, especially with the toString function:
function NewNumber()
{
  this.value = (typeof(arguments[0]) == "number") ? arguments[0] : 0;
  this.decimal = (typeof(arguments[1]) == "number") ? arguments[1] : 0;
  this.Val = function()
  {
    return parseFloat(this.value.toFixed(this.decimal));
  }
  this.toString = function()
  {
    return (this.value.toFixed(this.decimal)).toString();
  }
}
Create a number like this:
var Num = new NewNumber(4.123545, 3);
// first argument is the value
// and the second one is the decimal count
To get the value of your variable, you should use the function Val like this
console.log(Num.Val()); // this one prints 4.123 on your console
Then the toString function
Num.toString() // it returns "4.123"
(new NewNumber(4,4)).toString(); // it returns "4.0000"
In your functions, use the toString of the NewNumber class to solve your problem.
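For example (my own illustration, not part of the original answer), feeding NewNumber's string form into the asker's prec function makes trailing zeros count toward the precision:
var three = new NewNumber(3, 4);     // value 3, 4 decimal places
console.log(three.toString());       // "3.0000"
console.log(prec(three.toString())); // 4, because the zeros survive in the string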

Related

How to properly deal with Javascript float representation errors? [duplicate]

This question already has answers here:
How to deal with floating point number precision in JavaScript?
(47 answers)
Closed 1 year ago.
For example:
sum = 0.00;
sum += 46.85 * 0.1;
console.log(sum) // 4.6850000000000005
sum += 179.29 * 0.1;
console.log(sum) // 22.613999999999997
I believe I've had this happen with simple additions and simple multiplications as well.
I understand this is a consequence of how computers store floats, which is fine. However, Postgres, as far as I can tell, handles these operations fine with the same numbers. It seems strange that Javascript doesn't, unless I'm missing something.
Anyway, my current fix is to run it like this:
const fixFloatError = (n) => {
  const decimalDigitLength = n.match(/\.(\d+)/)[1].length;
  return parseFloat(parseFloat(n).toFixed(decimalDigitLength - 1));
}
let n = String(46.85 * 0.1);
n = fixFloatError(n);
If you're wondering why I'm converting it to a string beforehand, it's because JavaScript will automatically turn a float like 22.6139999999999997 into 22.614 as it enters the function (which is correctly fixed, regardless of whether you hardcoded that number into the variable or generated it by multiplication), while something like 4.6850000000000005 stays 4.6850000000000005 (unchanged). So, to get a consistent function that works for both cases, I pass the float in as a string to preserve its form.
Surely I'm missing something here and there's a simpler solution?
Just multiply the float by some power of 10 to move the digits you care about into the integer portion, round the result, and divide back down by the same factor.
let sum = 46.85 * 0.1;
console.log(sum) // 4.6850000000000005
sum = (Math.round(sum * 1000000000)) / 1000000000;
console.log(sum); // 4.685
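If you need this in more than one place, the same idea generalizes to a small helper (the name roundTo and the choice of 9 digits are mine, not from the answer):
function roundTo(value, decimals) {
  var factor = Math.pow(10, decimals);
  return Math.round(value * factor) / factor;
}
console.log(roundTo(46.85 * 0.1, 9));  // 4.685
console.log(roundTo(179.29 * 0.1, 9)); // 17.929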

Sigmoid with Large Number in JavaScript

From what I understand, you use a sigmoid function to squash a number into the range 0-1.
Using the function found in this library
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}
This works for numbers 1-36. Any number higher than this will just return 1.
sigmoid(36) -> 0.9999999999999998
sigmoid(37) -> 1
sigmoid(38) -> 1
sigmoid(9000) -> 1
How do you increase the range so this function can handle numbers larger than 36?
A sigmoid function is any function which has certain properties which give it the characteristic s-shape. Your question has many answers. For example, any function whose definition looks like
const k = 2;
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z/k));
}
will fit the bill. The larger the k, the larger the useful domain.
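For example, with k = 100 (my own numbers) the curve saturates much later:
const k = 100;
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z / k));
}
console.log(sigmoid(37));   // ~0.59, no longer pinned at 1
console.log(sigmoid(9000)); // still 1, because 1 + Math.exp(-90) rounds to 1 in double precision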
A sigmoid function doesn't have bounds; it accepts anything from infinitely small to infinitely large values.
JavaScript, on the other hand, rounds numbers, since it uses IEEE 754 floating point.
Anyway, what you can do is rescale your input before passing it to the formula.
Another option is to tinker with the formula's values, most notably the z value.

Simple floating point maths in JavaScript

I need to do some basic floating point math stuff (adding and multiplying money) for a website UI. I know that Javascript floats aren't accurate because of how they're stored, but I also know that somehow it's possible to get the level of accuracy I require. I know this because Google's calculator can do it (type "calculator" into the Goog).
Anyway, I don't want to have to send my little numbers back to the server and wait for a response, so I'm trying to use a library called BigNumbers.js, but I can't figure out how to make it spit out numbers (or strings); no matter what I call, it returns a BigNumber object.
Here's my test code: JSFiddle
floats = [145, 1.44, 1.3];
sum = new BigNumber(0);
for (i = 0; i < floats.length; i++) {
  sum = sum.times(floats[i]);
}
// sum = sum.toDigits(); // returns object
// sum = sum.toString(); // returns 0
console.log(sum); // expecting 271.44, getting object
How can I achieve the expected result? If there's a better library to use, that would be an acceptable answer as well.
Thank you.
You'll want to initialize sum to 1 instead of 0 (and maybe change its name to product), and then call .toString() when you pass it to console.log():
console.log(sum.toString());
Edit: also, as pointed out in a comment, you should set the number of decimal places (to 2, probably) and also set the rounding mode. You can do that via the BigNumber.config() call.
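Putting that together, a sketch of the corrected code (assuming bignumber.js's config(), times(), and toFixed()):
BigNumber.config({ DECIMAL_PLACES: 2, ROUNDING_MODE: BigNumber.ROUND_HALF_UP });

var floats = [145, 1.44, 1.3];
var product = new BigNumber(1);          // start at 1 when multiplying
for (var i = 0; i < floats.length; i++) {
  product = product.times(floats[i]);
}
console.log(product.toFixed(2)); // "271.44"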
You can get by just fine with plain JavaScript floating-point values and the Math.round(...) method to round to cents:
var floats = [145, 1.44, 1.3];
var sum = 1;
for (var i = 0; i < floats.length; i++) {
  sum = Math.round(sum * floats[i] * 100) / 100;
}
console.log(sum.toFixed(2)); // expecting 271.44

Javascript Infinity

Division by 0 gives this special value:
3/0 // Infinity
You can’t play positive and negative infinity against each other:
Infinity - Infinity // NaN (Why?)
It also turns out that “beyond infinity” is still infinity:
Infinity + Infinity // Infinity (this is accepted)
5 * Infinity // Infinity (this is also accepted)
So why does Infinity - Infinity evaluate to NaN? Shouldn't it be Infinity? I also wanted to know why an object can't be converted to a primitive value. Sorry for posting two questions at once; this is the last question I can post. See here:
var obj = {
  valueOf: function () {
    console.log("valueOf");
    return {}; // not a primitive
  },
  toString: function () {
    console.log("toString");
    return {}; // not a primitive
  }
}
Number(obj) //TypeError: Cannot convert object to primitive values
That's how ∞ works in mathematics. Infinity itself is not a number, it is a concept. The general idea is that
∞ + x = ∞ ∀ x
∞ is, obviously, infinitely big. If you subtract an infinitely big thing from another infinitely big thing, you can't define what you have left. If the first infinity is bigger, you'll get a negative result, but if it's smaller then the result will be positive (basic rule of subtraction), but since both are infinitely big you have no way of knowing which is bigger (unless more information is given, such as the context leading to these infinities*). Therefore, as far as the computer is concerned, ∞ - ∞ is mathematically undefined, or Not a Number.
* Example: Let x = the sum of all positive integers, and y = the sum of each positive integer doubled. In this case, we can say that y > x, even though both are infinity.
It's an indeterminate form, so it's not infinity. NaN reflects this the best way possible.
http://en.wikipedia.org/wiki/Indeterminate_form
Related question:
https://math.stackexchange.com/questions/60766/what-is-the-result-of-infinity-minus-infinity
var A = 1/0
var B = 2 * A
var C = B - A
Note that even though B = 2 * A, still A = B (2 * infinity is still infinity, so they are both Infinity), so what do you expect C to be? Infinity or 0?
Infinity is not really a number, mathematically speaking, even though isNaN(1/0) === false.
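As for the second part of the question, which the answers above don't address: Number(obj) throws because the conversion tries valueOf and then toString, and both return objects here, so there is no primitive to fall back on. If either method returns a primitive, the conversion succeeds (my own example):
var obj2 = {
  valueOf: function () {
    return 42; // a primitive, so the conversion stops here
  }
};
console.log(Number(obj2)); // 42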

Linearly scaling a number in a certain range to a new range

I've made a scaling function that takes numbers in an interval [oldMin, oldMax] and scales them linearly to the range [newMin, newMax]. It does not seem to work when using negative values.
function linearScaling(oldMin, oldMax, newMin, newMax, oldValue){
  var newValue;
  if (oldMin !== oldMax && newMin !== newMax) {
    newValue = parseFloat((((oldValue - oldMin) * (newMax - newMin)) / (oldMax - oldMin)) + newMin);
    newValue = newValue.toFixed(2);
  }
  else {
    newValue = error;
  }
  return newValue;
}
This function seems to work when scaling a value from 0 -> 32761 to the range 0 -> 10. However, it does not seem to give the correct output when given a negative new range, i.e. -10 -> 10.
I have done my best to find an answer on this site. However, the person who asked that question didn't mention what he ended up doing to fix it. That question says it could have something to do with mixed-up data types, but I converted everything to a float. Did I miss anything?
Now that you showed how you call your function, I can reproduce your problem - namely that quoted numbers that should map to the negative domain don't.
It seems to be due to the fact that Javascript is very loose about the difference between a number and a string - and if it's not sure what to do about two numbers (because one of them appears to be a string), it assumes you want concatenation rather than addition. In other words - by passing the newMin value as '-10' rather than -10 you confused JS.
As a simple example,
document.write('1' + '-2');
produces
1-2
However,
document.write(1*'1' + 1*'-2');
results in
-1
The expression you had included a "possible concatenation" where it added newMin:
newValue = (((oldValue - oldMin) * (newMax - newMin)) / (oldMax - oldMin)) + newMin;
With newMin set to '-10', you might get newValue to look like 6-10 instead of -4, to give an example. When you then did a parseFloat, Javascript would quietly work its way through the string up to the minus sign, and return 6 instead of evaluating the expression and coming up with -4.
To clear up the confusion, multiply each parameter by 1 to make it "a genuine number":
oldMin = 1*oldMin;
oldMax = 1*oldMax;
newMin = 1*newMin;
newMax = 1*newMax;
oldValue = 1*oldValue;
When you add these lines at the start of your function declaration, everything works smoothly - regardless of how you call the function. Or just call it with the newMin value not in quotes - it is the one causing the trouble in this particular instance.
document.writeln('the new code called with parameter = 100:\n');
document.writeln(linearScaling('0', '32761', '-10', '10', 100)+'<br>');
document.writeln('the old code called with parameter = 100:\n');
document.writeln(linearScalingOld('0.0', '32761.0', '-10.0', '10.0', '100.0')+'<br>');
document.writeln('the old code called with unquoted parameters:\n');
document.writeln(linearScalingOld(0.0, 32761.0, -10.0, 10.0, 100.0)+'<br>');
results in the following:
the new code called with parameter = 100: -9.94
the old code called with parameter = 100: 0.06
the old code called with unquoted parameters: -9.94
I hope this illustrates the cause of the problem, and the solution.
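For reference, here is the whole function with the coercion folded in (my own consolidation of the fix above, using Number() instead of multiplying by 1):
function linearScaling(oldMin, oldMax, newMin, newMax, oldValue) {
  oldMin = Number(oldMin);
  oldMax = Number(oldMax);
  newMin = Number(newMin);
  newMax = Number(newMax);
  oldValue = Number(oldValue);
  if (oldMin === oldMax || newMin === newMax) {
    return NaN; // degenerate range (the original code assigned an undefined 'error' here)
  }
  var newValue = ((oldValue - oldMin) * (newMax - newMin)) / (oldMax - oldMin) + newMin;
  return newValue.toFixed(2);
}
console.log(linearScaling('0', '32761', '-10', '10', 100)); // "-9.94"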
