I have this function:
Calculations.add(this, //CONTEXT
function () { //CALCULATE
this.position.x += (this.movementSpeed.x / 10);
},
function () { //HAVE CALCULATED
return (this.position.x === (tempX + this.movementSpeed.x));
}
);
I have run it, but sometimes the result is wrong, because I know that if it calculates 10 times, then HAVE CALCULATED should be true.
But sometimes it never is... And that kills my app.
Let's say the result should be 138; after the calculation it gives me 138.000000000006, which is not 138, so HAVE CALCULATED is false.
How can I manage this? I can't use rounding to a whole number, because it should be able to return 138.5 if that is the end result.
Hope you understand my question.
Floating-point comparisons should always be done like this:
Math.abs(a - b) < 1e-6
where 1e-6 is an arbitrary error threshold that you determine in advance.
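For example, with the numbers from the question:
var expected = 138;
var computed = 138.000000000006; // the value the calculation actually produced

console.log(computed === expected);                // false
console.log(Math.abs(computed - expected) < 1e-6); // true: equal within the threshold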
You should never compare floating-point values with ===. (The link from Waleed Khan in the comments gives a good explanation of why this happens.)
Instead you can do something like this to check equality of a and b:
if (a < b + 0.0001 && a > b - 0.0001) {
// values are "equal"
}
You could round to a certain number of digits; from another answer on SO, use something like this:
function roundNumber(n, digits) {
    var multiple = Math.pow(10, digits);
    return Math.round(n * multiple) / multiple;
}
This way you do not require fancy comparisons.
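For example, rounding to six digits absorbs the tiny error from the question while keeping a genuine 138.5 intact (the digit count is my choice):
console.log(roundNumber(138.000000000006, 6)); // 138
console.log(roundNumber(138.5, 6));            // 138.5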
How do I divide 12330 by 100 to give me 123.30?
Trying 12330/100 in JS gives 123.3, but I want the 0 at the end to stay.
Also, I need the function to not give .00,
so 100/100 should give 1 and not 1.00.
I tried using .toFixed(2), but it only solved the first case and not the second.
Use toFixed: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed
console.log(
    (12330 / 100).toFixed(2)
);
Here the 2 is the number of decimal places to keep.
Attention: when the number isn't a float, it will still produce number.00 (in most cases this is good behavior),
but if that isn't good for you, see the edited answer below...
New edited answer
If the .00 gives you problems, use the % (remainder) operator: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Remainder
function convert(value) {
    if (value % 1 === 0) {
        return value;
    }
    return value.toFixed(2);
}
// tests
console.log(convert(12330 / 100)); // should return value with toFixed
console.log(convert(100 / 100)); // should return 1 (no toFixed)
console.log(convert(100 / 10)); // should return 10 (no toFixed)
To understand more about this check (value % 1 === 0), see this StackOverflow question on the topic: How do I check that a number is float or integer?
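As a side note, the same check can be written with the built-in Number.isInteger, which is equivalent for finite numbers (a small sketch, not part of the answer above):
console.log(Number.isInteger(12330 / 100)); // false -- 123.3 has a fractional part
console.log(Number.isInteger(100 / 100));   // true -- 1 is an integer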
There are several ways you could do this. @Laaouatni's answer is one example (and if you go with that one, make sure to mark it as the answer).
I'll give two others here.
Using String.prototype.slice:
const intermediate = value.toFixed(2);
const result = intermediate.endsWith(".00") ? intermediate.slice(0, -3) : intermediate;
Using String.prototype.replace:
const result = value.toFixed(2).replace(/\.00$/, "");
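A quick check of the slice variant against the cases from the question (wrapping it in a helper; the name is mine):
function format(value) {
    const intermediate = value.toFixed(2);
    return intermediate.endsWith(".00") ? intermediate.slice(0, -3) : intermediate;
}

console.log(format(12330 / 100)); // "123.30"
console.log(format(100 / 100));   // "1"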
I need to create a sequence of numbers, using while or for, that consists of the sum of the digits of each number.
For example, take a sequence from 1 to 10. The console (once I've written the code) should show just 1, 2, 3, 4, 5, 6, 7, 8, 9, 1. If I take it from 30 to 40, the console should show 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 4.
I need to create code that does this for the numbers from 1 to 100. I don't know how to do it, but in the console I need to see:
1
2
3
4
5
6
7
8
9
1
2
3
4
etc.
I've got some code, but I get only NaN. I don't know why. Could you explain this to me?
for (let i = '1'; i <= 99; i++) {
    let a = Number(i[0]);
    let b = Number(i[1]);
    let b1 = Boolean(b);
    if (b1 == false) {
        console.log('b false', a);
    } else {
        console.log('b true', a + b);
    }
}
I hope you get what I was speaking about.
Although I like the accepted answer, from the question I gather you were asking something else, that is:
30 becomes 3+0=3
31 becomes 3+1=4
37 becomes 3+7=10
Why the original code checks a boolean is beyond the scope of the question.
Here is a simple snippet that does exactly what you ask for:
for (let i = 30; i <= 40; i++) {
    let x = i.toString();
    console.log('numbers from ' + i + ' are added together to become ' + (Number(x[0]) + Number(x[1] || 0)));
}
What we are doing is exactly what Maskin stated: begin with a for loop, then on each iteration convert the number to a string so we can split it; this takes care of the NaN issue.
You don't need to call toString repeatedly: do it once, as in let x, then simply index the result as x[0] and so on.
For the second digit we have a small fallback, (x[1] || 0): use the second character if there is one, otherwise zero. The following works like a charm:
for (let i = 1; i <= 10; i++) {
    let x = i.toString();
    console.log('numbers from ' + i + ' are added together to become ' + (Number(x[0]) + Number(x[1] || 0)));
}
Did you observe what happens to ten?
Here is my real question and solution: what if you don't know how many digits the number has, or for whatever reason you have to start from 100 onwards? We need to make the code smarter:
for (let i = 110; i <= 120; i++) {
    let x = Array.from(String(i), Number);
    console.log(
        x.reduce(function (a, b) { return a + b; })
    );
}
You simply make an array of digits with the Array.from function, then use a simple Array.reduce to add up all the values, and finally log the result to the console.
Nice, simple, and smart.
You got NaN because of "i[0]": after i++, i is a number, and indexing into a number gives undefined. You need to add a toString() call.
for (let i = '1'; i <= 99; i++) {
    let a = Number(i.toString()[0]);
    let b = Number(i.toString()[1]);
    let b1 = Boolean(b);
    if (b1 == false) {
        console.log('b false', a);
    } else {
        console.log('b true', a + b);
    }
}
So the way a for loop works is that you declare a loop variable, then state the loop condition, and then say what happens at the end of each iteration; normally you increment (which means take the variable and add one to it).
When you say let i = '1', what you're actually doing is creating a new string, so when you ask for i[0], it gives you the first character of the string.
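A minimal illustration of why the original loop then produces NaN (my sketch):
let i = '1';
console.log(i[0]);              // "1" -- strings can be indexed
i++;                            // i++ coerces i to a number; i is now 2
console.log(i[0]);              // undefined -- numbers cannot be indexed
console.log(Number(undefined)); // NaN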
You should look up the modulo operator. You want to add the units digit, which you get with the modulo (i % 10), to the tens digit, which you get by dividing by 10 and truncating to an integer.
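A sketch of that approach for two-digit numbers (the variable names are mine):
for (let i = 30; i <= 40; i++) {
    const units = i % 10;            // units digit via the modulo
    const tens = Math.floor(i / 10); // tens digit via division, truncated
    console.log(tens + units);       // 3, 4, 5, ..., 12, 4
}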
As an aside, when you ask a question on StackOverflow, you should ask in a way that means people who have similar questions to you can find their answers.
I want to design a function that would return true most of the time but theoretically could return false.
So far, all I've come up with is (with comments added, due to some confusion):
function mostlyTrue(seed) { // "true" is a reserved word, so the function needs a different name
    // Poop, you can't `seed` Math.random()!
    return Math.random() !== Math.random();
}
// but if I had that seed, and Math.random() was seedable,
// I could make this function return false.
However, this runs into a few limitations.
Math.random() as implemented is not seedable, so calling the random number generator twice in a row (with no other entropy) will, in practice, never return the same number twice.
Math.random() will return a value between 0.0000000000000000 and 0.9999999999999999, which is sixteen digits of precision. So, going by the binomial distribution, the probability of true not being true is (1/9999999999999999)^2, or about 1.0e-32.
What I am trying to build is something that would only return false with a probability of 1/N for some integer N that grows larger and larger. This is purely a thought experiment. There is no constraint on space and time, although if your answer has considered those as well, that's a bonus.
EDIT: I guess, here is another way to ask this question.
Take a look at this Plunker. https://plnkr.co/edit/C8lTSy1fWrbXRCR9i1zY?p=preview
<script src="//cdnjs.cloudflare.com/ajax/libs/seedrandom/2.4.0/seedrandom.min.js"></script>
function f(seed) {
    Math.seedrandom(seed);
    return 0.7781282080210712 === Math.random();
}
console.log(f()); // Behaves as expected
console.log(f(Math.random())); // Pretty much everything returns false
function t(seed) {
    Math.seedrandom(seed);
    return 0.7781282080210712 !== Math.random();
}
console.log(t()); // Returns true.
console.log(t(Math.random())); // All is well with the world.
// But, if you have the right seed!
console.log(f('Udia')); // Holy shit, this returned true!
console.log(t('Udia')); // Holy shit, this returned false!
What is the most interesting way to write a function that returns true? It can run forever, take up as much space as possible, etc. But it must return true (and have the smallest possible probability of returning false).
Fill buffers of whatever size you want with random data, and compare them.
Untested, but try something like this:
const length = 32768;
let values = [
    new Uint8Array(length),
    new Uint8Array(length)
];
window.crypto.getRandomValues(values[0]);
window.crypto.getRandomValues(values[1]);

let i;
for (i = 0; i < length; i++) {
    if (values[0][i] !== values[1][i]) { // stop at the first byte that differs
        break;
    }
}
if (i === length) { // no byte differed: the two random buffers were identical
    console.log('The (nearly) impossible has occurred!');
}
Since Math.random() will (practically) never yield the same number twice in a row, do this:
var improbabilityDrive = Math.random();
var discard = Math.random();

function mostlyTrue() { // "true" is a reserved word, so use another name
    return Math.random() !== improbabilityDrive;
}
Or, if you don't want global variables, just discard the next few results:
function mostlyTrue() {
    var improbabilityDrive = Math.random();
    var discard = Math.random();
    discard = Math.random();
    discard = Math.random();
    //... more discards, if necessary
    return Math.random() !== improbabilityDrive;
}
Edit: Drop Probability each time it's called
OP asked if it's possible to make it less and less likely to return false (I think that is what you meant):
var hitsRequired = 0.0;
var improbabilityDrive = Math.random();

// Increasingly lower chance of 'false' with each call
function superTrue() {
    hitsRequired += 0.1; // set the growth factor here (arithmetic: +=, geometric: *=)
    for (let i = 0; i < hitsRequired; i++) {
        if (trueish()) return true;
    }
    return false;
}
// Same theoretically low chance of 'false' on each call
function trueish() {
    var discard = Math.random();
    discard = Math.random();
    discard = Math.random();
    //... more discards, if necessary
    return Math.random() !== improbabilityDrive;
}
Edit 2: Insanely Low Probability
After re-reading your question, I think you're after the lowest probability you can get. This is far, far beyond any practical need:
// Absurdly low chance of 'false' on every call
function superDuperTrue() {
    for (let i = 0; i < 9007199254740992; i++) {
        if (trueish()) return true;
    }
    return false;
}
The probability that this produced a false is:
(1/4503599627370496)^9007199254740992 = 10^(-10^17.15)
That would, by almost any reasonable measure, be such an absurdly low probability that it can just be assumed to never happen. I'd be surprised if it returned a single false when tried a trillion times per second until the heat death of the universe; putting that into Wolfram Alpha didn't even budge the probability figure (1 trillion tries/second * 10^100 years until the heat death of the universe * 31,536,000 seconds/year * that probability = that probability, to 14 decimal places of accuracy).
Basically, it would never happen, but it is theoretically possible.
At 1,000,000,000,000 tries per second:
For n=0, 38 minutes would yield a 50% chance of a single false.
For n=1, 325 billion years would yield a 50% chance of a single false.
For n=2, 1500000000000000000000000000 years (1.5 * 10^27), or 110000000000000000 times the age of the universe, would yield a 50% chance of a single false.
... Increase n up to 9007199254740992, as above, to make it as implausible as you desire.
One way to tune this yourself would be to repeat the process. E.g.:
function mostlyTrue() { // "true" is a reserved word, so use another name
    var n = 1000;
    var first = Math.random();
    for (var i = 0; i < n; i++) {
        if (Math.random() !== first) {
            return true;
        }
    }
    return false;
}
Now your code in the original question is just the special case of n = 2, and the odds of returning false are (1/9999999999999999)^(n-1).
You can get an adjustable probability with
return Math.random() > 0.00000001; // play with that number
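Wrapped in a function, the number becomes a tunable false probability (the name and shape are mine):
function mostlyTrue(p) {
    // returns false with probability ~p (for p well above 2^-52)
    return Math.random() >= p;
}

console.log(mostlyTrue(0.00000001)); // true in all but roughly 1 in 100 million calls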
Numbers in JS are IEEE 754 doubles, so Math.random() returns values between 0 and 0.999999999999999888977697537484.
Calculating the number of possible unique return values is simple: IEEE 754 doubles have a 52-bit mantissa, so the number of possible return values is 2^52, or 4503599627370496.
This means the smallest possible probability is 1/4503599627370496.
Now, if you REALLY want the smallest possible probability, just compare the output of Math.random() to one of its possible outputs. Note that since not all decimal numbers are representable as floats, you should use a number that has an exact representation, for example 0.5.
So, the smallest possible probability is:
Math.random() === 0.5
which has exactly a 1 in 4503599627370496 chance of happening: the theoretically smallest probability.
So if you want your function to return true most of the time you should do:
function mostlyTrue() { // "true" is a reserved word, so use another name
    return Math.random() !== 0.5;
}
Warning
Note that this may not be what you want. It is very, very rare for this smallest probability to happen. As a test I ran the following code:
for (let i = 0; i < 1000000000; i++) { // loop to one billion
    let foo = Math.random();
    if (foo === 0.5) {
        console.log('>>> ' + i);
    }
}
I ran the above loop five times and only observed the console.log() output once. In theory you should expect to see the event happen once every 4.5 quadrillion calls. This means that if you call the function once each second, you will see it return false roughly once every 143 million years.
Normally we expect 0.1 + 0.2 === 0.3 to be true, but that is not what JavaScript gives us: JavaScript displays decimal floating-point numbers but internally stores binary floating-point numbers, so this returns false.
If we use chrome developer tool console, we'll get the following result:
0.1 + 0.2; // 0.30000000000000004
0.1 + 1 - 1; // 0.10000000000000009
0.1 + 0.2 === 0.3; // returns false, but we expect true
0.1 + 1 - 1 === 0.1; // returns false
Due to rounding errors, as a best practice we should not compare non-integers directly. Instead, take an upper bound for rounding errors into consideration. Such an upper bound is called a machine epsilon.
And here is the epsilon method:
var eps = Math.pow(2, -53);

function checkEq(x, y) {
    return Math.abs(x - y) < eps;
}
Now, if we check, these return true:
checkEq(0.1 + 0.2, 0.3); // returns true
checkEq(0.1 + 1 - 1, 0.1); // returns true
So far so good. But if I check this:
checkEq(0.3 + 0.6, 0.9); // returns false
it is not okay and not what we expect.
So what should we do to get the correct results?
What I've tried to solve this is the following:
var lx, ly, lxi, lyi;

function isFloating(x) {
    return x.toString().indexOf('.');
}

function checkEq(x, y) {
    lxi = x.toString().length - x.toString().indexOf('.') - 1;
    lyi = y.toString().length - y.toString().indexOf('.') - 1;
    lx = isFloating(x) > -1 ? lxi : 0;
    ly = isFloating(y) > -1 ? lyi : 0;
    return x.toFixed(lx) - y.toFixed(ly) === 0;
}
Now it's fixed, and it works fine if I check like this:
checkEq(0.3,0.3); //returns true
But the following returns false
checkEq(0.3+0.6,0.9)
That's because the value is first stored as a binary floating-point number and only converted back to a decimal floating-point number after the calculation.
So now, how can I apply toFixed() to each input, as in checkEq(0.3+0.6, 0.9) applying 0.3.toFixed(lx) and 0.6.toFixed(lx), and only then adding them?
var lx, ly, lxi, lyi;

function isFloating(x) {
    return x.toString().indexOf('.');
}

function checkEq(x, y) {
    x = x.toString().split(/\+|\-|\/|\*/);
    y = y.toString().split(/\+|\-|\/|\*/);
    for (var i = 0; i < x.length && i < y.length; i++) {
        // here too I may be wrong...
        lxi = x[i].toString().length - x[i].toString().indexOf('.') - 1;
        lyi = y[i].toString().length - y[i].toString().indexOf('.') - 1;
        // particularly after this it would go wrong...
        lx = isFloating(x[i]) > -1 ? lxi : 0;
        ly = isFloating(y[i]) > -1 ? lyi : 0;
        // And here I'm badly stuck...
        // take the split-out operands to calculate:
        // e.g. '0.3 + 1 - 1'
        // Number('0.3').toFixed(1) + Number('1').toFixed(0) - Number('1').toFixed(0)
        // But careful, we may not know how many inputs there will be...
    }
    // return x.toFixed(lx) - y.toFixed(ly) === 0;
}
Other answers are also welcome but helping me with my code is greatly appreciated.
Perhaps you should try out some existing JS Math library such as bignumber.js, which supports arbitrary-precision arithmetic. Implementing everything from scratch will be rather time consuming and tedious.
Example
0.3+0.6 //0.8999999999999999
x = new BigNumber('0.3') // "0.3"
y = new BigNumber('0.6') // "0.6"
z = new BigNumber('0.9') // "0.9"
z.equals(x.plus(y)) // true
I think you should take a little larger value for epsilon.
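For example, a common variant scales the tolerance with the magnitude of the inputs instead of using a fixed 2^-53 (this sketch is mine, not taken from math.js):
function nearlyEqual(x, y) {
    // Number.EPSILON is 2^-52; scale it by the size of the inputs
    var eps = Number.EPSILON * Math.max(Math.abs(x), Math.abs(y), 1);
    return Math.abs(x - y) < eps;
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(nearlyEqual(0.3 + 0.6, 0.9)); // true, unlike the fixed-epsilon version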
You can also have a look at math.js: the comparison functions of math.js also check for near equality. Comparison is explained here:
http://mathjs.org/docs/datatypes/numbers.html#comparison
So you can do:
math.equal(0.1 + 0.2, 0.3); // true
math.equal(0.3 + 0.6, 0.9); // true
even better, math.js has support for bignumbers (see docs), so you can do:
math.equal(math.add(math.bignumber(0.1), math.bignumber(0.2)), math.bignumber(0.3)); // use math.add, since + does not work on BigNumber objects
or using the expression parser:
math.config({number: 'bignumber'});
math.eval('0.1 + 0.2'); // returns BigNumber 0.3, not 0.30000000000000004
math.eval('0.1 + 0.2 == 0.3'); // returns true
Discontinuous functions such as equality (but also floor and ceil) are badly affected by rounding errors, and taking an epsilon into account may work in some cases, but may also give an incorrect answer (e.g. abs(x-y) < eps may return true while the exact value of x and y are really different); you should do an error analysis to make sure that it is OK. There is no general way to solve the problem in floating point: this depends on your application. If your inputs are decimal numbers and you just use addition, subtraction and multiplication, a decimal floating-point arithmetic may be OK if the precision is large enough so that all your data can be represented exactly. You can also use a rational arithmetic, such as big-rational (not tried).
What's the best way to detect if a number, is between two other numbers? Is there already a function to do this in the Math object?
There is no specific function, but you can do it like this:
lowNumber < yourNumber && yourNumber < highNumber
Though the code solution is fairly obvious, if you're going to use it a lot, you may want to implement it on Number.prototype for convenience:
Number.prototype.inRange = function (a, b) {
    var n = +this;
    return (n > a && n < b);
};
So you'd use it like this:
(5).inRange( 3, 7 ); // true
Example: http://jsfiddle.net/dTHQ3/
Um, check whether it is greater than the one and less than the other:
var num1 = 3;
var num2 = 5;
var x = 4;
var isBetween = (num1 < x && num2 > x);
if (yourNumber < highNumber && yourNumber > lowNumber) {
    // do something
} else {
    // do something else
}
The only way to optimize this is to guess which is more likely: is the number you're checking more likely to fall below the lower bound, or above the upper bound?
With this in mind, you can take advantage of short-circuiting by placing the more likely failure check first (if the number fails that test, the other test is never evaluated).
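For example (the assumption about which failure is more common is made up for illustration):
var low = 3, high = 7;
var x = 10; // suppose values above the upper bound are the common case

// Put the test most likely to fail first, so && short-circuits
// and the second comparison is usually skipped:
var isBetween = x < high && x > low;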
Even this will save you an amount of time so small it will most likely never be noticed. Perhaps if you were making this check millions of times, you might save a fraction of a second over the alternative.