Equation Creation for Magic Square style logic - javascript

I am trying to bring logic and programming to a currently manually driven process.
We take the weights of 16 different crushing hammers and organize them into four sets of 4, based on how close each set's total weight is to the others'. We are looking for less than 1 pound of difference between all 4 set totals. The weights are known, but I cannot come up with a method to do this programmatically, without pen and paper.
Example below.
        Set A     Set B     Set C     Set D
        39.1      40.1      42.0      41.5
        40.05     41.0      40.05     38.90
        41.2      42.1      41.3      43.1
        38.5      43.60     42.1      41.5
Totals  158.85    166.80    165.45    165.00
As you can see in the first example, Sets C and D are close enough, but Sets A and B are too far apart and need readjusting; due to the severe difference, I would most likely have to rearrange all 4 sets to get within 1 pound of difference across all 4 sets. Mind you, this is all done on paper, and I am looking for a way to plug in all the numbers and have it spit out the best configuration given the hammer weights: no more paper.
        Set A     Set B     Set C     Set D
        42.1      39.1      42.0      41.5
        40.05     41.0      40.05     38.90
        43.60     43.1      41.3      42.1
        38.5      41.20     40.1      41.5
Totals  164.25    164.40    163.45    164.00
I could do this all day, splitting hairs to get as close as possible. The closer we get the weights, the less vibration we experience, and our equipment lasts a lot longer. Does anyone have any thoughts on accomplishing this?

I'm not a good math guy, but I think that in order to arrange ALL 4 SETS within a 1 lb tolerance, all 16 hammers need to have somewhat consistent weights clustered around the target weight.
The simplest approach I can think of is to sort all of them in order and then deal indices 0-3 out to the groups, round after round, until all 16 are assigned; a sketch of that idea follows.
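A rough sketch of that idea in JavaScript (my code, not from the question; a snake order of 0,1,2,3,3,2,1,0,... usually balances better than plain round-robin, since no set always receives the heaviest remaining hammer):

// Sketch: sort hammers heaviest-first, then deal them to the four sets
// in a snake pattern so the running totals stay roughly balanced.
function snakeDeal(weights) {
  var sorted = weights.slice().sort(function (a, b) { return b - a; });
  var sets = [[], [], [], []];
  sorted.forEach(function (w, i) {
    var round = Math.floor(i / 4);
    var pos = i % 4;
    var setIndex = (round % 2 === 0) ? pos : 3 - pos; // reverse every round
    sets[setIndex].push(w);
  });
  return sets;
}

var hammers = [39.1, 40.1, 42.0, 41.5, 40.05, 41.0, 40.05, 38.90,
               41.2, 42.1, 41.3, 43.1, 38.5, 43.60, 42.1, 41.5];
snakeDeal(hammers).forEach(function (s, i) {
  var total = s.reduce(function (a, b) { return a + b; }, 0);
  console.log('Set ' + 'ABCD'[i] + ': ' + s.join(', ') + ' = ' + total.toFixed(2));
});

This is only a heuristic and won't always land inside the 1 lb window. If you need the guaranteed best split, note that there are only 16!/(4!^4 * 4!) = 2,627,625 ways to divide 16 hammers into four unlabeled sets of 4, so exhaustively checking every partition and keeping the one with the smallest spread between set totals is entirely feasible on a modern machine.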

How to make optimal decisions in a poker-like single-player card game?

I need some help to finish my app that helps make optimal decisions in a card game.
Short game description:
Single-player game: 1 player, one deck, only for himself.
24-card deck, card values from 1 to 8, 3 colors of each (Red, Yellow, Blue).
At the start of the game the deck gets randomly shuffled, then the player pulls 5 cards into his hand (19 left in the deck), and from that point he is allowed to either:
Put away one card (this card is no longer in the game) and take one card from the top of the deck in its place,
or
Put away 3 cards that form one of the possible point-granting combinations, then take 3 cards into his hand.
The game ends when the deck runs out and the player has no more possible scoring combinations.
The goal of the game is to obtain as many points as possible.
List of variants (referred to below as combinations):
Three of a kind (8 possible in total):
111 - 20 points
222 - 30 points
333 - 40 points
444 - 50 points
555 - 60 points
666 - 70 points
777 - 80 points
888 - 90 points
Straight (card can have any color, 144 possible in total):
123 - 10 points
234 - 20 points
345 - 30 points
456 - 40 points
567 - 50 points
678 - 60 points
Straight flush (cards have to be in the same color, RRR/YYY/BBB, 18 possible in total):
123 - 50 points
234 - 60 points
345 - 70 points
456 - 80 points
567 - 90 points
678 - 100 points
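For reference, the list above can be generated mechanically. The sketch below (identifiers are my own, not from the app) enumerates every combination with its score; the 144 plain straights and 18 straight flushes are produced together as the 162 color assignments of the 6 base straights, giving the 170 total mentioned further down.

// Sketch: enumerate every scoring combination with its point value.
var COLORS = ['R', 'Y', 'B'];

function allCombos() {
  var combos = [];
  // Three of a kind: one per value (all three colors of that value)
  for (var v = 1; v <= 8; v++) {
    combos.push({
      cards: COLORS.map(function (c) { return { value: v, color: c }; }),
      points: 10 + v * 10
    });
  }
  // Straights lo, lo+1, lo+2 in any colors; single-color ones are straight flushes
  for (var lo = 1; lo <= 6; lo++) {
    COLORS.forEach(function (c1) {
      COLORS.forEach(function (c2) {
        COLORS.forEach(function (c3) {
          var flush = (c1 === c2 && c2 === c3);
          combos.push({
            cards: [{ value: lo, color: c1 },
                    { value: lo + 1, color: c2 },
                    { value: lo + 2, color: c3 }],
            points: (flush ? 40 : 0) + lo * 10
          });
        });
      });
    });
  }
  return combos; // 8 + 162 = 170 combinations
}

console.log(allCombos().length); // 170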
I created the whole game logic in JS but am struggling to write an algorithm that will tell me:
which card to put away from the hand, to get the highest chance of a combination?
or:
which card to put away from the hand, to lose the fewest possible combinations?
I don't really know which approach I should take. Or should I combine them somehow and let a formula decide? Currently I'm reading about risk/reward ratios; maybe that's the way to go?
I have to remember that with each spent card I have fewer and fewer possible combinations (starting with 170), but sometimes in a later stage of the game it's worth sacrificing a cheaper combination for a higher point score.
My past ways of thinking:
Calculate the probability of each possible remaining combination when putting away each card from the hand (i.e., 5 times) and decide which card is the worst.
Calculate the probability of each possible remaining combination for every 2-card subset of the 5-card hand (so 10 times, looking for 1 card to complete the combination) and decide which card in the hand is the worst.
Calculate the probability of each possible remaining combination left in the deck (not counting cards in hand), 3+ moves in advance.
And somehow add those three together to get an optimal, or close-to-optimal, solution?
Should I add potential loss to my calculations, and how should I formulate it? If I take a weak combination, later in the game I cannot obtain higher-scoring ones, since they are mutually exclusive.
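One way to make the first idea concrete is a one-step lookahead: for each candidate discard, score the remaining four cards by the points of every combination they could still complete, weighted by the chance of drawing the missing card. The sketch below does exactly that and nothing more (no multi-move planning, no mutual-exclusion accounting); it reuses the allCombos() helper sketched earlier, and all identifiers are mine rather than from the question.

// Sketch: greedy one-step heuristic for choosing which card to put away.
function sameCard(a, b) { return a.value === b.value && a.color === b.color; }

function handPotential(hand4, deck, combos) {
  var score = 0;
  combos.forEach(function (combo) {
    var missing = combo.cards.filter(function (c) {
      return !hand4.some(function (h) { return sameCard(h, c); });
    });
    if (missing.length === 0) {
      score += combo.points; // combo already complete in hand
    } else if (missing.length === 1 &&
               deck.some(function (d) { return sameCard(d, missing[0]); })) {
      score += combo.points / deck.length; // ~ P(next draw completes it)
    }
  });
  return score;
}

function bestDiscard(hand5, deck, combos) {
  var best = { index: -1, potential: -Infinity };
  hand5.forEach(function (_, i) {
    var rest = hand5.filter(function (_, j) { return j !== i; });
    var p = handPotential(rest, deck, combos);
    if (p > best.potential) best = { index: i, potential: p };
  });
  return best.index; // index in hand5 of the card to put away
}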

What is this (128 + 127 *) part of these formulas for riffwave.js?

I'm trying to do some JavaScript music synth in the browser, and I came across riffwave.js. From this question here, the answer gives a hint on how one could use riffwave.js.
I've worked through it a bit to figure out some things like multiple tones, and I understand nearly all of it, but I don't know why there's the 128 + 127 * in there.
It also shows up here in this demo page.
Can anyone offer an explanation or tell me how I can use that part to modify the program?
Thanks.
The examples that you referenced show an equation of the form:
A = 128 + 127 * sin(...)
Since the sin function varies from -1 to +1, multiplying it by the coefficient 127 and adding the constant 128 gives a result that varies between 1 (128 - 127) and 255 (128 + 127). In other words, this is basically a sin function whose output fits conveniently in 8 bits (1 byte).
Like mti2935 said, it's a convenient way of mapping a number that can vary from -1 to 1 to an integer varying from 1 to 255, which fits nicely in a byte. In particular, it seems that riffwave internally represents sound in 8-bit .wav format, so this converts floating-point numbers into the 8-bit integer format required to actually play the sound.
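To make that concrete, here is a minimal sketch (my code, not from the linked answer) that builds one second of 8-bit samples for a 440 Hz tone; an array like this is the kind of data riffwave.js wraps in a WAV header:

// Sketch: one second of a 440 Hz sine tone as 8-bit (1-byte) samples.
// 128 + 127 * sin(...) maps sin's [-1, +1] range onto [1, 255].
var sampleRate = 44100;
var frequency = 440; // A4
var data = [];
for (var i = 0; i < sampleRate; i++) {
  data[i] = Math.round(128 + 127 * Math.sin(2 * Math.PI * frequency * i / sampleRate));
}
// data[] is now ready to be handed to riffwave.js to produce a playable WAV.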

Parameters of a Gaussian distribution generated using the central limit theorem

In a software I am working on (sensor simulation), I needed to generate normally distributed noise for simulated sensor signals. I used the central limit theorem. I generated 20 random numbers and built an average out of them to approximate the gaussian distribution.
So I took the "measured" signal and generated 20 numbers from -noiseMax to +noiseMax and averaged them. I added the result to the signal to have noise.
Now, for my university, I have to describe this Gaussian distribution by its mean and variance. OK, the mean will be 0, but I have absolutely no idea how to convert noiseMax in my program into the variance. Googling hasn't helped much.
I was not sure if SO is the right SE platform for this question. Sorry if it isn't.
OK, so the central limit theorem says that the average of a sufficiently large number of uniformly distributed variables will be approximately normal. In the statistics classes I have taken, 30 is usually used as the cutoff, so you might want to increase your simulation's "sample size".
However, you can find the Standard Deviation of your average as follows regardless of "sample size".
The standard deviation of a uniform variable on [a, b] is (b - a)/sqrt(12); here a = -noiseMax and b = +noiseMax, so that is 2*noiseMax/sqrt(12) = noiseMax/sqrt(3).
Variances add when you add independent variables, so the standard deviation of the sum of n of these variables is sqrt(n * (noiseMax/sqrt(3))^2) = noiseMax*sqrt(n/3).
Dividing by n to get the average gives a final standard deviation of noiseMax/sqrt(3*n). In your case (n = 20), sigma = noiseMax * 0.12909944487.
From a theoretical POV, this is known as the Irwin-Hall distribution.
The simplest way to produce an approximate N(0,1) is to take the sum of 12 uniform random numbers and subtract 6; no scaling is needed.
In general, to see how the variance is computed, take a look at
http://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution
I would also recommend looking at the Table of Numerical Values in the following article: http://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule.
For example, if one uses the sum of 12 uniform numbers (minus 6), then the minimum value would be at -6 (exactly -6*sigma) and the maximum at +6 (exactly +6*sigma). Looking at the table, what would be the expected frequency outside that range? Answer: 1/506797346. So roughly one out of half a billion events should land outside +-6 sigma, but an Irwin-Hall(12) RNG will never produce it. From that you can judge whether it is OK for your particular simulation.
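Both recipes are a couple of lines of code. A sketch (noiseMax and the 20-sample average are from the question; the function names are mine):

// Sketch: approximate Gaussian noise as the average of n uniform samples.
// Result: mean 0, standard deviation sigma = noiseMax / sqrt(3 * n).
function cltNoise(noiseMax, n) {
  var sum = 0;
  for (var i = 0; i < n; i++) {
    sum += (Math.random() * 2 - 1) * noiseMax; // uniform on [-noiseMax, +noiseMax]
  }
  return sum / n;
}

// Irwin-Hall shortcut: the sum of 12 uniforms on [0, 1] has variance 1,
// so subtracting the mean (6) gives an approximate standard normal N(0, 1).
function approxStdNormal() {
  var sum = 0;
  for (var i = 0; i < 12; i++) sum += Math.random();
  return sum - 6;
}

var noiseMax = 0.5, n = 20;
console.log('sigma =', noiseMax / Math.sqrt(3 * n)); // ~ noiseMax * 0.129
console.log('noise sample:', cltNoise(noiseMax, n));
console.log('N(0,1) sample:', approxStdNormal());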

javascript coderbyte CoinDeterminer [duplicate]

I've been reviewing some dynamic programming problems, and I have had a hard time wrapping my head around some code regarding finding the smallest number of coins to make change.
Say we have coins worth 25, 10, and 1, and we are making change for 30. Greedy would return 25 and five 1s (6 coins), while the optimal solution is three 10s (3 coins). Here is the code from the book on this problem:
def dpMakeChange(coinValueList, change, minCoins):
    for cents in range(change + 1):
        coinCount = cents
        for j in [c for c in coinValueList if c <= cents]:
            if minCoins[cents - j] + 1 < coinCount:
                coinCount = minCoins[cents - j] + 1
        minCoins[cents] = coinCount
    return minCoins[change]
If anyone could help me wrap my head around this code (line 4 is where I start to get confused), that would be great. Thanks!
It looks to me like the code is solving the problem for every cent value up until target cent value. Given a target value v and a set of coins C, you know that the optimal coin selection S has to be of the form union(S', c), where c is some coin from C and S' is the optimal solution for v - value(c) (excuse my notation). So the problem has optimal substructure. The dynamic programming approach is to solve every possible subproblem. It takes cents * size(C) steps, as opposed to something that blows up much more quickly if you just try to brute force the direct solution.
def dpMakeChange(coinValueList, change, minCoins):
    # Solve the problem for each number of cents less than the target
    for cents in range(change + 1):
        # At worst, it takes all pennies, so make that the base solution
        coinCount = cents
        # Try all coin values less than the current number of cents
        for j in [c for c in coinValueList if c <= cents]:
            # See if a solution to the current number of cents minus the value
            # of the current coin, with one more coin added, is the best
            # solution so far
            if minCoins[cents - j] + 1 < coinCount:
                coinCount = minCoins[cents - j] + 1
        # Memoize the solution for the current number of cents
        minCoins[cents] = coinCount
    # By the time we're here, we've built the solution to the overall problem,
    # so return it
    return minCoins[change]
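Since the question itself is tagged JavaScript, here is the same bottom-up DP translated (a sketch; like the book's version, it assumes a 1-unit coin exists so the "all pennies" base case is valid):

// Sketch: the book's dpMakeChange translated to JavaScript.
function dpMakeChange(coinValueList, change) {
  var minCoins = new Array(change + 1).fill(0);
  for (var cents = 0; cents <= change; cents++) {
    var coinCount = cents; // worst case: all pennies
    for (var k = 0; k < coinValueList.length; k++) {
      var j = coinValueList[k];
      if (j <= cents && minCoins[cents - j] + 1 < coinCount) {
        coinCount = minCoins[cents - j] + 1;
      }
    }
    minCoins[cents] = coinCount;
  }
  return minCoins[change];
}

console.log(dpMakeChange([25, 10, 1], 30)); // 3 (three dimes, not 25 + five 1s)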
Here is a way to think about the coin changing problem that may be useful, if you are comfortable with graph theory.
Assume you have a graph defined in the following way:
There is one node for every unit of money (e.g., pennies) from 0 up to the value you are interested in (e.g., 39 cents, or whatever.)
There is one arc between any two nodes separated by exactly the value of a coin you are allowed to use (e.g., an arc between 34 cents and 29 cents if you are allowed to use nickels).
Now you can think of the coin changing problem as a shortest path problem from your value of interest down to zero, because the number of coins will be exactly the same as the number of arcs in your path.
The algorithm doesn't use graph theory terminology, but it is doing basically the same thing: the outer loop ranges over all the "cents" values (the nodes, in the graph framework) and the inner loop ranges over all the arcs leaving the current node (the values in coinValueList). Together, they are looking for the shortest path from zero up to your value of interest. (Value down to zero, zero up to value, doesn't matter. Traditionally we search downward to zero, though.)
I only really started to understand dynamic programming when I realized many problems could be cast as graph problems. (Be careful, though-- not all of them can. Some are hypergraphs, and some are probably not even that. But it helped me a lot.)
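For completeness, here is the graph view written out as a breadth-first search (a sketch in JavaScript, the question's language; unlike the DP above it doesn't assume a 1-unit coin, and it reports unreachable amounts):

// Sketch: BFS from `change` down to 0, where each step subtracts one coin
// value; the number of BFS levels traversed is the minimum number of coins.
function minCoinsBFS(coinValues, change) {
  var dist = new Array(change + 1).fill(-1);
  dist[change] = 0;
  var queue = [change];
  while (queue.length > 0) {
    var node = queue.shift();
    if (node === 0) return dist[0];
    for (var i = 0; i < coinValues.length; i++) {
      var next = node - coinValues[i];
      if (next >= 0 && dist[next] === -1) {
        dist[next] = dist[node] + 1;
        queue.push(next);
      }
    }
  }
  return -1; // exact change cannot be made with these coins
}

console.log(minCoinsBFS([25, 10, 1], 30)); // 3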
I think the fourth line is confusing because while Python can select/filter in a list comprehension (transform(x) for x in iterable if condition(x)), it can't do the same in its standard for x in iterable: statement.
So one (cheesy, IMO) way people get around that is to weld the two together. They create a list comprehension which actually does no transformation (thus the c for c in coinValueList) just so they can tack the if c <= cents clause on, and then use that as the iterable for a standard for x in iterable: statement. I suspect that's where some of your confusion is coming from.
An alternate way to have written that line might have been:
...
for eachCoinValue in filter(lambda x: x <= cents, coinValueList):
...
Or even more clearly, with an "intention revealing variable" would be:
...
smallEnoughCoins = filter(lambda each: each <= cents, coinValueList)
for each in smallEnoughCoins:
...

Hash 32bit int to 16bit int?

What are some simple ways to hash a 32-bit integer (e.g. IP address, e.g. Unix time_t, etc.) down to a 16-bit integer?
E.g. hash_32b_to_16b(0x12345678) might return 0xABCD.
Let's start with this as a horrible but functional example solution:
function hash_32b_to_16b(val32b) {
return val32b % 0xffff;
}
Question is specifically about JavaScript, but feel free to add any language-neutral solutions, preferably without using library functions.
The context for this question is generating unique IDs (e.g. a 64-bit ID might be composed of several 16-bit hashes of various 32-bit values). Avoiding collisions is important.
Simple = good. Wacky+obfuscated = amusing.
The key to maximizing the preservation of entropy of some original 32-bit 'signal' is to ensure that each of the 32 input bits has an independent and equal ability to alter the value of the 16-bit output word.
Since the OP is requesting a bit-size which is exactly half of the original, the simplest way to satisfy this criterion is to xor the upper and lower halves, as others have mentioned. Using xor is optimal because, as is obvious from the definition of xor, independently flipping any one of the 32 input bits is guaranteed to change the value of the 16-bit output.
The problem becomes more interesting when you need further reduction beyond just half-the-size, say from a 32-bit input to, let's say, a 2-bit output. Remember, the goal is to preserve as much entropy from the source as possible, so solutions which involve naively masking off the two lowest bits with (i & 3) are generally heading in the wrong direction; doing that guarantees that there's no way for any bits except the unmasked bits to affect the result, and that generally means there's an arbitrary, possibly valuable part of the runtime signal which is being summarily discarded without principle.
Following from the earlier paragraph, you could of course iterate with xor three additional times to produce a 2-bit output with the desired property of being equally-influenced by each/any of the input bits. That solution is still optimally correct of course, but involves looping or multiple unrolled operations which, as it turns out, aren't necessary!
Fortunately, there is a nice technique of only two operations which gives the same optimal result for this situation. As with xor, it not only ensures that, for any given 32-bit value, twiddling any input bit will result in a change to the 2-bit output, but also that, given a uniform distribution of input values, the distribution of 2-bit output values will also be perfectly uniform. In the current example, the method divides the 4,294,967,296 possible input values into exactly 1,073,741,824 each of the four possible 2-bit hash results { 0, 1, 2, 3 }.
The method I mention here uses specific magic values that I discovered via exhaustive search, and which don't seem to be discussed very much elsewhere on the internet, at least for the particular use under discussion here (i.e., ensuring a uniform hash distribution that's maximally entropy-preserving). Curiously, according to this same exhaustive search, the magic values are in fact unique, meaning that for each of target bit-widths { 16, 8, 4, 2 }, the magic value I show below is the only value that, when used as I show here, satisfies the perfect hashing criteria outlined above.
Without further ado, the unique and mathematically optimal procedure for hashing 32-bits to n = { 16, 8, 4, 2 } is to multiply by the magic value corresponding to n (unsigned, discarding overflow), and then take the n highest bits of the result. To isolate those result bits as a hash value in the range [0 ... (2ⁿ - 1)], simply right-shift (unsigned!) the multiplication result by 32 - n bits.
The "magic" values, and C-like expression syntax are as follows:
Method
Maximum-entropy-preserving hash for reducing 32 bits to. . .
Target Bits Multiplier Right Shift Expression [1, 2]
----------- ------------ ----------- -----------------------
16 0x80008001 16 (i * 0x80008001) >> 16
8 0x80808081 24 (i * 0x80808081) >> 24
4 0x88888889 28 (i * 0x88888889) >> 28
2 0xAAAAAAAB 30 (i * 0xAAAAAAAB) >> 30
Maximum-entropy-preserving hash for reducing 64 bits to. . .
Target Bits Multiplier Right Shift Expression [1, 2]
----------- ------------------ ----------- -------------------------------
32 0x8000000080000001 32 (i * 0x8000000080000001) >> 32
16 0x8000800080008001 48 (i * 0x8000800080008001) >> 48
8 0x8080808080808081 56 (i * 0x8080808080808081) >> 56
4 0x8888888888888889 60 (i * 0x8888888888888889) >> 60
2 0xAAAAAAAAAAAAAAAB 62 (i * 0xAAAAAAAAAAAAAAAB) >> 62
Notes:
Use unsigned multiply and discard any overflow (64-bit multiply is not needed).
If isolating the result using right-shift (as shown), be sure to use an unsigned shift operation.
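One practical note for JavaScript specifically (my addition, not part of the tables above): the plain * operator can't be used for the multiply, because the full product of two 32-bit values exceeds the 53-bit integer precision of JS numbers. Math.imul multiplies modulo 2^32, which is exactly the "discard overflow" behavior called for:

// Sketch: the 32 -> 16 bit multiply-then-shift hash in JavaScript.
// Math.imul keeps only the low 32 bits of the product; >>> 16 then
// extracts the top 16 bits of that result as an unsigned value.
function hash_32b_to_16b(val32b) {
  return Math.imul(val32b, 0x80008001) >>> 16;
}

console.log(hash_32b_to_16b(0x12345678).toString(16));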
Further discussion
I find all this quite cool. In practical terms, the key information-theoretical requirement is the guarantee that, for any m-bit input value and its corresponding n-bit hash value result, flipping any one of the m source bits always causes some change in the n-bit result value. Now although there are 2ⁿ possible result values in total, one of them is already "in-use" (by the result itself) since "switching" to that one from any other result would be no change at all. This leaves 2ⁿ - 1 result values that are eligible to be used by the entire set of m input values flipped by a single bit.
Let's consider an example; in fact, to show how this technique might seem to border on spooky or downright magical, we'll consider the more extreme case where m = 64 and n = 2. With 2 output bits there are four possible result values, { 0, 1, 2, 3 }. Assuming an arbitrary 64-bit input value 0x7521d9318fbdf523, we obtain its 2-bit hash value of 1:
(0x7521d9318fbdf523 * 0xAAAAAAAAAAAAAAAB) >> 62 // result --> '1'
So the result is 1 and the claim is that no value in the set of 64 values where a single-bit of 0x7521d9318fbdf523 is toggled may have that same result value. That is, none of those 64 other results can use value 1 and all must instead use either 0, 2, or 3. So in this example it seems like every one of the 2⁶⁴ input values—to the exclusion of 64 other input values—will selfishly hog one-quarter of the output space for itself. When you consider the sheer magnitude of these interacting constraints, can a simultaneously satisfying solution overall even exist?
Well sure enough, to show that (exactly?) one does, here are the hash result values, listed in order, for the inputs obtained by flipping a single bit of 0x7521d9318fbdf523 (one at a time), from the MSB (position 63) down to the LSB (position 0).
3 2 0 3 3 3 3 3 3 0 0 0 3 0 3 3 0 3 3 3 0 0 3 3 3 0 0 3 3 0 3 3 // continued…
0 0 3 0 0 3 0 3 0 0 0 3 0 3 3 3 0 3 0 3 3 3 3 3 3 0 0 0 3 0 0 3 // notice: no '1' values
As you can see, there are no 1 values, which entails that every bit in the source "as-is" must be contributing to influence the result (or, if you prefer, the de facto state of each-and-every bit in 0x7521d9318fbdf523 is essential to keeping the entire overall result from being "not-1"). Because no matter what single-bit change you make to the 64-bit input, the 2-bit result value will no longer be 1.
Keep in mind that the "missing-value" table shown above was dumped from the analysis of just the one randomly-chosen example value 0x7521d9318fbdf523; every other possible input value has a similar table of its own, each one eerily missing its owner's actual result value while yet somehow being globally consistent across its set-membership. This property essentially corresponds to maximally preserving the available entropy during the (inherently lossy) bit-width reduction task.
So we see that every one of the 2⁶⁴ possible source values independently imposes, on exactly 64 other source values, the constraint of excluding one of the possible result values. What defies my intuition about this is that there are untold quadrillions of these 64-member sets, each of whose members also belongs to 63 other, seemingly unrelated bit-twiddling sets. Yet somehow despite this most confounding puzzle of interwoven constraints, it is nevertheless trivial to exploit the one (I surmise) resolution which simultaneously satisfies them all exactly.
All this seems related to something you may have noticed in the tables above: namely, I don't see any obvious way to extend the technique to the case of compressing down to a 1-bit result. In this case, there are only two possible result values { 0, 1 }, so if any/every given (e.g.) 64-bit input value still summarily excludes its own result from being the result for all 64 of its single-bit-flip neighbors, then that now essentially imposes the other, only remaining value on those 64. The math breakdown we see in the table seems to be signalling that a simultaneous result under such conditions is a bridge too far.
In other words, the special 'information-preserving' characteristic of xor (that is, its luxuriously reliable guarantee that, as opposed to and, or, etc., it c̲a̲n̲ and w̲i̲l̲l̲ always change a bit) not surprisingly exacts a certain cost, namely, a fiercely non-negotiable demand for a certain amount of elbow room—at least 2 bits—to work with.
I think this is the best you're going to get. You could compress the code to a single line but the var's are there for now as documentation:
function hash_32b_to_16b(val32b) {
var rightBits = val32b & 0xffff; // Right-most (low) 16 bits
var leftBits = val32b & 0xffff0000; // Left-most (high) 16 bits
leftBits = leftBits >>> 16; // Shift the left-most 16 bits to a 16-bit value
return rightBits ^ leftBits; // XOR the left-most and right-most bits
}
Given the parameters of the problem, the best solution would have each 16-bit hash correspond to exactly 2^16 32-bit numbers. It would also, IMO, hash sequential 32-bit numbers differently. Unless I'm missing something, I believe this solution does both of those things.
I would argue that security cannot be a consideration in this problem, as the hashed value is just too few bits. I believe that the solution I gave provides an even distribution of 32-bit numbers to 16-bit hashes.
This depends on the nature of the integers.
If they can contain some bit-masks, or can differ by powers of two, then simple XORs will have high probability of collisions.
You can try something like (i>>16) ^ ((i&0xffff) * p) with p being a prime number.
Security hashes like MD5 are all good, but they are obviously overkill here. Anything more complex than CRC16 is overkill.
I would say just apply a standard hash like sha1 or md5 and then grab the last 16 bits of that.
Assuming that you expect the least significant bits to 'vary' the most, I think you're probably going to get a good enough distribution by just using the lower 16-bits of the value as a hash.
If the numbers you're going to hash won't have that kind of distribution, then the additional step of xor-ing in the upper 16 bits might be helpful.
Of course this suggestion is if you're intending to use the hash merely for some sort of lookup/storage scheme and aren't looking for the crypto-related properties of non-guessability and non-reversibility (which the xor-ing suggestions don't really buy you either).
Something simple like this....
function hash_32b_to_16b(val32b) {
var h = hmac(secretKey, sha512);
var v = val32b;
for(var i = 0; i < 4096; ++i)
v = h(v);
return v % 0xffff;
}
