Changing a value gradually for animation - javascript

Let's suppose I have a variable dimension which I want to change from a value a to a value b.
a and b are integers, and the first is greater than the second.
To do this linearly I should do:
while (dimension < a)
dimension = dimension + 1
But what if I want it to grow exponentially? I totally suck at math and I can't figure this out. I was trying:
while (dimension < a)
dimension = dimension * dimension
The point is that the value gets out of control almost immediately; it gets huge (way bigger than a).
Even if I try to use a huge scale (by setting a to a gigantic value) I can't get this done... And yet it seems these guys did it, and I think it should be easy too!
They also manage to do the same with all sorts of crazy equations, all of which would be easy for me if only I were able to implement just one!
I just want a value to increase from 1 to 2 but gradually...
Hope someone can help, thanks!

One extremely simple way to do it is to keep a second variable, a growth factor that increases a little each iteration, and multiply your value by it. Example, in your case:
step = 1.2;
while (dimension < a) {
    dimension = Math.min(dimension * step, a);
    step += 0.1;
}
Tweaking the values will probably be necessary to achieve the exact desired effect, but that's the general idea of it.
The call to Math.min makes sure that the value never overshoots your goal.
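If the goal is simply to ease a value from 1 to 2 (or any a to b) over a fixed time, another common pattern is to drive the value from a normalized progress t between 0 and 1 and run t through an easing curve, instead of multiplying the value by itself. A minimal sketch for the browser; the animate helper and the easeOutQuad curve are illustrative names, not anything from the code above:

// Ease a value from `from` to `to` over `duration` milliseconds.
function easeOutQuad(t) {
    return t * (2 - t); // fast at the start, slowing down near the end
}

function animate(from, to, duration, onUpdate) {
    var start = Date.now();
    function tick() {
        var t = Math.min((Date.now() - start) / duration, 1); // progress in [0, 1]
        onUpdate(from + (to - from) * easeOutQuad(t));
        if (t < 1) requestAnimationFrame(tick);
    }
    tick();
}

// Example: grow `dimension` from 1 to 2 over half a second.
var dimension = 1;
animate(1, 2, 500, function (value) {
    dimension = value;
});

Swapping easeOutQuad for another curve (quadratic, cubic, exponential, ...) changes the feel without touching the rest of the code.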

Related

Frame rate independent exponential interpolation

I am creating a game.
I usually start with quick-and-dirty get-the-job-done code, and improve it later.
And precisely, I am trying to improve that.
Let's present the problem: the player controls an aircraft. When pressing a key, the aircraft rotates (a "pitch") and goes down (or up, depending on the key pressed).
When the player releases the key, the aircraft goes back to horizontal.
The maximum angle for the aircraft should be reached quickly, it's not a simulation. Think of Starfox.
The quick and dirty approach is as follows: each frame, I check if a relevant key is pressed.
The output of this step is a variable that contains either 0, -1 or +1, depending on whether the aircraft should be horizontal, going down or going up.
Now, I do the following formula:
pitch = pitch*0.9 + maxAngle * turn * 0.1
where turn is the variable obtained above, containing 0, -1 or +1.
This produces a nice effect. It's an interpolation, but not a linear one, which makes it more "fun" to watch.
Here is the problem: this formula doesn't contain the length of a frame. It's frame rate dependent.
I tried to extract the general formula. First, 0.9 and 0.1 are obviously what they are because they sum to 1 (I didn't try to have one of them less than 0 or one bigger than 2).
If I write the per-frame update as
a1=a0*x + b*(1-x)
I arrive at the general formula of
an = a0*(x^n) + b*(1-x)*(x^(n-1) + x^(n-2) + x^(n-3) + ... + 1 )
By assuming a certain frame length I could maybe inject it into that somehow, but I still don't know how to properly turn it into a function (especially the sum of powers of x, which I don't know how to factorize).
The second problem is that, currently, the user can press and release the key as they like. By that, I mean the b in the equation can change (and the solution I attempted does not take that into account).
So, this is the problem. In short, how to reverse engineer my own solution to inject the frame length in it so it becomes frame rate independent?
Please note that you can assume that the frame rate in the current program IS stable.
I am quite sure I am not the first one encountering such a problem. If you don't have a solution, hints are also welcome, of course.
Thanks
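For what it's worth, the sum x^(n-1) + x^(n-2) + ... + 1 is a geometric series equal to (1 - x^n) / (1 - x), so the general formula collapses to an = a0*x^n + b*(1 - x^n) = b + (a0 - b)*x^n. In other words, n applications of the per-frame blend are one blend with the factor x^n, which suggests one common way to make the formula frame-rate independent: raise the 0.9 to the power dt / dt0, where dt is the measured frame time and dt0 is the frame length the constant was tuned for. A sketch under those assumptions (REFERENCE_DT and updatePitch are made-up names, not from the question):

// Assumed: dt is the elapsed frame time in ms, REFERENCE_DT is the frame
// length (in ms) that the original constant 0.9 was tuned for.
var REFERENCE_DT = 1000 / 60;

function updatePitch(pitch, turn, maxAngle, dt) {
    var target = maxAngle * turn;                // turn is -1, 0 or +1 and may change every frame
    var keep = Math.pow(0.9, dt / REFERENCE_DT); // frame-rate corrected version of the 0.9
    return target + (pitch - target) * keep;     // same blend as pitch*keep + target*(1-keep)
}

Because each step only depends on the current pitch and the current target, the player pressing or releasing keys (changing b) between frames is handled automatically.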

Most efficient way to fill a typed array

I have a lot of points represented by an id and their position as lat long coordinates. Every 10 seconds, a point is able to move and its position is updated. There may be 100,000 points at most but this number can also vary through time.
I need to send all this data to a user when it connects to my server.
To achieve this, I put them in a typed array like so:
var data = new Float32Array(points.length * 3);
for (var i = 0; i < points.length; i++) {
    data[i*3] = points[i].id;
    data[i*3+1] = points[i].position.lat;
    data[i*3+2] = points[i].position.long;
}
The thing is, doing this takes about 0.9 milliseconds on my computer, which isn't bad but not quite satisfying. The surprising thing is that when the three assignment lines (lines 3, 4 and 5) are commented out, the loop is of course a lot faster, but once any one of them is uncommented it doesn't change much whether the other two are commented or not; the time is very similar.
I would like to know if I am doing something wrong here or if there is a faster way to achieve what I want.
Thanks in advance,
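One thing worth trying (a sketch, not a guaranteed win; engines differ, so measure it): hoist the repeated property-chain lookups into locals, keep a running write index instead of recomputing i*3, and reuse the Float32Array between updates instead of reallocating it. Also note that a Float32Array only holds about 7 significant decimal digits, so large ids or high-precision coordinates may get rounded.

var len = points.length;
var data = new Float32Array(len * 3); // consider allocating this once and reusing it
for (var i = 0, j = 0; i < len; i++, j += 3) {
    var p = points[i];                // cache the point and its position to avoid
    var pos = p.position;             // repeated property lookups inside the loop
    data[j] = p.id;
    data[j + 1] = pos.lat;
    data[j + 2] = pos.long;
}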

Finding the largest possible scaling factor for squares that are box packed given the dimensions of the box

My problem is as follows:
I have a set of values V1, V2, ... Vn
I have a function f(V) = g * V, where g is a scaling factor, that maps these values to another set of values A1, A2, ... An. These values correspond to areas of squares.
I also have W (width) and H (height) variables. And finally, I have a box packing algorithm (This one to be specific), that takes the W and H variables, and the A1 ... An areas, and tries to find a way to pack the areas into a box of size W x H. If the areas A are not too big, and the box packing algorithm successfully manages to fit the areas into the box, it will return the positions of the squares (the left-top coordinates, but this is not relevant). If the areas are too big, it will return nothing.
Given the values V and the dimensions of the box W and H, what is the highest value of g (the scaling factor in f(V)) that still fits the box?
I have tried to create an algorithm that initially sets g to (W x H) / sum(V1, V2, ... Vn). If the values V are distributed in such a way that they fit exactly into the box without leaving any space in between, this would give me a solution instantly. In reality this never happens, but it seems like a good starting point. With this initial value of g I would calculate the areas A, which are then fed to the box packing algorithm. The box packing algorithm will fail (return nothing), after which I decrease g by 0.01 (a completely arbitrary value established by trial and error) and try again. This cycle repeats until the box packing algorithm succeeds.
While this solution works, I feel like there should be faster and more accurate ways to determine g. For example, depending on how big W and H are compared to the sum of the values V, there should be a way to pick a better step than 0.01: if the difference is extremely big the algorithm takes really long, while if it is extremely small the search is fast but very crude. In addition, I feel like there should be a more efficient method than just brute-forcing it like this. Any ideas?
You're on a good track with your method, I think!
I think you shouldn't decrease your value by a fixed amount but rather approach the target with ever smaller steps.
It helps that you already have a good starting value. First you could decrease g by something like 0.1 * g and check if your packing succeeds; if not, continue decreasing with the same step; once it packs correctly, increase g again with a smaller step (like step = step / 2).
At some point your steps become very small and you can stop searching (defining "small" is up to you).
You can use a binary search approach. If you have two values of g such that a packing exists for one (g1) but not for the other (g2), try the value halfway between them, h = (g1 + g2) / 2. If a packing exists for h, you have a new, larger best g and can repeat the check with h and g2. If a packing doesn't exist, repeat the check with g1 and h.
With each step the interval containing the maximum feasible value is halved, so you can get the final result as precise as you like with more iterations.
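A sketch of that bisection, assuming a hypothetical pack(areas, W, H) helper that returns positions on success and null on failure, and assuming that whenever some g packs, every smaller g packs too (with a heuristic packer that may not be strictly true, but it tends to hold well enough):

function findScale(values, W, H, pack) {
    var sum = values.reduce(function (s, v) { return s + v; }, 0);
    var hi = (W * H) / sum; // at this g the total area equals the box area; any larger g cannot fit
    var lo = 0;             // g = 0 trivially fits
    var EPS = 1e-4 * hi;    // stop once the interval is small relative to the bound
    while (hi - lo > EPS) {
        var mid = (lo + hi) / 2;
        var areas = values.map(function (v) { return mid * v; });
        if (pack(areas, W, H)) {
            lo = mid;       // mid fits: the best g is at least mid
        } else {
            hi = mid;       // mid does not fit: the best g is below mid
        }
    }
    return lo;              // largest g known to pack, within EPS
}

Each iteration halves the interval, so reaching a tolerance of one ten-thousandth of the initial bound takes about 14 packing attempts instead of potentially thousands of fixed 0.01 steps.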

Improving Javascript Mandelbrot Plot Performance

I have written a javascript program which plots the Mandelbrot set with the normal pretty colours approaching the boundary. I was planning on adding a zoom function next but it is far far too slow for this to be sensible.
I have posted the most important portion of the code here:
while (x * x < 2 && y * y < 2 && iteration < max_iteration) {
    xtemp = (x * x) - (y * y) + xcord;
    y = (2 * x * y) + ycord;
    x = xtemp;
    iteration = iteration + 1;
}
and have linked to a jsfiddle with the whole page here: http://jsfiddle.net/728dn2m0/
I have a few global variables which I could take into the main loop but that would result in additional calculations for every single pixel. I read on another SO question that an alternative to the 1x1 rectangle was to use image data but the performance difference was disputed. Another possibility would be rewriting the while statement as some other conditional loop but I'm not convinced that would give me the gains I'm looking for.
I'd consider myself a newbie so I'm happy to hear comments on any aspect of the code but what I'm really after is something which will massively increase performance. I suspect I'm being unreasonable in my expectations of what javascript in the browser can manage but I hope that I'm missing something significant and there are big gains to be found.
Thanks in advance,
Andrew
Using setInterval as an external loop construct slows the calculation down. You set it to 5 ms for a single pixel, yet the entire Mandelbrot map can be calculated within 1 second. So each call to your function draws one pixel very quickly and then waits for roughly 99.99% of those 5 milliseconds.
I replaced
if (m<=(width+1))
with
while (m<=(width+1))
and removed the setInterval.
This way the entire calculation is done in one step, without refresh to the screen and without using setInterval as an external loop construct. I forked your script and modified it: http://jsfiddle.net/karatedog/2o4gjrv2/6/
In the script I modified the bailout condition from (x*x < 2 && y*y < 2) to (x < 2 && y < 2), just as I suggested in a previous comment, which revealed some hidden pixels; check the difference!
I had indeed missed something significant. The timer I had used in order to prevent the page hanging while the set was plotted was limiting the code to one pixel every 5 milliseconds, in other words 200 pixels per second. Not clever!
I am now plotting one line at a time and it runs a lot better. Not yet real time but it is a lot quicker.
Thanks for the ideas. I will look into the escape condition to see what it should be.
A new jsfiddle with the revised code is here: http://jsfiddle.net/da1qyh9y/
and the for statement I've added is here:
function main_loop() {
if (m<=(width+1)) {
var n;
for (n=0; n<height; n=n+1) {
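Following up on the ImageData idea mentioned in the question, here is a rough sketch of that approach: compute the whole frame into one buffer and push it to the canvas with a single putImageData call instead of drawing 1x1 rectangles. The canvas id, the coordinate mapping and the greyscale colouring are placeholders, not taken from the fiddle; the original colour scheme would slot in where shade is computed. For reference, the usual escape test is x*x + y*y <= 4 (i.e. |z| <= 2), which is what the sketch uses.

var canvas = document.getElementById('canvas'); // assumed canvas element id
var ctx = canvas.getContext('2d');
var width = canvas.width, height = canvas.height;
var img = ctx.createImageData(width, height);
var max_iteration = 250;

for (var py = 0; py < height; py++) {
    for (var px = 0; px < width; px++) {
        var xcord = (px / width) * 3.5 - 2.5;   // map the pixel to the complex plane
        var ycord = (py / height) * 2 - 1;
        var x = 0, y = 0, iteration = 0;
        while (x * x + y * y <= 4 && iteration < max_iteration) { // standard escape test
            var xtemp = x * x - y * y + xcord;
            y = 2 * x * y + ycord;
            x = xtemp;
            iteration++;
        }
        var o = (py * width + px) * 4;          // 4 bytes per pixel: RGBA
        var shade = 255 - Math.round(255 * iteration / max_iteration);
        img.data[o] = img.data[o + 1] = img.data[o + 2] = shade;
        img.data[o + 3] = 255;                  // fully opaque
    }
}
ctx.putImageData(img, 0, 0);                    // one draw call for the whole frame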

Calculating bytes per second (the smooth way)

I am looking for a way to calculate the bytes per second transmitted by a repeatedly invoked function (below). I do not want to simply divide the total transmitted bytes by the overall elapsed time: after running for a few minutes that average becomes too sluggish to show rapid speed changes.
The preset (invoked approximately every 50ms):
function uploadProgress(loaded, total){
    var bps = ?;
    $('#elem').html(bps+' bytes per second');
};
How do I obtain the average bytes per second for (only) the last n seconds, and is that a good idea?
What other practices are there for calculating a non-flickering but precise bps value?
Your first idea is not bad; it's called a moving average, and provided you call your update function at regular intervals you only need to keep a queue (a FIFO buffer) of a constant length:
var WINDOW_SIZE = 10;
var queue = [];

function updateQueue(newValue) {
    // FIFO with a fixed length
    queue.push(newValue);
    if (queue.length > WINDOW_SIZE)
        queue.shift();
}

function getAverageValue() {
    // if the queue has less than 10 items, decide if you want to calculate
    // the average anyway, or return an invalid value to indicate "insufficient data"
    if (queue.length < WINDOW_SIZE) {
        // you probably don't want to throw if the queue is empty,
        // but at least consider returning an 'invalid' value in order to
        // display something like "calculating..."
        return null;
    }
    // calculate the average value
    var sum = 0;
    for (var i = 0; i < queue.length; i++) {
        sum += queue[i];
    }
    return sum / queue.length;
}
// calculate the speed and call `updateQueue` every second or so
var updateTimer = setInterval(..., 1000);
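For completeness, a sketch of how these pieces could plug into the uploadProgress callback from the question; the currentLoaded bookkeeping and the once-per-second sampling are assumptions, not part of the snippet above:

var lastLoaded = 0;
var currentLoaded = 0;

function uploadProgress(loaded, total) {
    currentLoaded = loaded;  // just remember the latest progress value
    var avg = getAverageValue();
    $('#elem').html(avg === null ? 'calculating...' : Math.round(avg) + ' bytes per second');
}

// sample once per second: bytes transferred in the last second = bytes per second
var updateTimer = setInterval(function () {
    updateQueue(currentLoaded - lastLoaded);
    lastLoaded = currentLoaded;
}, 1000);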
An even simpler way to avoid sudden changes in the calculated speed would be to use a low-pass filter. A simple discrete approximation of a PT1 (first-order low-pass) filter would be:
y[k] = y[k-1] + (u[k] - y[k-1]) / T
where u[k] is the input (or actual value) at sample k, y[k] is the output (or filtered value) at sample k, and T is the time constant (larger T means that y will follow u more slowly).
That would be translated to something like:
var speed = null;
var TIME_CONSTANT = 5;
function updateSpeed(newValue) {
    if (speed === null) {
        speed = newValue;
    } else {
        speed += (newValue - speed) / TIME_CONSTANT;
    }
}

function getFilteredValue() {
    return speed;
}
Both solutions will give similar results (for your purpose at least), and the latter one seems a bit simpler (and needs less memory).
Also, I wouldn't update the displayed value that fast. At a refresh rate of 50 ms, filtering will only turn "flickering" into "swinging". I don't think anybody expects an upload speed display to refresh more often than once per second (or even once every couple of seconds).
A simple low-pass filter is ok for just making sure that inaccuracies don't build up. But if you think a little deeper about measuring transfer rates, you get into maintaining separate integer counters to do it right.
If you want an exact count, note that there is a simplification available. First, when dealing with rates, the arithmetic mean is the wrong thing to apply to bytes/sec (sec/byte is more correct, which leads to the harmonic mean). The other problem is that the samples should be weighted. Because of this, simply keeping int64 running totals of bytes versus observation time actually does the right thing, as stupid as it sounds. Normally you would weight each sample equally, by w = 1/n. Look at the neat simplification that happens when you weight by time instead, i.e. w_i = t_i:
(w0*(b0/t0) + w1*(b1/t1) + w2*(b2/t2) + ...) / (w0 + w1 + w2 + ...)
= (b0 + b1 + b2 + ...) / (t0 + t1 + t2 + ...)
= totalBytes / totalTime
So just keep separate (int64!) totals of bytes and milliseconds. And only divide them as a rendering step to visualize the rate. Note that if you instead used the harmonic mean (which you should do for rates - because you are really averaging sec/byte), then that's the same as the time it takes to send a byte, weighted by how many bytes there were.
1 / ((w0*(t0/b0) + w1*(t1/b1) + ...) / (w0 + w1 + ...)), with w_i = b_i,
= (b0 + b1 + ...) / (t0 + t1 + ...) = totalBytes/totalTime
So the arithmetic mean weighted by time is the same as the harmonic mean weighted by bytes. Just keep a running total of bytes in one variable and of time in another. There is a deeper reason that this simplistic count is actually the right one. Think of integrals. Assuming no concurrency, this is literally just total bytes transferred divided by total observation time. Assume that the computer takes 1 step per millisecond and only sends whole bytes, and that you observe the entire time interval without gaps; then there are no approximations.
Notice that if you think about an integral with (msec, byte/msec) as the units for (x,y), the area under the curve is the bytes sent during the observation period (exactly). You will get the same answer no matter how the observations got cut up. (ie: reported 2x as often).
So by simply reporting (size_byte, start_ms,stop_ms), you just accumulate (stop_ms-start_ms) into time and accumulate size_byte per observation. If you want to partition these rates to graph in minute buckets, then just maintain the (byte,ms) pair per minute (of observation).
Note that these are rates experienced for individual transfers. The individual transfers may experience 1MB/s (user point of view). These are the rates that you guarantee to end users.
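A sketch of the two-running-totals idea in JavaScript (plain numbers are exact for integers up to 2^53, so a dedicated int64 type isn't needed here):

var totalBytes = 0;
var totalMs = 0;

// log one observation per completed transfer
function recordTransfer(sizeBytes, startMs, stopMs) {
    totalBytes += sizeBytes;
    totalMs += stopMs - startMs;
}

// divide only as a rendering step
function currentRate() {
    return totalMs > 0 ? (totalBytes / totalMs) * 1000 : 0; // bytes per second
}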
You can stop here for simple cases. But doing this counting right allows for more interesting things.
From the server point of view, load matters. Presume that there were two users experiencing 1MB/s simultaneously. For that statistic, you need to subtract out the double-counted time. If 2 users do 1MB/s simultaneously for 1s, then that's 2MB/s for 1s. You need to effectively reconstruct time overlaps, and subtract out the double-counting of time periods. Explicitly logging at the end of a transfer (size_byte,start_ms,stop_ms) allows you to measure interesting things:
The number of outstanding transfers at any given time (queue length distribution - ie: "am I going to run out of memory?")
The throughput as a function of the number of transfers (throughput for a queue length - ie: "does the website collapse when our ad shows on TV?")
Utilization - ie: "are we overpaying our cloud provider?"
In this situation, all of the accumulated counters are exact integer arithmetic. Subtracting out the double-counted time suddenly gets you into more complicated algorithms (when computed efficiently and in real-time).
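As a sketch of what subtracting out the double-counted time could look like in an offline/batch form (not the efficient real-time version alluded to above): merge the logged (size_byte, start_ms, stop_ms) intervals so overlapping periods are counted once, then divide total bytes by the merged observation time. The record field names are assumptions:

function aggregate(records) {
    var totalBytes = 0;
    records.forEach(function (r) { totalBytes += r.sizeBytes; });

    // sort intervals by start time, then merge overlapping ones
    var intervals = records
        .map(function (r) { return [r.startMs, r.stopMs]; })
        .sort(function (a, b) { return a[0] - b[0]; });

    var observedMs = 0;
    var curStart = null, curStop = null;
    intervals.forEach(function (iv) {
        if (curStop === null || iv[0] > curStop) {   // disjoint from the current run
            if (curStop !== null) observedMs += curStop - curStart;
            curStart = iv[0];
            curStop = iv[1];
        } else {                                     // overlapping: extend the current run
            curStop = Math.max(curStop, iv[1]);
        }
    });
    if (curStop !== null) observedMs += curStop - curStart;

    // e.g. two 1 MB/s transfers over the same second now report 2 MB over 1 s
    return { bytes: totalBytes, ms: observedMs };
}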
Use a decaying average, then you won't have to keep the old values around.
UPDATE: Basically it's a formula like this:
average = new_value * factor + average_old * (1 - factor);
You don't have to keep any old values around; they're all in there at smaller and smaller proportions. You have to choose a value for factor between 0 and 1 that gives the mix of new and old values you want, depending on how often the average gets updated.
This is how the Unix "load average" is calculated I believe.
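Applied to the uploadProgress callback from the question, a decaying average could look like the sketch below; FACTOR, the delta bookkeeping and the rounding are illustrative choices, not prescribed by the answer:

var FACTOR = 0.1;     // weight of the newest sample; smaller = smoother, slower to react
var avgBps = null;
var lastLoaded = 0;
var lastTime = null;

function uploadProgress(loaded, total) {
    var now = Date.now();
    if (lastTime !== null && now > lastTime) {
        var bps = (loaded - lastLoaded) / ((now - lastTime) / 1000);
        avgBps = (avgBps === null) ? bps : bps * FACTOR + avgBps * (1 - FACTOR);
        $('#elem').html(Math.round(avgBps) + ' bytes per second');
    }
    lastLoaded = loaded;
    lastTime = now;
}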
