How to interpolate progress based on discrete steps? - javascript

In a web app, a task takes multiple sequential ajax calls to complete; each takes 12–18 seconds.
I want to present the user with a progress indicator that makes frequent small steps instead.
My first bet was to assume a linear progress = k * time function, self-adjusting k on each new response. A fiddle emulated the response times with random values in a ±4s range.
This approach appears wrong: in cases with a relatively long response after a series of fast ones, progress takes a negative step to catch up with the real pace.
Feels like the function should go in "waves": faster at the beginning, slowing down closer to the end, stretching out if a checkpoint is delayed.
What is the best practice for such a task?

I'm going to focus on the mathematical part of your request:
a function should go in "waves": faster at the beginning, slowing down closer to the end, stretching out if a checkpoint is delayed
Translation: a monotonic function (i.e. no backward steps) with a positive initial gradient and asymptotic behaviour for large x.
Consider atan(x): it has a gradient of 1 for small x, and asymptotically approaches π/2 for large x. We can scale it so that the expected end happens when the chunk is at some fraction of the available length - that is, if you expect it to take 4s, and it takes 4s, it can jump the remainder. If it takes longer, this remainder is the bit we will asymptotically eat up. Thus:
function chunkProgressFraction(expectedEndT, currentT, expectationFraction) {
    // validate
    if (!expectedEndT) { return 0; }
    // defaults
    if (!expectationFraction) { expectationFraction = 0.85; }
    // y = k atan(mx)
    // to reach 1.0 at large x:
    // 1.0 = k . atan(+lots) = k . pi/2
    var k = 2.0 / Math.PI;
    // scale the function so that the expectationFraction result happens
    // at expectedEndT, i.e.
    // expectationFraction = k * atan(expectedEndT * m)
    // expectedEndT * m = tan(expectationFraction / k)
    var m = Math.tan(expectationFraction / k) / expectedEndT;
    return k * Math.atan(m * currentT);
}
So if we want to get to 100 pixels, and we expect it to take 4s, and we want 20% slack:
progressPixelsThisChunk = 100.0 *
    chunkProgressFraction(4000.0, thisChunkTimeInMilliseconds, 0.8);
By all means scale the expectedEndT using the time taken by the previous chunk.
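For illustration, here is a rough sketch (my own, not part of the answer above) of one way to drive a bar with that function, re-estimating expectedEndT from the previous chunk's measured duration; the element id, chunk count and poll interval are made up:

var expectedEndT = 15000;            // initial guess: 15s per step
var chunkStart = Date.now();
var chunkIndex = 0, totalChunks = 5; // hypothetical number of ajax steps

var timer = setInterval(function () {
    var within = chunkProgressFraction(expectedEndT, Date.now() - chunkStart, 0.85);
    var overall = (chunkIndex + within) / totalChunks;
    document.getElementById('bar').style.width = (overall * 100) + '%';
}, 100);

// call this from each ajax step's success handler
function onChunkDone() {
    expectedEndT = Date.now() - chunkStart; // adopt the last chunk's real duration
    chunkStart = Date.now();
    chunkIndex++;
    if (chunkIndex === totalChunks) clearInterval(timer);
}

Because the within-chunk fraction only resets to zero when the chunk counter advances, the overall value never moves backwards.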

Related

Algorithm to get progressively closer to a number without ever reaching it

I need to have a number that gets updated progressively closer to the max (or min) value, without ever reaching it. I would also like each update to have a smooth transition, like a curve or something (I have never studied math or computer science, so I don't know the correct terminology).
Here's what I got so far but it obviously doesn't work:
let numberToUpdate = 5 // this number will vary after each update
const numberToUpdateMin = 1
const numberToUpdateMax = 10
let someValueA = 100 // this number will change randomly between updates
let someValueB = 50 // this number will change randomly between updates
function updateNumber() {
    let differenceBetweenValues = someValueA - someValueB
    if (differenceBetweenValues > 0) {
        // make numberToUpdate closer to numberToUpdateMax (without ever reaching it)
        numberToUpdate += (numberToUpdateMax - numberToUpdate) * (someValueA / someValueB) // this doesn't work at all
    }
    else if (differenceBetweenValues < 0) {
        // make numberToUpdate closer to numberToUpdateMin (without ever reaching it)
        numberToUpdate -= (numberToUpdate - numberToUpdateMin) * (someValueB / someValueA) // this doesn't work at all
    }
}
Any help would be greatly appreciated; I have no clue what I'm doing or what terms I should be googling to arrive at a suitable result.
Edit: It doesn't have to work with infinitely small/big numbers, it could have a cap.
Here is a simple example of an asymptotic function:
function generateAsymptotic(step) {
    return step / (step + 1);
}
As step gets bigger, generateAsymptotic will get closer and closer to 1, but never be 1.
Strictly speaking, on a computer this is impossible: the number of distinct representable values is finite, so eventually you will reach the forbidden number.
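If you want to stay within the asker's [numberToUpdateMin, numberToUpdateMax] range, a common variation (a sketch of mine, not from the answer) is to move a fixed fraction of the remaining distance on every update; the bound is approached but, floating-point limits aside, never reached:

function moveToward(value, target, fraction) {
    // fraction should be in (0, 1); e.g. 0.25 moves a quarter of the remaining distance
    return value + (target - value) * fraction;
}

let numberToUpdate = 5;
numberToUpdate = moveToward(numberToUpdate, 10, 0.25); // creep toward numberToUpdateMax
numberToUpdate = moveToward(numberToUpdate, 1, 0.25);  // creep toward numberToUpdateMin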

How to implement a deterministic/tick-based game loop?

First of all, I'm trying to make a "simple" 3D game using Three.js, with plans to add a network framework later to make it multiplayer. Since I plan on doing the network part in the future, I searched a little and discovered that most "action" games use a "tick"-based game loop so the clients and the server can stay in sync, interpolating between ticks to keep things smooth.
I already have some "working" code for the tick (handle input, update, draw) loop. What I want to know is whether my implementation is right, and how this "deterministic" loop should work. Supposing that my implementation is working, when I increase the "tick rate" the game gets faster (the update function runs more times); is this right?
this.loops = 0;
this.tick_rate = 20;
this.skip_ticks = 1000 / this.tick_rate;
this.max_frame_skip = 10;
this.next_game_tick = performance.now();
This first part of the code is inside the constructor of the Game class
Game.prototype.run = function () {
    this.handle_input();
    this.loops = 0;
    while (performance.now() > this.next_game_tick && this.loops < this.max_frame_skip) {
        this.up_stats.update();
        this.update();
        this.next_game_tick += this.skip_ticks;
        this.loops++;
    }
    this.draw();
    // monitor performance
    this.stats.update();
    // next update
    requestAnimationFrame(this.run.bind(this));
};
Full code at: https://github.com/derezzedex/first_three_js/blob/master/js/game/main.js
This looks pretty reasonable to me and I've used similar patterns in the past.
Structuring synchronized simulations is a huge topic, but what you have is a good starting point and might be enough depending on the complexity of your game.
edit: A bit more detail...
Yes it works the same, except that this.dt is always the same. i.e. 1000 / your desired FPS for the game loop.
If you want to do the smoothing/interpolation in between frames, you'll have to record the previous state of your object as well. You probably won't want to use Euler rotations, since Eulers don't interpolate well: an angle of 360 degrees flips back to 0, so the interpolation logic gets weird.
Instead, you can record the state before and after the update and interpolate the .quaternion, which for small changes in rotation works fine with plain linear interpolation. If the changes are too big, you can use quaternion.slerp(), which can handle interpolating over big distances.
So you've got lastTickTime, currentTime, and nextTickTime. Each frame, to interpolate, you do something like:
// nextTickTime-lastTickTime = your framerate delta, so for 60fps = 1000/60 = 16.666666666666668
var alpha = (currentTime - lastTickTime) / (nextTickTime - lastTickTime);
var recip = 1.0 - alpha;
object.position.x = (object.lastPosition.x * recip) + (object.nextPosition.x * alpha);
object.position.y = (object.lastPosition.y * recip) + (object.nextPosition.y * alpha);
object.position.z = (object.lastPosition.z * recip) + (object.nextPosition.z * alpha);
object.scale.x = (object.lastScale.x * recip) + (object.nextScale.x * alpha);
object.scale.y = (object.lastScale.y * recip) + (object.nextScale.y * alpha);
object.scale.z = (object.lastScale.z * recip) + (object.nextScale.z * alpha);
object.quaternion.x = (object.lastQuaternion.x * recip) + (object.nextQuaternion.x * alpha);
object.quaternion.y = (object.lastQuaternion.y * recip) + (object.nextQuaternion.y * alpha);
object.quaternion.z = (object.lastQuaternion.z * recip) + (object.nextQuaternion.z * alpha);
object.quaternion.w = (object.lastQuaternion.w * recip) + (object.nextQuaternion.w * alpha);
In a proper three.js app you probably shouldn't store lastPosition and nextPosition directly on the object; put them in object.userData instead. But either way, it will probably still work.
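If the rotation change per tick turns out to be large, the component-wise quaternion lerp above can misbehave. A hedged alternative using three.js's built-in helpers (a sketch, assuming lastQuaternion/nextQuaternion are THREE.Quaternion copies and lastPosition/nextPosition, lastScale/nextScale are THREE.Vector3 copies of the tick states):

// slerp handles big rotation deltas; lerpVectors does the linear blend for vectors
object.quaternion.copy(object.lastQuaternion).slerp(object.nextQuaternion, alpha);
object.position.lerpVectors(object.lastPosition, object.nextPosition, alpha);
object.scale.lerpVectors(object.lastScale, object.nextScale, alpha);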

How to calculate bezier curve control points that avoid objects?

Specifically, I'm working in canvas with javascript.
Basically, I have objects which have boundaries that I want to avoid, but still surround with a bezier curve. However, I'm not even sure where to begin to write an algorithm that would move control points to avoid colliding.
The problem is shown in the image below; even if you're not familiar with music notation, the problem should still be fairly clear. The points of the curve are the red dots.
Also, I have access to the bounding boxes of each note, which includes the stem.
So naturally, collisions must be detected between the bounding boxes and the curves (some direction here would be good, but I've been browsing and see that there's a decent amount of info on this). But what happens after collisions have been detected? What would have to happen to calculate control points locations to make something that looked more like:
Bezier approach
Initially the question is a broad one - perhaps even too broad for SO, as there are many different scenarios that need to be taken into consideration to make one solution that fits them all. It's a whole project in itself. Therefore I will present a basis for a solution which you can build upon - it's not a complete solution (but close to one). I added some suggestions for additions at the end.
The basic steps for this solutions are:
Group the notes into two groups, a left and a right part.
The control points are then based on the largest angle (and corresponding distance) from the first end point to any of the other notes in the left group, and from the last end point to any of the notes in the right group.
The resulting angles from the two groups are then doubled (max 90°) and used as a basis to calculate the control points (basically a point rotation). The distance can be further trimmed using a tension value.
The angle, doubling, distance, tension and padding offset will allow for fine-tuning to get the best over-all result. There might be special cases which need additional conditional checks but that is out of scope here to cover (it won't be a full key-ready solution but provide a good basis to work further upon).
A couple of snapshots from the process:
The main code in the example is split into two sections: two loops that parse each half to find the maximum angle as well as the distance. These could be combined into a single loop with a second iterator going from right to middle in addition to the one going from left to middle, but for simplicity, and to make it easier to follow what goes on, I split them into two loops (and introduced a bug in the second half - just be aware; I'll leave it as an exercise):
var dist1 = 0, // final distance and angles for the control points
    dist2 = 0,
    a1 = 0,
    a2 = 0;

// get min angle from the half first points
for (i = 2; i < len * 0.5 - 2; i += 2) {
    var dx = notes[i]   - notes[0],      // diff between end point and
        dy = notes[i+1] - notes[1],      // current point.
        dist = Math.sqrt(dx*dx + dy*dy), // get distance
        a = Math.atan2(dy, dx);          // get angle
    if (a < a1) {                        // if less (neg) then update finals
        a1 = a;
        dist1 = dist;
    }
}
if (a1 < -0.5 * Math.PI) a1 = -0.5 * Math.PI; // limit to 90 deg.
And the same with the second half, but here we flip the angles around so they are easier to handle, by comparing the current point with the end point instead of the end point with the current point. After the loop is done we flip it 180°:
// get min angle from the half last points
for (i = len * 0.5; i < len - 2; i += 2) {
    var dx = notes[len-2] - notes[i],
        dy = notes[len-1] - notes[i+1],
        dist = Math.sqrt(dx*dx + dy*dy),
        a = Math.atan2(dy, dx);
    if (a > a2) {
        a2 = a;
        if (dist2 < dist) dist2 = dist; // bug here*
    }
}
a2 -= Math.PI;                                // flip 180 deg.
if (a2 > -0.5 * Math.PI) a2 = -0.5 * Math.PI; // limit to 90 deg.
(The bug is that the longest distance is used even if a point at a shorter distance has the greater angle - I'll let it be for now as this is meant as an example. It can be fixed by reversing the iteration.)
The relationship I found works well is the angle difference between the floor and the point, times two:
var da1 = Math.abs(a1); // get angle diff
var da2 = a2 < 0 ? Math.PI + a2 : Math.abs(a2);
a1 -= da1*2; // double the diff
a2 += da2*2;
Now we can simply calculate the control points and use a tension value to fine tune the result:
var t = 0.8, // tension
    cp1x = notes[0]     + dist1 * t * Math.cos(a1),
    cp1y = notes[1]     + dist1 * t * Math.sin(a1),
    cp2x = notes[len-2] + dist2 * t * Math.cos(a2),
    cp2y = notes[len-1] + dist2 * t * Math.sin(a2);
And voila:
ctx.moveTo(notes[0], notes[1]);
ctx.bezierCurveTo(cp1x, cp1y, cp2x, cp2y, notes[len-2], notes[len-1]);
ctx.stroke();
Adding tapering effect
To make the curve more visually pleasing, a taper can be added simply by doing the following instead:
Instead of stroking the path after the first Bezier curve has been added, adjust the control points with a slight angle offset. Then continue the path by adding another Bezier curve going from right to left, and finally fill it (fill() will close the path implicitly):
// first path from left to right
ctx.beginPath();
ctx.moveTo(notes[0], notes[1]); // start point
ctx.bezierCurveTo(cp1x, cp1y, cp2x, cp2y, notes[len-2], notes[len-1]);
// taper going from right to left
var taper = 0.15; // angle offset
cp1x = notes[0] + dist1*t*Math.cos(a1-taper);
cp1y = notes[1] + dist1*t*Math.sin(a1-taper);
cp2x = notes[len-2] + dist2*t*Math.cos(a2+taper);
cp2y = notes[len-1] + dist2*t*Math.sin(a2+taper);
// note the order of the control points
ctx.bezierCurveTo(cp2x, cp2y, cp1x, cp1y, notes[0], notes[1]);
ctx.fill(); // close and fill
Final result (with pseudo notes - tension = 0.7, padding = 10)
FIDDLE
Suggested improvements:
If both groups' distances are large, or angles are steep, they could probably be used as a sum to reduce tension (distance) or increase it (angle).
A dominance/area factor could affect the distances, with dominance indicating where the tallest parts are shifted (do they lie more on the left or the right side) and adjusting the tension for each side accordingly. This could potentially be enough on its own but needs to be tested.
The taper angle offset should also have a relationship with the sum of the distances. In some cases the lines cross, which does not look so good. Tapering could be replaced with a manual approach that computes the Bezier points directly and adds a distance between the original points and the points of the returning path depending on array position.
Hope this helps!
Cardinal spline and filtering approach
If you're open to using a non-Bezier approach, the following can give an approximate curve above the note stems.
This solutions consists of 4 steps:
Collect top of notes/stems
Filter away "dips" in the path
Filter away points on same slope
Generate a cardinal spline curve
This is a prototype solution so I have not tested it against every possible combination there is. But it should give you a good starting point and basis to continue on.
The first step is easy: collect points representing the top of each note stem. For the demo I use the following point collection, which roughly represents the image you have in the post. They are arranged in x, y order:
var notes = [60,40, 100,35, 140,30, 180,25, 220,45, 260,25, 300,25, 340,45];
which would be represented like this:
Then I created a simple multi-pass algorithm that filters away dips and points on the same slope. The steps in the algorithm are as follows:
While anotherPass is true it will continue, up to the maximum number of passes set initially
The current point is copied to another array as long as the skip flag isn't set
Then it will compare the current point with the next to see if there is a down-slope
If there is, it will compare the next point with the following one and see if there is an up-slope
If there is, the middle point is considered a dip and the skip flag is set so it won't be copied
The next filter will compare the slope between the current and next point with the slope between the next point and the following one
If they are the same, the skip flag is set
If it had to set a skip flag, it will also set the anotherPass flag
If no points were filtered (or the max number of passes is reached) the loop will end
The core function is as follows:
while (anotherPass && max) {
    skip = anotherPass = false;

    for (i = 0; i < notes.length - 2; i += 2) {
        if (!skip) curve.push(notes[i], notes[i+1]);
        skip = false;

        // if this to next points goes downward
        // AND the next and the following up we have a dip
        if (notes[i+3] >= notes[i+1] && notes[i+5] <= notes[i+3]) {
            skip = anotherPass = true;
        }
        // if slope from this to next point =
        // slope from next and following skip
        else if (notes[i+2] - notes[i]   === notes[i+4] - notes[i+2] &&
                 notes[i+3] - notes[i+1] === notes[i+5] - notes[i+3]) {
            skip = anotherPass = true;
        }
    }

    curve.push(notes[notes.length-2], notes[notes.length-1]);
    max--;

    if (anotherPass && max) {
        notes = curve;
        curve = [];
    }
}
The result of the first pass would be as follows, after offsetting all the points on the y-axis - notice that the dipping note is ignored:
After running through all necessary passes the final point array would be represented as this:
The only step left is to smooth the curve. For this I have used my own implementation of a cardinal spline (licensed under MIT and can be found here) which takes an array of x,y points and smooths it, adding interpolated points based on a tension value.
It won't generate a perfect curve but the result from this would be:
FIDDLE
There are ways to improve the visual result which I haven't addressed, but I will leave it to you to do that if you feel it's needed. Among those could be:
Find the center of the points and increase the offset depending on the angle so the curve arcs more at the top
The end points of the smoothed curve sometimes curl slightly - this can be fixed by adding an extra point right below the first point, and likewise below the last. This will force the curve to have a better-looking start/end.
You could draw a double curve to make a taper effect (thin beginning/end, thicker in the middle) by applying the first point in this list to another array, but with a very small offset at the top of the arc, and then rendering it on top.
The algorithm was created ad hoc for this answer, so it's obviously not thoroughly tested. There could be special cases and combinations throwing it off, but I think it's a good start.
Known weaknesses:
It assumes the distance between each stem is the same for the slope detection. This needs to be replaced with a factor-based comparison in case the distance varies within a group.
It compares slopes using exact equality, which may fail if floating-point values are used. Compare with an epsilon/tolerance instead (see the sketch below).
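A minimal sketch of such a tolerance-based comparison (the epsilon value is arbitrary; comparing cross-multiplied deltas also avoids dividing by zero and sidesteps the equal-spacing assumption):

var EPSILON = 0.0001;

// true if dy1/dx1 and dy2/dx2 describe (nearly) the same slope,
// written without division so vertical segments don't blow up
function sameSlope(dx1, dy1, dx2, dy2) {
    return Math.abs(dy1 * dx2 - dy2 * dx1) < EPSILON;
}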

Calculating bytes per second (the smooth way)

I am looking for a solution to calculate the transmitted bytes per second from a repeatedly invoked function (below). I do not want to simply divide the transmitted bytes by the total elapsed time, because of its inaccuracy: after running for a few minutes it becomes unable to show rapid speed changes.
The preset (invoked approximately every 50ms):
function uploadProgress(loaded, total){
    var bps = ?;
    $('#elem').html(bps + ' bytes per second');
};
How to obtain the average bytes per second for (only) the last n seconds and is it a good idea?
What other practices for calculating a non-flickering but precise bps value are available?
Your first idea is not bad; it's called a moving average, and provided you call your update function at regular intervals, you only need to keep a queue (a FIFO buffer) of a constant length:
var WINDOW_SIZE = 10;
var queue = [];

function updateQueue(newValue) {
    // fifo with a fixed length
    queue.push(newValue);
    if (queue.length > WINDOW_SIZE)
        queue.shift();
}

function getAverageValue() {
    // if the queue has less than 10 items, decide if you want to calculate
    // the average anyway, or return an invalid value to indicate "insufficient data"
    if (queue.length < WINDOW_SIZE) {
        // you probably don't want to throw if the queue is empty,
        // but at least consider returning an 'invalid' value in order to
        // display something like "calculating..."
        return null;
    }
    // calculate the average value
    var sum = 0;
    for (var i = 0; i < queue.length; i++) {
        sum += queue[i];
    }
    return sum / queue.length;
}

// calculate the speed and call `updateQueue` every second or so
var updateTimer = setInterval(..., 1000);
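For illustration, a rough sketch (my own, not part of the answer) of how the per-second samples could be produced from the 50ms uploadProgress callbacks and fed into the queue:

var lastLoaded = 0;
var currentLoaded = 0;

function uploadProgress(loaded, total) {
    currentLoaded = loaded;                           // just remember the latest byte count
}

var updateTimer = setInterval(function () {
    var bytesThisSecond = currentLoaded - lastLoaded; // delta over the last second
    lastLoaded = currentLoaded;
    updateQueue(bytesThisSecond);                     // one sample per second == bytes/sec
    var bps = getAverageValue();
    $('#elem').html(bps === null ? 'calculating...' : Math.round(bps) + ' bytes per second');
}, 1000);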
An even simpler way to avoid sudden changes in the calculated speed would be to use a low-pass filter. A simple discrete approximation of the PT1 filter would be:
y[k] = y[k-1] + (u[k] - y[k-1]) / T
Where u[k] is the input (or actual value) at sample k, y[k] is the output (or filtered value) at sample k, and T is the time constant (larger T means that y will follow u more slowly).
That would be translated to something like:
var speed = null;
var TIME_CONSTANT = 5;

function updateSpeed(newValue) {
    if (speed === null) {
        speed = newValue;
    } else {
        speed += (newValue - speed) / TIME_CONSTANT;
    }
}

function getFilteredValue() {
    return speed;
}
Both solutions will give similar results (for your purpose at least), and the latter one seems a bit simpler (and needs less memory).
Also, I wouldn't update the value that fast. Filtering will only turn "flickering" into "swinging" at a refresh rate of 50ms. I don't think anybody expects to have an upload speed shown at a refresh rate of more than once per second (or even a couple of seconds).
A simple low-pass filter is OK for just making sure that inaccuracies don't build up. But if you think a little deeper about measuring transfer rates, you end up maintaining separate integer counters to do it right.
If you want it to be an exact count, note that there is a simplification available. First, when dealing with rates, the arithmetic mean is the wrong thing to apply to bytes/sec (sec/byte is more correct, which leads to the harmonic mean). The other problem is that the samples should be weighted. Because of this, simply keeping int64 running totals of bytes versus observation time actually does the right thing - as stupid as it sounds. Normally you would weight each sample by 1/n. Look at the neat simplification that happens when you weight by time:
(w0*b0/t0 + w1*b1/t1 + w2*b2/t2 + ...) / (w0 + w1 + w2 + ...)
    = (b0 + b1 + b2 + ...) / (t0 + t1 + t2 + ...)    [with weights w_i = t_i]
    = totalBytes / totalTime
So just keep separate (int64!) totals of bytes and milliseconds. And only divide them as a rendering step to visualize the rate. Note that if you instead used the harmonic mean (which you should do for rates - because you are really averaging sec/byte), then that's the same as the time it takes to send a byte, weighted by how many bytes there were.
1 / (( w0*t0/b0 + w1*t1/b1 + ... ) / (w0 + w1 + w2 + ...))
    = totalBytes / totalTime    [with weights w_i = b_i]
So the arithmetic mean weighted by time is the same as the harmonic mean weighted by bytes. Just keep a running total of bytes in one var, and time in another. There is a deeper reason that this simplistic count is actually the right one. Think of integrals. Assuming no concurrency, this is literally just the total bytes transferred divided by the total observation time. Assume that the computer actually takes 1 step per millisecond, and only sends whole bytes - and that you observe the entire time interval without gaps. There are no approximations.
Notice that if you think about an integral with (msec, byte/msec) as the units for (x,y), the area under the curve is the bytes sent during the observation period (exactly). You will get the same answer no matter how the observations got cut up. (ie: reported 2x as often).
So by simply reporting (size_byte, start_ms, stop_ms), you just accumulate (stop_ms - start_ms) into the time total and size_byte into the byte total per observation. If you want to partition these rates to graph in minute buckets, then just maintain a (byte, ms) pair per minute (of observation).
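A minimal sketch of those two running totals (the variable and function names are mine):

var totalBytes = 0; // keep these as wide integers (int64 on the server side)
var totalMs = 0;

function recordTransfer(size_byte, start_ms, stop_ms) {
    totalBytes += size_byte;
    totalMs += (stop_ms - start_ms);
}

// only divide as a rendering step
function averageBytesPerSecond() {
    return totalMs > 0 ? (totalBytes / totalMs) * 1000 : 0;
}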
Note that these are rates experienced for individual transfers. The individual transfers may experience 1MB/s (user point of view). These are the rates that you guarantee to end users.
You can leave it here for simple cases. But doing this counting right allows for more interesting things.
From the server point of view, load matters. Presume that there were two users experiencing 1MB/s simultaneously. For that statistic, you need to subtract out the double-counted time. If 2 users do 1MB/s simultaneously for 1s, then that's 2MB/s for 1s. You need to effectively reconstruct time overlaps, and subtract out the double-counting of time periods. Explicitly logging at the end of a transfer (size_byte,start_ms,stop_ms) allows you to measure interesting things:
The number of outstanding transfers at any given time (queue length distribution - ie: "am I going to run out of memory?")
The throughput as a function of the number of transfers (throughput for a queue length - ie: "does the website collapse when our ad shows on TV?")
Utilization - ie: "are we overpaying our cloud provider?"
In this situation, all of the accumulated counters are exact integer arithmetic. Subtracting out the double-counted time suddenly gets you into more complicated algorithms (when computed efficiently and in real-time).
Use a decaying average, then you won't have to keep the old values around.
UPDATE: Basically it's a formula like this:
average = new_value * factor + average_old * (1 - factor);
where factor is between 0 and 1. You don't have to keep any old values around; they're all in there at smaller and smaller proportions. You have to choose a value for factor that is appropriate to the mix of new and old values you want, and to how often the average gets updated.
This is how the Unix "load average" is calculated I believe.
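For reference, a small sketch of that decaying average in JavaScript, with factor expressed as a fraction between 0 and 1 (the names are mine):

var average = null;
var FACTOR = 0.2; // larger = reacts faster, smaller = smoother

function updateAverage(newValue) {
    average = (average === null)
        ? newValue                                    // seed with the first sample
        : newValue * FACTOR + average * (1 - FACTOR); // old samples decay geometrically
    return average;
}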

How do I increase the amplitude of this elastic animation effect?

I'm using Raphael.js to animate an SVG circle's radius on hover. I like the stock elastic effect that the library offers, but I'd like to increase the amplitude - i.e., make the circle grow and shrink with a lot more gusto when it's hovered - not with extra speed, but to grow larger and shrink smaller when the effect runs.
I copied the elastic function and renamed it super_elastic, and have been tinkering with it here:
http://jsfiddle.net/ryWH3/
I have no idea how the function works, so I've just been tinkering with its numerical values to see what happens. So far I haven't found anything that appears to do what I want. Can anyone recommend any modifications to the function (or a different function altogether) that might do what I'm looking for?
Thanks!
UPDATE:
Thanks for the replies! Sorry, I may not have explained this very well. I'm guessing that the statement "grow larger and shrink smaller" was especially misleading.
I'm aware that the r property affects the final radius of the circle after the animation runs; what I'm trying to do, though, is make the elastic animation "bounce" with greater amplitude. That is, while the animation will still start and end at the same r values that I've set for the circle, I'd like the elastic transition to be a lot more dramatic - expand and contract the circle much more aggressively during the transition before arriving at the final r values. To do this, I'm assuming that I need to modify the equation used in the elastic function to make the effect more dramatic.
Hopefully that makes sense - it's kind of hard to explain without showing an example, but if I had an example, I wouldn't have needed to post this question. ;-)
OK, based on your clarification, here's a new answer. To expand the effect of the easing (amplification), you need to multiply the easing result by a multiplier, like this:
return 6 * Math.pow(2, -10 * n) * Math.sin((n - .075) * (2 * Math.PI) / .3) + 1;
But when you do that, you find that the large part of the amplification goes too fast: the small part goes slow and the large part goes fast. So the pace when it's larger needs to be changed. My guess (which seems to work) was to change Math.sin() to Math.cos(), because that shifts the phase, as you can see here: http://jsfiddle.net/jfriend00/fuaNp/39/.
return 6 * Math.pow(2, -10 * n) * Math.cos((n - .075) * (2 * Math.PI) / .3) + 1;
Other things to understand about this easing function. This part:
(2 * Math.PI) / .3
determines how many bounce cycles there are. The larger that multiplier is, the more bounces it will have (but the faster the bounces will go). The smaller that multiplier is, the fewer bounces it will have (and the slower those bounces will go).
This part:
Math.pow(2, -10 * n)
determines how fast the bounce decays, since this value gets smaller as n gets larger, which damps the other multipliers as n gets large. So:
Math.pow(2, -5 * n)
makes it decay more slowly (you see more of the larger swings at the beginning and less of the smaller swings at the end).
To make the circle go larger when you hover over it, you change the hovered radius which I've upped to r: 100 here. To make the circle smaller, you change the initial size and the unhovered size from 25 to something smaller like this:
var paper = Raphael(0, 0, 300, 300),
    circle = paper.circle(150, 150, 10); // <== change initial radius here to make it smaller

circle.attr({
    'stroke': '#f00',
    'stroke-width': 4,
    'fill': '#fff'
});

circle.hover(function() {
    this.animate({ r: 100 }, 1000, 'super_elastic'); // <== change enlarged size here
}, function() {
    this.animate({ r: 10 }, 1000, 'super_elastic');  // <== change small size here
});

// no changes made to the easing function
Raphael.easing_formulas.super_elastic = function(n) {
    if (n == !!n) {
        return n;
    }
    return Math.pow(2, -10 * n) * Math.sin((n - .075) * (2 * Math.PI) / .3) + 1;
};
You can see it here: http://jsfiddle.net/jfriend00/fuaNp/.
The super_elastic() function is the easing function which controls what pace the animation moves at different parts of the cycle. Easing doesn't control the overall amplitude. That's done with the parameters to the animate() method.
If you wanted to slow down the animation, you would increase the duration of the animation (make the two 1000 arguments to animate() larger numbers). If you wanted to speed up the animation, you make those two numbers smaller. These are milliseconds for the animation; smaller numbers mean the animation runs in fewer milliseconds (which means faster).
