WebAudioAPI: Dividing two AudioNodes' outputs

Context: I am trying to implement an ADSR envelope in the Web Audio API where Attack, Decay, Sustain and Release are all AudioParams, and 'note on' and 'note off' are represented by input values of 1 and 0 respectively. I'm using four DynamicsCompressor nodes and a lot of gain manipulation to achieve this, since compressors are technically attack-release envelopes.
Everything is going fine, except that I need to divide the level of one signal by the level of another signal to get the amount of gain that achieves the level offset, combined with the DynamicsCompressor that produces the decay gradient.
If it helps, here's the formula:
decayOffsetY = (1 - sustainLevel) * (attackDur + decayDur) / decayDur
Note that sustainLevel, attackDur and decayDur are all AudioParams.
Addition, subtraction and multiplication are all rather easily achievable using some ConstantSourceNodes and GainNodes, but how do I go about division?
Note: I've thought about using another DynamicsCompressorNode to perform the division, since compressors technically divide the signal by a ratio, but this ratio is on a logarithmic scale, and I end up needing a compression ratio of
log(decayDur) / 5
to achieve the value of 1 / decayDur, which would then be connected to another GainNode. But is it even possible to perform a Math.log using just AudioNodes?

Use a WaveShaperNode to compute either the inverse or the log. You'll have to figure out how to handle the case where the input is near zero and also how long to make the wave shaper curve array, but this should work.
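For illustration, here is a minimal sketch of that approach, computing an approximate 1/x curve. The curve length, the epsilon clamp near zero, and the assumption that the input signal stays within [-1, 1] are all illustrative choices, as is the helper name makeInverseShaper.

const audioCtx = new AudioContext();

function makeInverseShaper(ctx, curveLength = 4096, epsilon = 0.01) {
  const curve = new Float32Array(curveLength);
  for (let i = 0; i < curveLength; i++) {
    // Map the curve index to the WaveShaper input range [-1, 1]
    const x = (i / (curveLength - 1)) * 2 - 1;
    // Clamp |x| away from zero so 1/x stays finite near the origin
    const safe = Math.sign(x || 1) * Math.max(Math.abs(x), epsilon);
    curve[i] = 1 / safe;
  }
  const shaper = ctx.createWaveShaper();
  shaper.curve = curve;
  return shaper;
}

// Hypothetical usage: route the decayDur signal through the shaper to get
// roughly 1/decayDur, then feed that into the gain AudioParam of a GainNode
// so that multiplying performs the division.
const inverse = makeInverseShaper(audioCtx);

A log curve could be built the same way, filling curve[i] with Math.log over the positive half of the input range.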

Related

Frame rate independent exponential interpolation

I am creating a game.
I usually start with quick-and-dirty get-the-job-done code, and improve it later.
That is precisely what I am trying to do here.
Let's present the problem: the player controls an aircraft. When pressing a key, the aircraft rotates (a "pitch") and goes down (or up, depending on the key pressed).
When the player releases the key, the aircraft goes back to horizontal.
The maximum angle for the aircraft should be reached quickly, it's not a simulation. Think of Starfox.
The quick and dirty approach is as follows: each frame, I check if a relevant key is pressed.
The output of this step is a variable that contains 0, -1 or +1, according to whether the aircraft should be horizontal, going down or going up.
Now, I do the following formula:
pitch = pitch*0.9 + maxAngle * turn * 0.1
turn is the variable obtained containing 0, -1 or +1.
This produces a nice effect. It's an interpolation, but not a linear one, which makes it more "fun" to watch.
Here is the problem: this formula doesn't contain the length of a frame. It's frame rate dependent.
I tried to extract the general formula. First, 0.9 and 0.1 are obviously what they are because they sum to 1 (I didn't try having one of them less than 0 or bigger than 2).
If I put that
a1=a0*x + b*(1-x)
I arrive at the general formula of
an = a0*(x^n) + b*(1-x)*(x^(n-1) + x^(n-2) + x^(n-3) + ... + 1 )
By assuming a certain frame length I could inject it into that, maybe, somehow, but I still don't know how to properly turn that into a function (especially this sum of x^n, I don't know how to factorize it).
The second problem is that, currently, the user can press and release the key as they like. By that, I mean that the b in the equation can change (and the resolution I attempted does not take that into account).
So, this is the problem. In short, how to reverse engineer my own solution to inject the frame length in it so it becomes frame rate independent?
Please note that you can assume that the frame rate in the current program IS stable.
I am quite sure I am not the first one encountering such a problem. If you don't have a solution, hints are also welcome, of course.
Thanks
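For what it's worth, the geometric sum in the question factorizes as x^(n-1) + x^(n-2) + ... + 1 = (1 - x^n)/(1 - x), which collapses the general formula to an = a0*(x^n) + b*(1 - x^n). That suggests one common fix (a sketch of the idea, not a definitive answer): raise the per-frame retention factor to the power of the elapsed time, so that n short frames behave exactly like one long one. The 1/60 s reference frame length is an assumption.

function updatePitch(pitch, turn, maxAngle, delta) {
  // 0.9 was the retention per 1/60 s frame; scaling the exponent by the
  // actual frame length keeps the decay rate frame rate independent
  const retention = Math.pow(0.9, delta * 60); // delta in seconds
  return pitch * retention + maxAngle * turn * (1 - retention);
}

Because each frame restarts the decay from the current state, a changing target (the b in the equation, i.e. key presses and releases) is handled automatically.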

Higher precision in JavaScript

I am trying to calculate with higher precision numbers in JavaScript to be able to zoom in more on the Mandelbrot set.
(after a certain amount of zooming the results get "pixelated", because of the low precision)
I have looked at this question, so I tried using a library such as BigNumber but it was unusably slow.
I have been trying to figure this out for a while and I think the only way is to use a slow library.
Is there a faster library?
Is there any other way to calculate with higher precision numbers?
Is there any other way to be able to zoom in more on the Mandelbrot set?
Probably unnecessary to add this code, but this is the function I use to check if a point is in the Mandelbrot set.
function mandelbrot(x, y, it) {
  var z = [0, 0]
  var c1 = [x, y]
  for (var i = 0; i < it; i++) {
    z = [z[0]*z[0] - z[1]*z[1] + c1[0], 2*z[0]*z[1] + c1[1]]
    // Escape test: the comma operator in the original only evaluated the
    // second comparison; a logical OR is what the test needs
    if (Math.abs(z[0]) > 2 || Math.abs(z[1]) > 2) {
      break
    }
  }
  return i
}
The key is not so much the raw numeric precision of JavaScript numbers (though that of course has its effects), but the way the basic Mandelbrot "escape" test works, specifically the threshold iteration counts. To compute whether a point in the complex plane is in or out of the set, you iterate the formula z → z² + c for the point over and over again until the point obviously diverges (the formula "escapes" from the origin of the complex plane by a lot) or fails to do so before the iteration threshold is reached.
The iteration threshold when rendering a view of the set that covers most of it around the origin of the complex plane (about 2 units in all directions from the origin) can be as low as 500 to get a pretty good rendering of the whole set at a reasonable magnification on a modern computer. As you zoom in, however, the iteration threshold needs to increase in inverse proportion to the size of the "window" onto the complex plane. If it doesn't, then the "escape" test doesn't work with sufficient accuracy to delineate fine details at higher magnifications.
The formula I used in my JavaScript implementation is
maxIterations = 400 * Math.log(1/dz0)
where dz0 is (arbitrarily) the width of the window onto the plane. As one zooms into a view of the set (well, the "edge" of the set, where things are interesting), dz0 gets pretty small so the iteration threshold gets up into the thousands.
The iteration count, of course, for points that do "escape" (that is, points that are not part of the Mandelbrot set) can be used as a sort of "distance" measurement. A point that escapes within a few iterations is clearly not "close to" the set, while a point that escapes only after 2000 iterations is much closer. That distance quality can be used in various ways in visualizations, either to provide a color value (common) or possibly a z-axis value if the set is being rendered as a 3D view (with the set as a sort of "mesa" in three dimensions and the borders being a vertical "cliff" off the sides).
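For instance (my own sketch, not code from the answer), the escape iteration can be mapped directly to a hue for coloring:

function colorFor(iter, maxIterations) {
  if (iter >= maxIterations) return 'black'; // never escaped: treat as inside the set
  const hue = Math.round(360 * iter / maxIterations); // slower escape, different hue
  return 'hsl(' + hue + ', 100%, 50%)';
}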

Finding the largest possible scaling factor for squares that are box packed given the dimensions of the box

My problem is as follows:
I have a set of values V1, V2, ... Vn
I have a function f(V) = g * V, where g is a scaling factor, that maps these values to another set of values A1, A2, ... An. These values correspond to areas of squares.
I also have W (width) and H (height) variables. And finally, I have a box packing algorithm (This one to be specific), that takes the W and H variables, and the A1 ... An areas, and tries to find a way to pack the areas into a box of size W x H. If the areas A are not too big, and the box packing algorithm successfully manages to fit the areas into the box, it will return the positions of the squares (the left-top coordinates, but this is not relevant). If the areas are too big, it will return nothing.
Given the values V and the dimensions of the box W and H, what is the highest value of g (the scaling factor in f(V)) that still fits the box?
I have tried to create an algorithm that initially sets g to (W x H) / sum(V1, V2, ... Vn). If the values V were distributed in such a way that they fit exactly into the box without leaving any space in between, this would give me a solution instantly. In reality this never happens, but it seems like a good starting point. With this initial value of g I calculate the areas A, which are then fed to the box packing algorithm. The box packing algorithm will fail (return nothing), after which I decrease g by 0.01 (a completely arbitrary value established by trial and error) and try again. This cycle repeats until the box packing algorithm succeeds.
While this solution works, I feel like there should be faster and more accurate ways to determine g. For example, depending on how big W and H are compared to the sum of the values V, it seems there should be a better step value than 0.01: if the difference is extremely big the algorithm takes really long, while if the difference is extremely small it is fast but very crude. In addition, I feel like there should be a more efficient method than just brute-forcing it like this. Any ideas?
You're on the right track with your method, I think!
Rather than decreasing the value by a fixed amount, try to approach the target value with ever smaller steps.
You already have a good starting value. First you could decrease g by something like 0.1 * g and check if your packing succeeds; if not, continue to decrease with the same step, else, if it packs correctly, increase g with a smaller step (like step = step / 2).
At some point your steps will become very small and you can stop searching (defining "small" is up to you).
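A minimal sketch of that search, assuming a hypothetical packing routine pack(areas, W, H) that returns something truthy on success:

function findScaleAdaptive(values, W, H, pack, minStep = 1e-4) {
  const total = values.reduce((s, v) => s + v, 0);
  let g = (W * H) / total; // optimistic start: a perfect, gapless packing
  let step = 0.1 * g;
  let best = 0;            // largest g known to pack so far
  while (step > minStep) {
    if (pack(values.map(v => v * g), W, H)) {
      best = Math.max(best, g);
      step /= 2;           // it fits: probe upward with a finer step
      g += step;
    } else {
      g -= step;           // it fails: keep stepping down
    }
  }
  return best;
}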
You can use a binary search approach. If you have two values of g such that for one (g1) a packing exists and for the other (g2) it doesn't, try the halfway value h = (g1 + g2) / 2. If a packing exists for h, you have a new, larger feasible g, and you can repeat the check with h and g2. If a packing doesn't exist, repeat the check with g1 and h.
With each step, the interval containing the maximum feasible value is halved, so you can get the final result as precise as you like with more iterations.
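In code, the bisection could look roughly like this, using the same hypothetical pack(areas, W, H) routine. Note that (W x H) / sum(V) is a safe upper bound, since the scaled areas must at least fit by total area:

function findMaxScale(values, W, H, pack, tolerance = 1e-4) {
  const total = values.reduce((s, v) => s + v, 0);
  let lo = 0;                 // g = 0 trivially packs
  let hi = (W * H) / total;   // g cannot exceed the total-area bound
  while (hi - lo > tolerance) {
    const mid = (lo + hi) / 2;
    if (pack(values.map(v => v * mid), W, H)) {
      lo = mid;               // mid fits: the answer is at least mid
    } else {
      hi = mid;               // mid fails: the answer is below mid
    }
  }
  return lo;
}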

ClojureScript: Get Average RGBA Color from ImageData

I'm trying to write a function in ClojureScript, which returns the average RGBA value of a given ImageData Object.
In JavaScript, implementations for this problem with a "for" or "while" loop are very fast. Within milliseconds they return the average of e.g. 4000 x 4000 sized ImageData objects.
In ClojureScript my solutions are not nearly as fast; sometimes the browser gives up, yielding "stack trace errors".
However, the fastest one I have written so far is this one:
(extend-type js/Uint8ClampedArray
  ISeqable
  (-seq [array] (array-seq array 0)))

(defn average-color [img-data]
  (let [data (.-data img-data)
        nr   (/ (count data) 4)]
    (->> (reduce (fn [m v]
                   (-> (update-in m [:color (rem (:pos m) 4)] (partial + v))
                       (update-in [:pos] inc)))
                 {:color [0 0 0 0] :pos 0}
                 data)
         (:color)
         (map #(/ % nr)))))
Well, unfortunately it works only up to images around 500 x 500, which is not acceptable.
I'm asking myself what exactly the problem is here. What do I have to pay attention to in order to write a properly fast average-color function in ClojureScript?
The problem is that the function you have defined is recursive. I am not strong in ClojureScript, so I will not tell you how to fix the problem in code, but in concept.
You need to break the problem into smaller recursive units. So reduce a pixel row to get a result for each row, then reduce the row results. This will prevent the recursion from overflowing the call stack in JavaScript.
As for the speed, that will depend on how accurate you want the result to be. I would take a random sample, selecting 10% of the pixels randomly, and use the average of that result.
You could also just use the hardware and scale the image by half, rendering it with smoothing on, then halve again until you have one pixel, and use the value of that pixel. That will give you a pixel-value average and is very fast, but it only computes a value mean, not a photon mean.
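In plain JavaScript, that halving trick could look something like this (a sketch; it assumes a drawable source such as an image or canvas rather than a raw ImageData object):

function averageColorByDownscale(source, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.imageSmoothingEnabled = true; // smoothing does the averaging for us
  ctx.drawImage(source, 0, 0, width, height);
  let w = width, h = height;
  while (w > 1 || h > 1) {
    const nw = Math.max(1, w >> 1);
    const nh = Math.max(1, h >> 1);
    // Redraw the canvas onto itself at half size; each pass averages
    // neighbouring pixels together
    ctx.drawImage(canvas, 0, 0, w, h, 0, 0, nw, nh);
    w = nw;
    h = nh;
  }
  return ctx.getImageData(0, 0, 1, 1).data; // Uint8ClampedArray [r, g, b, a]
}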
I will point out that the values of the RGB channels are logarithmic and represent (approximately) the square root of the photon count captured (for a photo) or emitted (by the screen). Thus the mean of the pixel values is much lower than the mean of the photon count. To get the correct mean you must take the mean of the square of each channel and then take the square root of that mean to bring it back to the logarithmic scale that is used for RGB values.
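A sketch of that correction for one channel (my illustration of the answer's square-root approximation of gamma, not exact colorimetry):

function photonMean(data, channel) {
  // data is an ImageData.data array (RGBA interleaved); channel is 0..3
  let sum = 0;
  const pixels = data.length / 4;
  for (let i = channel; i < data.length; i += 4) {
    sum += data[i] * data[i]; // square: approximately linear photon count
  }
  return Math.sqrt(sum / pixels); // back to the stored (gamma) scale
}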

Multiplayer Game - Client Interpolation Calculation?

I am creating a multiplayer game using Socket.IO in JavaScript. The game works perfectly at the moment, aside from the client interpolation. Right now, when I get a packet from the server, I simply set the client's position to the position sent by the server. Here is what I have tried to do:
getServerInfo(packet) {
  var otherPlayer = players[packet.id]; // GET PLAYER
  otherPlayer.setTarget(packet.x, packet.y); // SET TARGET TO MOVE TO
  ...
}
So I set the player's target position. And then in the player's update method I simply did this:
var update = function(delta) {
  if (x != target.x || y != target.y) {
    var direction = Math.atan2((target.y - y), (target.x - x));
    x += (delta * speed) * Math.cos(direction);
    y += (delta * speed) * Math.sin(direction);
    var dist = Math.sqrt((x - target.x) * (x - target.x)
                       + (y - target.y) * (y - target.y));
    if (dist < threshold) {
      x = target.x;
      y = target.y;
    }
  }
}
This basically moves the player in the direction of the target at a fixed speed. The issue is that the player arrives at the target either before or after the next information arrives from the server.
Edit: I have just read Gabriel Gambetta's article on this subject, and he mentions this:
Say you receive position data at t = 1000. You already had received data at t = 900, so you know where the player was at t = 900 and t = 1000. So, from t = 1000 and t = 1100, you show what the other player did from t = 900 to t = 1000. This way you’re always showing the user actual movement data, except you’re showing it 100 ms “late”.
This again assumes that the data is exactly 100 ms late. If your ping varies a lot, this will not work.
Would you be able to provide some pseudocode so I can get an idea of how to do this?
I have found this question online here, but none of the answers provide an example of how to do it, only suggestions.
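For what it's worth, a minimal sketch of the buffering scheme quoted above (my own illustration, with hypothetical names): keep a short history of timestamped server states per player and render each remote player about 100 ms in the past, interpolating between the two buffered states that bracket the render time.

const INTERP_DELAY = 100; // ms of intentional display lag

function getServerInfo(packet) {
  // Instead of snapping to the packet, remember it with a timestamp
  players[packet.id].buffer.push({ t: Date.now(), x: packet.x, y: packet.y });
}

function interpolate(player) {
  const renderTime = Date.now() - INTERP_DELAY;
  const buf = player.buffer;
  // Drop states older than the pair bracketing renderTime
  while (buf.length >= 2 && buf[1].t <= renderTime) buf.shift();
  if (buf.length >= 2 && buf[0].t <= renderTime) {
    const a = buf[0], b = buf[1];
    const k = (renderTime - a.t) / (b.t - a.t); // 0..1 between the two states
    player.x = a.x + (b.x - a.x) * k;
    player.y = a.y + (b.y - a.y) * k;
  }
}

Because the render time trails the newest packet, variable ping only hurts when the gap exceeds the delay; a larger INTERP_DELAY trades more lag for smoother motion.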
I'm completely fresh to multiplayer game client/server architecture and algorithms; however, in reading this question the first thing that came to mind was implementing second-order (or higher) Kalman filters on the relevant variables for each player.
Specifically, the Kalman prediction steps, which are much better than simple dead reckoning. Also the fact that Kalman prediction and update steps work somewhat as weighted or optimal interpolators. And furthermore, the dynamics of players could be encoded directly rather than playing around with the abstracted parameterizations used in other methods.
Meanwhile, a quick search led me to this:
An improvement of dead reckoning algorithm using kalman filter for minimizing network traffic of 3d on-line games
The abstract:
Online 3D games require efficient and fast user interaction support over network, and the networking support is usually implemented using network game engine. The network game engine should minimize the network delay and mitigate the network traffic congestion. To minimize the network traffic between game users, a client-based prediction (dead reckoning algorithm) is used. Each game entity uses the algorithm to estimates its own movement (also other entities' movement), and when the estimation error is over threshold, the entity sends the UPDATE (including position, velocity, etc) packet to other entities. As the estimation accuracy is increased, each entity can minimize the transmission of the UPDATE packet. To improve the prediction accuracy of dead reckoning algorithm, we propose the Kalman filter based dead reckoning approach. To show real demonstration, we use a popular network game (BZFlag), and improve the game optimized dead reckoning algorithm using Kalman filter. We improve the prediction accuracy and reduce the network traffic by 12 percents.
It might seem wordy, and like a whole new problem to learn what it's all about... and discrete state-space for that matter.
Briefly, I'd say a Kalman filter is a filter that takes into account uncertainty, which is what you've got here. It normally works on measurement uncertainty at a known sample rate, but it could be re-tooled to work with uncertainty in measurement period/phase.
The idea being that in lieu of a proper measurement, you'd simply update with the Kalman predictions. The tactic is similar to target tracking applications.
I was recommended them on stackexchange myself - took about a week to figure out how they were relevant but I've since implemented them successfully in vision processing work.
(...it's making me want to experiment with your problem now !)
As I wanted more direct control over the filter, I copied someone else's roll-your-own implementation of a Kalman filter in MATLAB into OpenCV (in C++):
void Marker::kalmanPredict() {
  // Prediction for state vector
  Xx = A * Xx;
  Xy = A * Xy;
  // and covariance
  Px = A * Px * A.t() + Q;
  Py = A * Py * A.t() + Q;
}

void Marker::kalmanUpdate(Point2d& measuredPosition) {
  // Kalman gain K:
  Mat tempINVx = Mat(2, 2, CV_64F);
  Mat tempINVy = Mat(2, 2, CV_64F);
  tempINVx = C * Px * C.t() + R;
  tempINVy = C * Py * C.t() + R;
  Kx = Px * C.t() * tempINVx.inv(DECOMP_CHOLESKY);
  Ky = Py * C.t() * tempINVy.inv(DECOMP_CHOLESKY);
  // Estimate of velocity; units are pixels.s^-1
  Point2d measuredVelocity = Point2d(measuredPosition.x - Xx.at<double>(0),
                                     measuredPosition.y - Xy.at<double>(0));
  Mat zx = (Mat_<double>(2,1) << measuredPosition.x, measuredVelocity.x);
  Mat zy = (Mat_<double>(2,1) << measuredPosition.y, measuredVelocity.y);
  // Kalman correction based on position measurement and velocity estimate:
  Xx = Xx + Kx * (zx - C * Xx);
  Xy = Xy + Ky * (zy - C * Xy);
  // and covariance again
  Px = Px - Kx * C * Px;
  Py = Py - Ky * C * Py;
}
I don't expect you to be able to use this directly, but if anyone comes across it and understands what 'A', 'P', 'Q' and 'C' are in state-space (hint hint, state-space understanding is a prerequisite here), they'll likely see how to connect the dots.
(Both MATLAB and OpenCV have their own Kalman filter implementations included, by the way...)
This question is being left open with a request for more detail, so I’ll try to fill in the gaps of Patrick Klug’s answer. He suggested, reasonably, that you transmit both the current position and the current velocity at each time point.
Since two position and two velocity measurements give a system of four equations, it enables us to solve for a system of four unknowns, namely a cubic spline (which has four coefficients, a, b, c and d). In order for this spline to be smooth, the first and second derivatives (velocity and acceleration) should be equal at the endpoints. There are two standard, equivalent ways of calculating this: Hermite splines (https://en.wikipedia.org/wiki/Cubic_Hermite_spline) and Bézier splines (http://mathfaculty.fullerton.edu/mathews/n2003/BezierCurveMod.html).

For a two-dimensional problem such as this, I suggested separating variables and finding splines for both x and y based on the tangent data in the updates, which is called a clamped piecewise cubic Hermite spline. This has several advantages over the splines in the link above, such as cardinal splines, which do not take advantage of that information. The locations and velocities at the control points will match, you can interpolate up to the last update rather than the one before, and you can apply this method just as easily to polar coordinates if the game world is inherently polar like Space wars. (Another approach sometimes used for periodic data is to perform an FFT and do trigonometric interpolation in the frequency domain, but that doesn't sound applicable here.)
What originally appeared here was a derivation of the Hermite spline using linear algebra in a somewhat unusual way that (unless I made a mistake entering it) would have worked. However, the comments convinced me it would be more helpful to give the standard names for what I was talking about. If you are interested in the mathematical details of how and why this works, this is a better explanation: https://math.stackexchange.com/questions/62360/natural-cubic-splines-vs-piecewise-hermite-splines
A better algorithm than the one I gave is to represent the sample points and first derivatives as a tridiagonal matrix that, multiplied by a column vector of coefficients, produces the boundary conditions, and solve for the coefficients. An alternative is to add control points to a Bézier curve where the tangent lines at the sampled points intersect and on the tangent lines at the endpoints. Both methods produce the same, unique, smooth cubic spline.
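To make the Hermite option concrete, here is a small sketch (my illustration, not code from the answer) evaluating the standard cubic Hermite basis for one coordinate; call it once for x and once for y, scaling the velocities by the time between updates:

function hermite(p0, v0, p1, v1, u) {
  // p0, p1: positions at the two updates; v0, v1: velocities scaled by
  // the update interval; u in [0, 1] is the normalized time between them
  const u2 = u * u, u3 = u2 * u;
  return (2 * u3 - 3 * u2 + 1) * p0
       + (u3 - 2 * u2 + u) * v0
       + (-2 * u3 + 3 * u2) * p1
       + (u3 - u2) * v1;
}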
One situation you might be able to avoid if you were choosing the points rather than receiving updates is if you get a bad sample of points. You can’t, for example, intersect parallel tangent lines, or tell what happened if it’s back in the same place with a nonzero first derivative. You’d never choose those points for a piecewise spline, but you might get them if an object made a swerve between updates.
If my computer weren’t broken right now, here is where I would put fancy graphics like the ones I posted to TeX.SX. Unfortunately, I have to bow out of those for now.
Is this better than straight linear interpolation? Definitely: linear interpolation will give you straight-line paths, quadratic splines won't be smooth, and higher-order polynomials will likely be overfitted. Cubic splines are the standard way to solve that problem.
Are they better for extrapolation, where you try to predict where a game object will go? Possibly not: this way, you’re assuming that a player who’s accelerating will keep accelerating, rather than that they will immediately stop accelerating, and that could put you much further off. However, the time between updates should be short, so you shouldn’t get too far off.
Finally, you might make things a lot easier on yourself by programming in a bit more conservation of momentum. If there’s a limit to how quickly objects can turn, accelerate or decelerate, their paths will not be able to diverge as much from where you predict based on their last positions and velocities.
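A sketch of that constraint (the limit and names are assumptions): clamp how much the velocity may change per second before applying it.

function clampAccel(vx, vy, newVx, newVy, delta, maxAccel = 200) {
  const dvx = newVx - vx, dvy = newVy - vy;
  const dv = Math.hypot(dvx, dvy);
  const limit = maxAccel * delta; // largest allowed change this frame
  if (dv <= limit) return [newVx, newVy];
  const k = limit / dv; // scale the change down to the allowed magnitude
  return [vx + dvx * k, vy + dvy * k];
}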
Depending on your game you might want to prefer smooth player movement over super-precise location. If so, then I'd suggest aiming for 'eventual consistency'. I think your idea of keeping 'real' and 'simulated' data points is a good one. Just make sure that from time to time you force the simulated to converge with the real, otherwise the gap will get too big.
Regarding your concern about different movement speed I'd suggest you include the current velocity and direction of the player in addition to the current position in your packet. This will enable you to more smoothly predict where the player would be based on your own framerate/update timing.
Essentially you would calculate the current simulated velocity and direction, taking into account the last simulated location and velocity as well as the last known location and velocity (putting more emphasis on the second), and then simulate the new position based on that.
If the gap between simulated and known gets too big, just put more emphasis on the known location and the otherPlayer will catch up quicker.
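A rough sketch of that blend (all constants are assumptions to tune, and the player fields are hypothetical):

function updateSimulated(player, delta) {
  // Dead-reckon the last known server state forward using its velocity
  player.known.x += player.known.vx * delta;
  player.known.y += player.known.vy * delta;
  // Pull the displayed (simulated) position toward the dead-reckoned one
  const gap = Math.hypot(player.known.x - player.x, player.known.y - player.y);
  const blend = gap > 50 ? 0.3 : 0.1; // bigger gap: converge more aggressively
  player.x += (player.known.x - player.x) * blend;
  player.y += (player.known.y - player.y) * blend;
}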
