Generate a non-sinusoidal tone - javascript

Is it possible to generate a tone based on a specific formula? I've tried googling it, but all I could find was about normal sine waves, such as this other SO question. So I was wondering: is it possible to generate tones based on other kinds of formulas?
On that other SO question I did find a link to this demo page, but it seems like that page just downloads sound files and uses them to alter the pitch of the sounds.
I've already tried combining sine waves by using multiple oscillators, based on this answer, which works just as expected:
window.ctx = new webkitAudioContext();
window.osc = [];

function startTones() {
    osc[0] = ctx.createOscillator();
    osc[1] = ctx.createOscillator();
    osc[0].frequency.value = 120;
    osc[1].frequency.value = 240;
    osc[0].connect(ctx.destination);
    osc[1].connect(ctx.destination);
    osc[0].start(0);
    osc[1].start(0);
}

function stopTones() {
    osc[0].stop(0);
    osc[1].stop(0);
}

<button onclick="startTones();">Start</button>
<button onclick="stopTones();">Stop</button>
So now I was wondering, is it possible to make a wave that's not based on adding sine waves like this, such as a sawtooth wave (x - floor(x)), or a multiplication of sine waves (sin(PI*440*x)*sin(PI*220*x))?
PS: I'm okay with not supporting some browsers - as long as it still works in at least one (although more is better).

All (periodic) waves can be expressed as the addition of sine waves, and WebAudio has a function for synthesising a waveform based on a harmonic series, context.createPeriodicWave(real, imag).
The successive elements of the supplied real and imag input arrays specify the relative amplitude and phase of each harmonic.
Should you want to create a wave procedurally, then in theory you could populate an array with the desired waveform, take the FFT of that, and then pass the resulting FFT components to the above function.
(WebAudio happens to support the sawtooth waveform natively, BTW)
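As a rough sketch (using the standard unprefixed AudioContext rather than the webkit-prefixed one above), you could build a custom waveform from a short harmonic series and hand it to a single oscillator:

const ctx = new (window.AudioContext || window.webkitAudioContext)();

// Element 0 is the DC offset; elements 1..n are the harmonics.
// real[] holds the cosine terms, imag[] holds the sine terms.
const real = new Float32Array([0, 0, 0, 0]);
const imag = new Float32Array([0, 1, 0.5, 0.25]); // fundamental plus two overtones

const wave = ctx.createPeriodicWave(real, imag);

const osc = ctx.createOscillator();
osc.setPeriodicWave(wave);
osc.frequency.value = 120;
osc.connect(ctx.destination);
osc.start(0);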

Optimizing smooth tween between svg paths in JavaScript / React Native

I'm currently porting an application to React Native that captures user input as a stroke and animates it to the correct position to match an SVG (pictures below). On the web, I use a combination of multiple smoothing libraries & PixiJS to achieve perfectly smooth transitions with no artifacts.
With React Native & reanimated I'm limited to functions I can write by hand to handle the interpolation between two paths. Currently what I'm doing is:
1. Convert the target SVG to a fixed number N of points
2. Smooth the captured input and convert it to a series of N points
3. Loop over each coordinate and interpolate the value between those two points (linear interpolation)
4. Run the resulting points array through a Catmull-Rom function
5. Render the resulting SVG curve
Steps 1 & 2 I can cache prior to the animation, but steps 3, 4 & 5 need to happen on each render.
Unfortunately, using this method I'm limited to around N = 300 as the maximum number of points before dropping frames. I'm also still seeing some artifacts at the end of an animation that I don't know how to fix.
This is sufficient, but given that on the web I can animate tens of thousands of points without dropping frames, I feel like I am missing a key performance optimization here. For example, is there a way to combine steps 3 & 4? Are there more performant algorithms than Catmull-Rom?
Is there a better way to achieve a smooth transition between two vector paths using just pure JavaScript (or dropping into Swift if that is possible)?
Is there something more I can do to remove the artifacts pictured in the last photo? I'm not sure what these are called technically so it's hard for me to research - the catmull-rom spline removed most of them but I still see a few at the tail ends of the animation.
Animation end/start state:
Animation middle state:
Animation start/end state (with artifact):
You might want to have a look at flubber.js
Also, why not ditch the Catmull-Rom for simple linear sections (probably detailed enough with 1000+ points)?
If neither helps, or you want to get as fast as possible, you might want to leverage the GPU's power for embarrassingly parallel workloads like the interpolation between two N-sized arrays.
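A sketch of that suggestion (the names are mine): a single per-point linear blend between the two cached N-point arrays, which covers step 3 and drops the Catmull-Rom pass entirely:

// from/to are arrays of {x, y} of the same length, t is the animation progress in [0, 1]
function lerpPoints(from, to, t) {
    var out = new Array(from.length);
    for (var i = 0; i < from.length; i++) {
        out[i] = {
            x: from[i].x + (to[i].x - from[i].x) * t,
            y: from[i].y + (to[i].y - from[i].y) * t
        };
    }
    return out;
}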
Edit:
Also consider using the Skia renderer, which already leverages the GPU and has support that fits your use case perfectly:
import { Canvas, Path, Skia, interpolatePath } from "@shopify/react-native-skia";

// obviously you need to modify this to use your arrays
const path1 = Skia.Path.Make();
path1.moveTo(0, 0);
path1.lineTo(100, 0);

const path2 = Skia.Path.Make();
path2.moveTo(0, 0);
path2.lineTo(0, 100);

// you have to do this dynamically (maybe using Skia animations)
let animationProgress = 0.5;

// magic already implemented for you
let path = interpolatePath(animationProgress, [0, 1], [path1, path2]);

const PathDemo = () => {
    return (
        <Canvas style={{ flex: 1 }}>
            <Path path={path} color="lightblue" />
        </Canvas>
    );
};

Isolate ultrasounds with Web Audio API

Is there any algorithm I can use with the Web Audio API to isolate ultrasounds?
I've tried a 'highpass' filter, but I need to isolate sounds that are ONLY ultrasonic (the horizontal lines) and ignore noises that also have energy at lower, audible frequencies (the vertical lines).
var highpass = audioContext.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 17500;
highpass.gain.value = -1;
Here's a test with a nice snippet from http://rtoy.github.io/webaudio-hacks/more/filter-design/filter-design.html showing how the spectrum of audible noise interferes with the filtered ultrasound (there are two canvases, one without the filter and one with it): https://jsfiddle.net/6gnyhvrk/3
Without filters:
With the 17,500 Hz highpass filter:
A highpass filter is what you want, but there are a few things to consider. First, the audio context has to have a high enough sample rate. Second, you have to decide what "ultrasound" means. Many people can hear frequencies above 15 kHz (as in your example). A single highpass filter may not have a sharp enough cutoff for you so you'll need to have a more complicated filter setup.
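For example, a rough sketch (my own helper, assuming the browser and device allow a 96 kHz context) that cascades several biquads for a steeper roll-off:

var audioContext = new AudioContext({ sampleRate: 96000 });

function makeSteepHighpass(cutoffHz, stages) {
    // Chain several identical highpass biquads; each stage adds roughly 12 dB/octave.
    var filters = [];
    for (var i = 0; i < stages; i++) {
        var hp = audioContext.createBiquadFilter();
        hp.type = 'highpass';
        hp.frequency.value = cutoffHz;
        if (i > 0) filters[i - 1].connect(hp);
        filters.push(hp);
    }
    return { input: filters[0], output: filters[filters.length - 1] };
}

// Usage: source -> steep highpass -> analyser/destination
// var chain = makeSteepHighpass(18000, 4);
// sourceNode.connect(chain.input);
// chain.output.connect(audioContext.destination);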

Multiplayer Game - Client Interpolation Calculation?

I am creating a multiplayer game using Socket.IO in JavaScript. The game works perfectly at the moment aside from the client interpolation. Right now, when I get a packet from the server, I simply set the client's position to the position sent by the server. Here is what I have tried to do:
getServerInfo(packet) {
    var otherPlayer = players[packet.id]; // GET PLAYER
    otherPlayer.setTarget(packet.x, packet.y); // SET TARGET TO MOVE TO
    ...
}
So I set the player's target position, and then in the player's update method I simply did this:
var update = function(delta) {
    if (x != target.x || y != target.y) {
        var direction = Math.atan2((target.y - y), (target.x - x));
        x += (delta * speed) * Math.cos(direction);
        y += (delta * speed) * Math.sin(direction);
        var dist = Math.sqrt((x - target.x) * (x - target.x) +
                             (y - target.y) * (y - target.y));
        if (dist < threshold) {
            x = target.x;
            y = target.y;
        }
    }
}
This basically moves the player in the direction of the target at a fixed speed. The issue is that the player arrives at the target either before or after the next information arrives from the server.
Edit: I have just read Gabriel Gambetta's article on this subject, and he mentions this:
Say you receive position data at t = 1000. You already had received data at t = 900, so you know where the player was at t = 900 and t = 1000. So, from t = 1000 and t = 1100, you show what the other player did from t = 900 to t = 1000. This way you’re always showing the user actual movement data, except you’re showing it 100 ms “late”.
This again assumed that it is exactly 100ms late. If your ping varies a lot, this will not work.
Would you be able to provide some pseudo code so I can get an idea of how to do this?
I have found this question online here, but none of the answers provide an example of how to do it, only suggestions.
I'm completely fresh to multiplayer game client/server architecture and algorithms; however, in reading this question the first thing that came to mind was implementing second-order (or higher) Kalman filters on the relevant variables for each player.
Specifically, the Kalman prediction steps, which are much better than simple dead reckoning; the fact that the Kalman prediction and update steps work somewhat as weighted or optimal interpolators; and furthermore, the dynamics of players could be encoded directly rather than playing around with the abstracted parameterizations used in other methods.
Meanwhile, a quick search led me to this:
An improvement of dead reckoning algorithm using kalman filter for minimizing network traffic of 3d on-line games
The abstract:
Online 3D games require efficient and fast user interaction support over network, and the networking support is usually implemented using network game engine. The network game engine should minimize the network delay and mitigate the network traffic congestion. To minimize the network traffic between game users, a client-based prediction (dead reckoning algorithm) is used. Each game entity uses the algorithm to estimates its own movement (also other entities' movement), and when the estimation error is over threshold, the entity sends the UPDATE (including position, velocity, etc) packet to other entities. As the estimation accuracy is increased, each entity can minimize the transmission of the UPDATE packet. To improve the prediction accuracy of dead reckoning algorithm, we propose the Kalman filter based dead reckoning approach. To show real demonstration, we use a popular network game (BZFlag), and improve the game optimized dead reckoning algorithm using Kalman filter. We improve the prediction accuracy and reduce the network traffic by 12 percents.
It might seem wordy, and like a whole new problem to learn what it's all about... and discrete state-space for that matter.
Briefly, I'd say a Kalman filter is a filter that takes into account uncertainty, which is what you've got here. It normally works on measurement uncertainty at a known sample rate, but it could be re-tooled to work with uncertainty in measurement period/phase.
The idea being that in lieu of a proper measurement, you'd simply update with the Kalman predictions. The tactic is similar to that used in target tracking applications.
I was recommended them on stackexchange myself - took about a week to figure out how they were relevant but I've since implemented them successfully in vision processing work.
(...it's making me want to experiment with your problem now !)
As I wanted more direct control over the filter, I copied someone else's roll-your-own implementation of a Kalman filter in MATLAB into OpenCV (in C++):
void Marker::kalmanPredict() {
    // Prediction for state vector
    Xx = A * Xx;
    Xy = A * Xy;
    // and covariance
    Px = A * Px * A.t() + Q;
    Py = A * Py * A.t() + Q;
}

void Marker::kalmanUpdate(Point2d& measuredPosition) {
    // Kalman gain K:
    Mat tempINVx = Mat(2, 2, CV_64F);
    Mat tempINVy = Mat(2, 2, CV_64F);
    tempINVx = C * Px * C.t() + R;
    tempINVy = C * Py * C.t() + R;
    Kx = Px * C.t() * tempINVx.inv(DECOMP_CHOLESKY);
    Ky = Py * C.t() * tempINVy.inv(DECOMP_CHOLESKY);
    // Estimate of velocity (units are pixels.s^-1)
    Point2d measuredVelocity = Point2d(measuredPosition.x - Xx.at<double>(0),
                                       measuredPosition.y - Xy.at<double>(0));
    Mat zx = (Mat_<double>(2,1) << measuredPosition.x, measuredVelocity.x);
    Mat zy = (Mat_<double>(2,1) << measuredPosition.y, measuredVelocity.y);
    // Kalman correction based on position measurement and velocity estimate:
    Xx = Xx + Kx * (zx - C * Xx);
    Xy = Xy + Ky * (zy - C * Xy);
    // and covariance again
    Px = Px - Kx * C * Px;
    Py = Py - Ky * C * Py;
}
I don't expect you to be able to use this directly, but if anyone comes across it and understands what 'A', 'P', 'Q' and 'C' are in state-space (hint hint, state-space understanding is a prerequisite here), they'll likely see how to connect the dots.
(Both MATLAB and OpenCV have their own Kalman filter implementations included, by the way...)
This question is being left open with a request for more detail, so I’ll try to fill in the gaps of Patrick Klug’s answer. He suggested, reasonably, that you transmit both the current position and the current velocity at each time point.
Since two position and two velocity measurements give a system of four equations, it enables us to solve for a system of four unknowns, namely a cubic spline (which has four coefficients, a, b, c and d). In order for this spline to be smooth, the first and second derivatives (velocity and acceleration) should be equal at the endpoints. There are two standard, equivalent ways of calculating this: Hermite splines (https://en.wikipedia.org/wiki/Cubic_Hermite_spline) and Bézier splines (http://mathfaculty.fullerton.edu/mathews/n2003/BezierCurveMod.html). For a two-dimensional problem such as this, I suggested separating variables and finding splines for both x and y based on the tangent data in the updates, which is called a clamped piecewise cubic Hermite spline. This has several advantages over the splines in the link above, such as cardinal splines, which do not take advantage of that information. The locations and velocities at the control points will match, you can interpolate up to the last update rather than the one before, and you can apply this method just as easily to polar coordinates if the game world is inherently polar like Space wars. (Another approach sometimes used for periodic data is to perform a FFT and do trigonometric interpolation in the frequency domain, but that doesn’t sound applicable here.)
What originally appeared here was a derivation of the Hermite spline using linear algebra in a somewhat unusual way that (unless I made a mistake entering it) would have worked. However, the comments convinced me it would be more helpful to give the standard names for what I was talking about. If you are interested in the mathematical details of how and why this works, this is a better explanation: https://math.stackexchange.com/questions/62360/natural-cubic-splines-vs-piecewise-hermite-splines
A better algorithm than the one I gave is to represent the sample points and first derivatives as a tridiagonal matrix that, multiplied by a column vector of coefficients, produces the boundary conditions, and solve for the coefficients. An alternative is to add control points to a Bézier curve where the tangent lines at the sampled points intersect and on the tangent lines at the endpoints. Both methods produce the same, unique, smooth cubic spline.
One situation you might be able to avoid if you were choosing the points rather than receiving updates is if you get a bad sample of points. You can’t, for example, intersect parallel tangent lines, or tell what happened if it’s back in the same place with a nonzero first derivative. You’d never choose those points for a piecewise spline, but you might get them if an object made a swerve between updates.
If my computer weren’t broken right now, here is where I would put fancy graphics like the ones I posted to TeX.SX. Unfortunately, I have to bow out of those for now.
Is this better than straight linear interpolation? Definitely: linear interpolation will get you straight-line paths, quadratic splines won't be smooth, and higher-order polynomials will likely be overfitted. Cubic splines are the standard way to solve that problem.
Are they better for extrapolation, where you try to predict where a game object will go? Possibly not: this way, you’re assuming that a player who’s accelerating will keep accelerating, rather than that they will immediately stop accelerating, and that could put you much further off. However, the time between updates should be short, so you shouldn’t get too far off.
Finally, you might make things a lot easier on yourself by programming in a bit more conservation of momentum. If there’s a limit to how quickly objects can turn, accelerate or decelerate, their paths will not be able to diverge as much from where you predict based on their last positions and velocities.
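To make that concrete, here is a minimal sketch (my own helper names) of evaluating the piecewise cubic Hermite curve between two updates, each carrying a position and a velocity:

// p0, v0 received at time t0; p1, v1 received at time t1; t is the render time in [t0, t1].
function hermite(p0, v0, p1, v1, t0, t1, t) {
    var dt = t1 - t0;
    var s = (t - t0) / dt;              // normalized time in [0, 1]
    var s2 = s * s, s3 = s2 * s;
    // standard Hermite basis functions
    var h00 = 2 * s3 - 3 * s2 + 1;
    var h10 = s3 - 2 * s2 + s;
    var h01 = -2 * s3 + 3 * s2;
    var h11 = s3 - s2;
    return {
        x: h00 * p0.x + h10 * dt * v0.x + h01 * p1.x + h11 * dt * v1.x,
        y: h00 * p0.y + h10 * dt * v0.y + h01 * p1.y + h11 * dt * v1.y
    };
}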
Depending on your game you might want to prefer smooth player movement over super-precise location. If so, I'd suggest aiming for 'eventual consistency'. I think your idea of keeping 'real' and 'simulated' data points is a good one. Just make sure that from time to time you force the simulated to converge with the real, otherwise the gap will get too big.
Regarding your concern about different movement speeds, I'd suggest you include the player's current velocity and direction in the packet, in addition to the current position. This will enable you to more smoothly predict where the player will be, based on your own framerate/update timing.
Essentially you would calculate the current simulated velocity and direction taking into account the last simulated location and velocity as well as the last known location and velocity (putting more emphasis on the second), and then simulate the new position based on that.
If the gap between simulated and known gets too big, just put more emphasis on the known location and the otherPlayer will catch up quicker.
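A rough sketch of that convergence idea (the blend weight and scaling are illustrative, not tuned):

var BLEND = 0.6; // 0..1; higher means trusting the last known (server) values more

function updateSimulated(sim, known, delta) {
    // blend the simulated velocity toward the last known velocity,
    // putting more emphasis on the known values as suggested above
    sim.vx = sim.vx * (1 - BLEND) + known.vx * BLEND;
    sim.vy = sim.vy * (1 - BLEND) + known.vy * BLEND;
    // advance the simulated position with the blended velocity
    sim.x += sim.vx * delta;
    sim.y += sim.vy * delta;
    // gently correct any remaining positional drift toward the last known position
    sim.x += (known.x - sim.x) * BLEND * delta;
    sim.y += (known.y - sim.y) * BLEND * delta;
}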

Dynamically Animating Hue Shift on Canvas Image

I'm still relatively new to working with the canvas tag. What I've done so far is draw an image to the canvas. My goal is to have a fake night/day animation that cycles repeatedly.
I've exhausted quite a few different avenues (SVG, CSS3 filters, etc) and think that canvas pixel manipulation is the best route in my case. I'm trying to:
Loop through all pixels in the image
Select a certain color range
Adjust to new color
Update the canvas
Here's the code I have so far:
function gameLoop() {
    requestAnimationFrame(gameLoop);

    ////////////////////////////////////////////////////////////////
    // LOOP PIXEL DATA - PIXEL'S RGBA IS STORED IN SEQUENTIAL ARRAYS
    ////////////////////////////////////////////////////////////////
    for (var i = 0; i < data.length; i += 4) {
        red = data[i + 0];
        green = data[i + 1];
        blue = data[i + 2];
        alpha = data[i + 3];

        // GET HUE BY CONVERTING TO HSL
        var hsl = rgbToHsl(red, green, blue);
        var hue = hsl.h * 360;

        // CHANGE SET COLORRANGE TO NEW COLORSHIFT
        if (hue > colorRangeStart && hue < colorRangeEnd) {
            var newRgb = hslToRgb(hsl.h + colorShift, hsl.s, hsl.l);
            data[i + 0] = newRgb.r;
            data[i + 1] = newRgb.g;
            data[i + 2] = newRgb.b;
            data[i + 3] = 255;
        }
    }

    // UPDATE CANVAS
    ctx.putImageData(imgData, 0, 0);
}
The code works (it selects a hue range and shifts it once), but it is incredibly laggy. The canvas dimensions are roughly 500x1024.
My questions:
Is it possible to improve performance?
Is there a better way to perform a defined hue shift animation?
Thanks!
It's hard to do this in real time using high-quality HSL conversion. Been there, done that, so I came up with a quantized approach which allows you to do this in real time.
You can find the solution here (GPL3.0 licensed):
https://github.com/epistemex/FastHSL2RGB
Example of usage can be found here (MIT license) incl. demo:
https://github.com/epistemex/HueWheel
Apologies for referencing my own solutions here, but the inner workings (the how-tos) are too extensive to present in a simple form here, and both of these are free to use for anything.
The key points are in any case:
Quantize the range you want to use (don't use the full 360 degrees, and don't use floating point for lightness, etc.)
Cache the values in a 3D array (initial setup using web workers or use rough values)
Quantize the input values so they fit in the range of the inner 3D array
Process the bitmap using these values
It is not accurate, but good enough for animations (or previews, which is what I wrote it for).
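As a rough illustration of those points (a simplification of my own, not the library's actual code; it reuses the rgbToHsl helper from the question): quantize each channel to 5 bits and cache the RGB-to-HSL conversion once, so the per-frame pixel loop becomes mostly table lookups.

var BITS = 5, SIZE = 1 << BITS;                 // 32 levels per channel
var hslCache = new Float32Array(SIZE * SIZE * SIZE * 3);

function buildHslCache() {
    for (var r = 0; r < SIZE; r++)
        for (var g = 0; g < SIZE; g++)
            for (var b = 0; b < SIZE; b++) {
                var hsl = rgbToHsl(r << (8 - BITS), g << (8 - BITS), b << (8 - BITS));
                var idx = ((r * SIZE + g) * SIZE + b) * 3;
                hslCache[idx] = hsl.h;
                hslCache[idx + 1] = hsl.s;
                hslCache[idx + 2] = hsl.l;
            }
}

// In the pixel loop, replace rgbToHsl(red, green, blue) with a lookup:
//   var idx = (((red >> 3) * SIZE + (green >> 3)) * SIZE + (blue >> 3)) * 3;
//   var hue = hslCache[idx] * 360, s = hslCache[idx + 1], l = hslCache[idx + 2];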
There are other techniques, such as pre-caching the complete processed bitmap for key positions, then interpolating the colors between those instead. This, of course, requires much more memory, but it is a fast way.
Hope this helps!

Multi-tempo/meter js DAW

Has anyone implemented a JavaScript audio DAW with multiple tempo and meter change capabilities like most of the desktop DAWs (Pro Tools, Sonar, and the like)? As far as I can tell, claw, openDAW, and web audio editor don't do this. Drawing a grid meter, converting between samples and MBT time, and rendering waveforms is easy when the tempo and meter do not change during the project, but when they do it gets quite a bit more complicated. I'm looking for any information on how to accomplish something like this. I'm aware that the source for Audacity is available, but I'd love to not have to dig through an enormous pile of code in a language I'm not an expert in to figure this out.
Web-based DAW solutions exist; they are generally seen as SaaS (Software as a Service) applications.
They are lightweight and contain the basic, fundamental DAW features.
For designing rich client applications (RCA) you should take a look at GWT and Vaadin. I recommend GWT because it is mature, has reusable components, and is AJAX driven.
Also, MusicRadar has a list of nine different browser-based audio workstations. You can also refer to Popcorn Maker, which is entirely JavaScript code. You can get some inspiration from there to get started.
You're missing the last step, which will make it easier.
All measures are relative to fractions of minutes, based on the time-signature and tempo.
The math gets a little more complex, now that you can't just plot 4/4 or 6/8 across the board and be done with it, but what you're looking at is running an actual time-line (whether drawn onscreen or not), and then figuring out where each measure starts and ends, based on either the running sum of a track's current length (in minutes/seconds), or based on the left-most take's x-coordinate (starting point) + duration...
or based on the running total of each measure's length in seconds, up to the current beat you care about.
var measure = { beats : 4, denomination : 4, tempo : 80 };
Given those three data-points, you should be able to say:
var measure_length = SECONDS_PER_MINUTE / measure.tempo * measure.beats;
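For example, with the measure above that works out to 60 / 80 * 4 = 3 seconds per measure.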
Of course, that's currently in seconds. To get it in ms, you'd just use MS_PER_MINUTE, or whichever other ratio of minutes you'd want to measure by.
current_position + measure_length === start_of_next_measure;
You've now separated out each dimension required to allow you to calculate each measure on the fly.
Positioning each measure on the track, to match up with where it belongs on the timeline is as simple as keeping a running tally of where X is (the left edge of the measure) in ms (really in screen-space and project-coordinates, but ms can work fine for now).
var current_position = 0,
    current_tempo = 120,
    current_beats = 4,
    current_denomination = 4,
    measures = [ ];

measures.forEach(function (measure) {
    if (measure.tempo !== current_tempo) {
        /* draw tempo-change, set current_tempo */
        /* draw time-signature */
    }
    if (measure.beats !== current_beats ||
        measure.denomination !== current_denomination) {
        /* set changes, draw time-signature */
    }
    draw_measure(measure, current_position);
    // keep the running tally: advance by this measure's length in ms
    current_position += MS_PER_MINUTE / measure.tempo * measure.beats;
});
Drawing samples just requires figuring out where you're starting from, and then sticking to some resolution (MS/MS*4/Seconds).
The added benefit of separating out the calculation of the time is that you can change the resolution of your rendering on the fly, by changing which time-scale you're comparing against (ms/sec/min/etc), so long as you re-render the whole thing, after scaling.
The rabbit hole goes deeper (for instance, actual audio tracks don't really care about measures/beats, though quantization-processes do), so to write a non-destructive, non-linear DAW, you can just set start-time and duration properties on views into your audio-buffer (or views into view-buffers of your audio buffer).
Those views would be the non-destructive windows that you can resize and drag around your track.
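A tiny sketch of such a view (the names are mine): the clip only records where it sits on the timeline and which slice of the untouched buffer it exposes, so resizing or dragging never touches the audio data.

function makeClip(buffer, projectStartMs, bufferOffsetMs, durationMs) {
    return {
        buffer: buffer,                 // the untouched AudioBuffer
        projectStartMs: projectStartMs, // where the clip sits on the timeline
        bufferOffsetMs: bufferOffsetMs, // where playback starts inside the buffer
        durationMs: durationMs          // how much of the buffer the clip exposes
    };
}

// Trimming the left edge of a clip just moves its start and offset together:
function trimLeft(clip, deltaMs) {
    clip.projectStartMs += deltaMs;
    clip.bufferOffsetMs += deltaMs;
    clip.durationMs -= deltaMs;
}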
Then there's just the logic of figuring out snaps -- what your screen-space is, versus project-space, and when you click on a track's clip, which measure, et cetera, you're in, to do audio-snapping on resize/move.
Of course, to do a 1:1 recreation of ProTools in JS in the browser would not fly (gigs of RAM for one browser tab won't do, media capture API is still insufficient for multi-tracking, disk-writes are much, much more difficult in browser than in C++, in your OS of choice, et cetera), but this should at least give you enough to run with.
Let me know if I'm missing something.
