Implementing quadtree collision with JavaScript?

I am working on an io game similar to agar.io and slither.io (using node.js and socket.io) where there are up to 50 players and around 300 foods in 2d space on the map at a time. Players and food are both circular, and both are arrays of JSON objects with varying coordinates and sizes. Every frame, the server needs to check whether a player has collided with food and act accordingly. The brute-force method would be looping through all the foods and, for each food, looping through all players to see if they are in collision. That makes 300 * 50 = 15,000 iterations, 60 times per second (at 60fps), which is far too heavy for the server.
I did come across the quadtree method which is a new concept to me. Also my scarce knowledge on javascript is making me wonder how exactly I might implement it. The problems that I cannot solve are the following:
1. Since players can theoretically be of any size (even as big as the map), how big would the sections that I divide the map into have to be?
2. Even if I do divide the map into sections, the only way I can see it working is that for every player, I need to get the foods that share the same sections as the player. This is the big question: no matter how much I think about it, I would still need to loop through every food and check whether it's in the required sections. How would I do that without looping? That still makes 50*300 iterations, 60 times per second, which does not sound in any way faster to me.
tldr: I need to find a way to detect collisions between a set of 50 objects and a set of 300 objects, 60 times per second. How do I do that without looping through 50*300 iterations at 60 fps?
I could not find any information online that answers my questions. I apologize in advance if I have missed something somewhere that could yield the answers I seek.

This is a small example that only checks a single layer, but I think it demonstrates how you can check for collisions without iterating over all objects.
// 2d array of lists of things in each square
// NOT A QUADTREE, JUST DEMONSTRATING SOMETHING
let quadlayer = [];
for (let i = 0; i < 4; ++i) {
  quadlayer[i] = [];
  for (let j = 0; j < 4; ++j) {
    quadlayer[i][j] = [];
  }
}

function insertObject(ur_object) {
  quadlayer[ur_object.x][ur_object.y].push(ur_object);
}

function checkCollision(ur_object) {
  let other_objects = quadlayer[ur_object.x][ur_object.y];
  console.log('comparing against ' + other_objects.length + ' instead of ' + 100);
}

for (let i = 0; i < 10; ++i) {
  for (let j = 0; j < 10; ++j) {
    insertObject({
      x: i % 4,
      y: j % 4
    });
  }
}

checkCollision({x: 1, y: 2});
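The same bucket idea scales to real map coordinates with a uniform grid. The following is a rough sketch, not part of the demo above: `CELL`, `buildGrid`, and `nearbyFoods` are illustrative names, and checking only the 3×3 neighbourhood of cells assumes the cell size is at least the largest player radius plus the largest food radius.

```javascript
// Uniform grid ("spatial hash"): CELL is an assumed cell size,
// keys are "col,row" strings.
const CELL = 200;

function cellKey(x, y) {
  return Math.floor(x / CELL) + ',' + Math.floor(y / CELL);
}

// Rebuild each frame (or update incrementally as foods appear/disappear).
function buildGrid(foods) {
  const grid = new Map();
  for (const f of foods) {
    const key = cellKey(f.x, f.y);
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(f);
  }
  return grid;
}

// Only the player's cell plus its 8 neighbours need to be checked,
// provided CELL >= max player radius + max food radius.
function nearbyFoods(grid, player) {
  const col = Math.floor(player.x / CELL);
  const row = Math.floor(player.y / CELL);
  const result = [];
  for (let dc = -1; dc <= 1; ++dc) {
    for (let dr = -1; dr <= 1; ++dr) {
      const bucket = grid.get((col + dc) + ',' + (row + dr));
      if (bucket) result.push(...bucket);
    }
  }
  return result;
}
```

Each player is then tested only against the handful of foods returned by `nearbyFoods`, instead of all 300.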

An interesting problem... Here's another take, which essentially uses the sweep line algorithm. (For a good explanation of the sweep line algorithm, see https://www.geeksforgeeks.org/given-a-set-of-line-segments-find-if-any-two-segments-intersect/ ).
The typical performance of a sweep line algorithm is O(n log n) compared to brute force of O(n^2).
In this particular implementation of the sweep line algorithm, all objects (food and people) are kept in a queue, with each object having two entries in the queue. The first entry is x - radius and the second entry is x + radius. That is, the queue tracks the lower/left and upper/right x bounds of all the objects. Furthermore, the queue is sorted by the x bounds using function updatePositionInQueue, which is essentially an insertion sort.
This allows the findCollisions routine to simply walk the queue, maintaining an active set of objects that need to be checked against each other. That is, objects that overlap in the x dimension are dynamically added to and removed from the active set. I.e., when a queue entry represents the left x bound of an object, the object is added to the active set, and when a queue entry represents the right x bound of an object, the object is removed from the active set. So as the queue of objects is walked, each object that is about to be added to the active set only has to be checked for collisions against the small active set of objects with overlapping x bounds.
Note that as the algorithm stands, it checks for all collisions between people-and-people, people-and-food, and food-and-food...
As a pleasant bonus, the updatePositionInQueue routine permits adjustment of the sorted queue whenever an object moves. That is, if a person moves, their x,y position can be updated on the object, and then updatePositionInQueue( this.qRight ) and updatePositionInQueue( this.qLeft ) can be called, which will look to the previous and next objects in the sorted queue and move the updated entry until its x bound is properly sorted. Given that object positions should not change much between frames, the movement of the left and right x bound entries in the queue should be minimal from frame to frame.
The code is as follows, which towards the bottom randomly generates object data, populates the queue, and then runs both the sweep line collision check along with a brute force check to verify the results, in addition to reporting on the performance as measured in object-to-object collision checks.
var queueHead = null;

function QueueEntry(paf, lor, x) {
  this.paf = paf;
  this.leftOrRight = lor;
  this.x = x;
  this.prev = null;
  this.next = null;
}
function updatePositionInQueue( qEntry ) {
  function moveEntry() {
    // Remove qEntry from current position in queue.
    if ( qEntry.prev === null ) queueHead = qEntry.next;
    if ( qEntry.prev ) qEntry.prev.next = qEntry.next;
    if ( qEntry.next ) qEntry.next.prev = qEntry.prev;
    // Add qEntry to new position in queue.
    if ( newLocation === null ) {
      qEntry.prev = null;
      qEntry.next = queueHead;
      queueHead = qEntry;
    } else {
      qEntry.prev = newLocation;
      qEntry.next = newLocation.next;
      if ( newLocation.next ) newLocation.next.prev = qEntry;
      newLocation.next = qEntry;
    }
  }
  // Walk the queue, moving qEntry into the
  // proper spot of the queue based on the x
  // value. First check against the 'prev' queue
  // entry...
  let newLocation = qEntry.prev;
  while ( newLocation && qEntry.x < newLocation.x ) {
    newLocation = newLocation.prev;
  }
  if ( newLocation !== qEntry.prev ) {
    moveEntry();
  }
  // ...then against the 'next' queue entry.
  newLocation = qEntry;
  while ( newLocation.next && newLocation.next.x < qEntry.x ) {
    newLocation = newLocation.next;
  }
  if ( newLocation !== qEntry ) {
    moveEntry();
  }
}
function findCollisions() {
  console.log( `\nfindCollisions():\n\n` );
  var performanceCount = 0;
  var consoleResult = [];
  let activeObjects = new Set();
  var i = queueHead;
  while ( i ) {
    if ( i.leftOrRight === true ) {
      activeObjects.delete( i.paf );
    }
    if ( i.leftOrRight === false ) {
      let iPaf = i.paf;
      for ( let o of activeObjects ) {
        if ( (o.x - iPaf.x) ** 2 + (o.y - iPaf.y) ** 2 <= (o.radius + iPaf.radius) ** 2 ) {
          if ( iPaf.id < o.id ) {
            consoleResult.push( `Collision: ${iPaf.id} with ${o.id}` );
          } else {
            consoleResult.push( `Collision: ${o.id} with ${iPaf.id}` );
          }
        }
        performanceCount++;
      }
      activeObjects.add( iPaf );
    }
    i = i.next;
  }
  console.log( consoleResult.sort().join( '\n' ) );
  console.log( `\nfindCollisions collision check count: ${performanceCount}\n` );
}
function bruteForceCollisionCheck() {
  console.log( `\nbruteForceCollisionCheck():\n\n` );
  var performanceCount = 0;
  var consoleResult = [];
  for ( let i in paf ) {
    for ( let j in paf ) {
      if ( i < j ) {
        let o1 = paf[i];
        let o2 = paf[j];
        if ( (o1.x - o2.x) ** 2 + (o1.y - o2.y) ** 2 <= (o1.radius + o2.radius) ** 2 ) {
          if ( o1.id < o2.id ) {
            consoleResult.push( `Collision: ${o1.id} with ${o2.id}` );
          } else {
            consoleResult.push( `Collision: ${o2.id} with ${o1.id}` );
          }
        }
        performanceCount++;
      }
    }
  }
  console.log( consoleResult.sort().join( '\n' ) );
  console.log( `\nbruteForceCollisionCheck collision check count: ${performanceCount}\n` );
}
function queuePrint() {
  var i = queueHead;
  while (i) {
    console.log(`${i.paf.id}: x(${i.x}) ${i.paf.type} ${i.leftOrRight ? 'right' : 'left'} (x: ${i.paf.x} y: ${i.paf.y} r:${i.paf.radius})\n`);
    i = i.next;
  }
}
function PeopleAndFood( id, type, x, y, radius ) {
  this.id = id;
  this.type = type;
  this.x = x;
  this.y = y;
  this.radius = radius;
  this.qLeft = new QueueEntry( this, false, x - radius );
  this.qRight = new QueueEntry( this, true, x + radius );
  // Simply add the queue entries to the
  // head of the queue, and then adjust
  // their location in the queue.
  if ( queueHead ) queueHead.prev = this.qRight;
  this.qRight.next = queueHead;
  queueHead = this.qRight;
  updatePositionInQueue( this.qRight );
  if ( queueHead ) queueHead.prev = this.qLeft;
  this.qLeft.next = queueHead;
  queueHead = this.qLeft;
  updatePositionInQueue( this.qLeft );
}
//
// Test algorithm...
//
var paf = [];
const width = 10000;
const height = 10000;
const foodCount = 300;
const foodSizeMin = 10;
const foodSizeMax = 20;
const peopleCount = 50;
const peopleSizeMin = 50;
const peopleSizeMax = 100;
for (let i = 0; i < foodCount; i++) {
  paf.push( new PeopleAndFood(
    i,
    'food',
    Math.round( width * Math.random() ),
    Math.round( height * Math.random() ),
    foodSizeMin + Math.round(( foodSizeMax - foodSizeMin ) * Math.random())
  ));
}
for (let i = 0; i < peopleCount; i++) {
  paf.push( new PeopleAndFood(
    foodCount + i,
    'people',
    Math.round( width * Math.random() ),
    Math.round( height * Math.random() ),
    peopleSizeMin + Math.round(( peopleSizeMax - peopleSizeMin ) * Math.random())
  ));
}
queuePrint();
findCollisions();
bruteForceCollisionCheck();
(Note that the program prints the queue, followed by the results of findCollisions and bruteForceCollisionCheck. Only the tail end of the console appears to show when running the code snippet.)
I'm sure the algorithm can be squeezed a bit more for performance, but for the parameters in the code above, the test runs show a brute force count of 61075 collision checks vs ~600 for the sweep line algorithm. Obviously the size of the objects will impact this ratio: the larger the objects, the larger the set of objects with overlapping x bounds that will need to be cross-checked...
An enjoyable problem to solve. Hope this helps.

Related

Attempting to make Ray Tracing inside of p5.js but function recursion is acting weird

So, I found a source online that goes over ray tracing in C++ (https://www.scratchapixel.com/code.php?id=3&origin=/lessons/3d-basic-rendering/introduction-to-ray-tracing).
I decided to go into p5.js and attempt to replicate what they have in their source code, but ran into an error when I got to function recursion. To add reflections they used recursion and ran the same function again, but when I attempt the same thing I get all sorts of incorrect outputs... This is my code:
https://editor.p5js.org/20025249/sketches/0LcyoY8yS
function trace(rayorig, raydir, spheres, depth) {
  let tnear = INFINITY;
  let sphere;
  // find intersection of this ray with the spheres in the scene
  for (let i = 0; i < spheres.length; i++) {
    t0 = INFINITY;
    t1 = INFINITY;
    if (spheres[i].intersect(rayorig, raydir)) {
      if (t0 < 0) t0 = t1;
      if (t0 < tnear) {
        tnear = t0;
        sphere = spheres[i];
      }
    }
  }
  // if there's no intersection return black or background color
  if (!sphere) return createVector(2, 2, 2);
  let surfaceColor = createVector(0); // color of the ray/surface of the object intersected by the ray
  let phit = createVector(rayorig.x, rayorig.y, rayorig.z).add(createVector(raydir.x, raydir.y, raydir.z).mult(tnear)); // point of intersection
  let nhit = createVector(phit.x, phit.y, phit.z).sub(sphere.center); // normal at the intersection point
  nhit.normalize(); // normalize normal direction
  // If the normal and the view direction are not opposite to each other
  // reverse the normal direction. That also means we are inside the sphere so set
  // the inside bool to true. Finally reverse the sign of IdotN which we want
  // positive.
  let bias = 1e-4; // add some bias to the point from which we will be tracing
  let inside = false;
  if (createVector(raydir.x, raydir.y, raydir.z).dot(nhit) > 0) {
    nhit = -nhit;
    inside = true;
  }
  if ((sphere.transparency > 0 || sphere.reflection > 0) && depth < MAX_RAY_DEPTH) {
    let facingratio = createVector(-raydir.x, -raydir.y, -raydir.z).dot(nhit);
    // change the mix value to tweak the effect
    let fresneleffect = mix(pow(1 - facingratio, 3), 1, 0.1);
    // compute reflection direction (no need to normalize because all vectors
    // are already normalized)
    let refldir = createVector(raydir.x, raydir.y, raydir.z).sub(createVector(nhit.x, nhit.y, nhit.z).mult(2).mult(createVector(raydir.x, raydir.y, raydir.z).dot(nhit)));
    refldir.normalize();
    // Here is the error:
    let reflection = trace(createVector(phit.x, phit.y, phit.z).add(createVector(nhit.x, nhit.y, nhit.z).mult(bias)),
      refldir,
      spheres,
      depth + 1
    );
    let refraction = createVector(0);
    // // if the sphere is also transparent compute refraction ray (transmission)
    // if (sphere.transparency) {
    //   let ior = 1.1
    //   let eta = (inside) ? ior : 1 / ior; // are we inside or outside the surface?
    //   let cosi = createVector(-nhit.x, -nhit.y, -nhit.z).dot(raydir);
    //   let k = 1 - eta * eta * (1 - cosi * cosi);
    //   let refrdir = createVector(raydir.x, raydir.y, raydir.z).mult(eta).add(createVector(nhit.x, nhit.y, nhit.z).mult(eta * cosi - sqrt(k)));
    //   refrdir.normalize();
    //   refraction = trace(createVector(phit.x, phit.y, phit.z).sub(createVector(nhit.x, nhit.y, nhit.z).mult(bias)),
    //     refrdir,
    //     spheres,
    //     depth + 1
    //   );
    // }
    // the result is a mix of reflection and refraction (if the sphere is transparent)
    surfaceColor = (
      createVector(reflection.x, reflection.y, reflection.z)
        .mult(fresneleffect)
        .add(
          createVector(refraction.x, refraction.y, refraction.z).mult(1 - fresneleffect).mult(sphere.transparency)
        )
    ).mult(sphere.surfaceColor);
  }
  return createVector(surfaceColor.x, surfaceColor.y, surfaceColor.z).add(sphere.emissionColor);
}
The error is that the reflections don't give me the same output as the C++ script and seem to be wonky. I cannot for the life of me figure out why the recursive function just doesn't work.
I have attempted to run it without the recursion and it worked perfectly fine, but the recursion is where it breaks.
The way I found the error was by adding print statements to the original C++ script and to mine: everything matches up until the recursive reflections. I get the correct first output, but then it all goes downhill.
Their outputs:
[-0.224259 3.89783 -19.1297]
[-0.202411 3.88842 -19.0835]
[-0.180822 3.88236 -19.0538]
My outputs:
[-0.224259 3.89783 -19.1297] // correct
[-0.000065 0.001253 -0.005654] // incorrect
[-0.000064 0.00136 -0.00618] // incorrect
Summary: I made a function that works but the recursion breaks it and I cannot figure out why

Optimize order of objects within layer in Illustrator for reduced laser cutting time

I'm trying to optimize the layer order of paths in Illustrator so that when sent to a laser cutter, the end of one path is close to the start of the next path, reducing the travel time of the laser between cuts.
I've come up with the following code, which works, but could be further optimized considering length of lines, or through an annealing process. I'm posting it here in case anyone else is Googling 'Laser cutting optimization' and doesn't want to write their own code. Also if anyone can suggest improvements to the below code, I'd love to hear them.
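Stripped of the Illustrator DOM, the greedy ordering the script performs looks roughly like this (a sketch only, with paths reduced to plain `{start, end}` point pairs; the names are illustrative):

```javascript
// Euclidean distance between two [x, y] points.
function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// Greedy nearest-neighbour ordering: repeatedly pick the remaining path
// whose start point is closest to the end of the last chosen path.
function orderPaths(paths) {
  const remaining = paths.slice(1);
  const ordered = [paths[0]];
  while (remaining.length) {
    let best = 0;
    const lastEnd = ordered[ordered.length - 1].end;
    for (let i = 1; i < remaining.length; i++) {
      if (dist(lastEnd, remaining[i].start) < dist(lastEnd, remaining[best].start)) {
        best = i;
      }
    }
    ordered.push(remaining.splice(best, 1)[0]);
  }
  return ordered;
}
```

Like the script below, this is O(n^2) in the number of paths, which is usually fine for typical cut files.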
// For this script to work, all paths to be optimised need to be on layer 0.
// Create a new empty layer in position 1 in the layer hierarchy.
// Run the script, all paths will move from layer 0 to layer 1 in an optimized order.
// Further optimisation possible with 'Annealing', but this will be a good first run optimization.
// Load into Visual Studio Code, follow steps on this website
// https://medium.com/#jtnimoy/illustrator-scripting-in-visual-studio-code-cdcf4b97365d
// to get setup, then run code when linked to Illustrator.
function test() {
  if (!app.documents.length) {
    alert("You must have a document open.");
    return;
  }
  var docRef = app.activeDocument;

  function endToStartDistance(endPath, startPath) {
    var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
    var startPoint = startPath.pathPoints[0].anchor;
    var dx = (endPoint[0] - startPoint[0]);
    var dy = (endPoint[1] - startPoint[1]);
    var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
    return dist;
  }

  function Optimize(items) {
    var lastPath, closest, minDist, delIndex, curItem, tempItems = [];
    var topLayer = app.activeDocument.layers[0];
    var newLayer = app.activeDocument.layers[1];
    for (var x = 1, len = items.length; x < len; x++) {
      tempItems.push(items[x]);
    }
    lastPath = items[0];
    lastPath.move(newLayer, ElementPlacement.PLACEATBEGINNING);
    while (tempItems.length) {
      closest = tempItems[0];
      minDist = endToStartDistance(lastPath, closest);
      delIndex = 0;
      for (var y = 1, len = tempItems.length; y < len; y++) {
        curItem = tempItems[y];
        if (endToStartDistance(lastPath, curItem) < minDist) {
          closest = curItem;
          minDist = endToStartDistance(lastPath, closest);
          delIndex = y;
        }
      }
      $.writeln(minDist);
      //closest.zOrder(ZOrderMethod.BRINGTOFRONT);
      closest.move(newLayer, ElementPlacement.PLACEATBEGINNING);
      lastPath = closest;
      tempItems.splice(delIndex, 1);
    }
  }

  var allPaths = [];
  for (var i = 0; i < documents[0].pathItems.length; i++) {
    allPaths.push(documents[0].pathItems[i]);
    //$.writeln(documents[0].pathItems[i].pathPoints[0].anchor[0])
  }
  Optimize(allPaths);
}
test();
Version 2 of the above code. Changes include the ability to reverse paths if this results in a reduced distance for the cutting head to move between paths, and added comments to make the code easier to read.
// Create a new empty layer in position 1 in the layer hierarchy.
// Run the script, all paths will move from their current layer to layer 1 in an optimized order.
// Further optimisation possible with 'Annealing', but this will be a good first run optimization.
// Load into Visual Studio Code, follow steps on this website
// https://medium.com/#jtnimoy/illustrator-scripting-in-visual-studio-code-cdcf4b97365d
// to get setup, then run code when linked to Illustrator.
function main() {
  if (!app.documents.length) {
    alert("You must have a document open.");
    return;
  }
  var docRef = app.activeDocument;

  // The below function gets the distance between the end of the endPath vector object
  // and the start of the startPath vector object.
  function endToStartDistance(endPath, startPath) {
    var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
    var startPoint = startPath.pathPoints[0].anchor;
    var dx = (endPoint[0] - startPoint[0]);
    var dy = (endPoint[1] - startPoint[1]);
    var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
    return dist;
  }

  // The below function gets the distance between the end of the endPath vector object
  // and the end of the startPath vector object.
  function endToEndDistance(endPath, startPath) {
    var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
    var startPoint = startPath.pathPoints[startPath.pathPoints.length - 1].anchor;
    var dx = (endPoint[0] - startPoint[0]);
    var dy = (endPoint[1] - startPoint[1]);
    var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
    return dist;
  }

  // The below function iterates over the supplied list of tempItems (path objects) and
  // checks the distance between the end of path objects and the start/end of all other
  // path objects, ordering the objects in the layer hierarchy so that there is the
  // shortest distance between the end of one path and the start of the next.
  // The function can reverse the direction of a path if this results in a smaller
  // distance to the next object.
  function Optimize(tempItems) {
    var lastPath, closest, minDist, delIndex, curItem;
    // There needs to be an empty layer in position 2 in the layer hierarchy.
    // This is where the path objects are moved as they are sorted.
    var newLayer = app.activeDocument.layers[1];
    lastPath = tempItems[0]; // Arbitrarily take the first item in the list of supplied items
    tempItems.splice(0, 1); // Remove the first item from the list of items to be iterated over
    lastPath.move(newLayer, ElementPlacement.PLACEATBEGINNING); // Move the first item to the first position in the new layer
    // Loop over all supplied items while the length of this array is not 0.
    // Items are removed from the list once sorted.
    while (tempItems.length) {
      closest = tempItems[0]; // Start by checking the distance to the first item in the list
      // Find the smaller of the distances between the end of the previous path item
      // and the start / end of this next item.
      minDist = Math.min(endToStartDistance(lastPath, closest), endToEndDistance(lastPath, closest));
      // The delIndex is the index to be removed from the tempItems list after iterating
      // through the entire list.
      delIndex = 0;
      // Iterate over all items in the list, starting at item 1 (item 0 already being used above).
      for (var y = 1, len = tempItems.length; y < len; y++) {
        curItem = tempItems[y];
        // If either the end / start distance to the current item is smaller than the
        // previously measured minDist, the current path item becomes the new closest entry.
        if (endToStartDistance(lastPath, curItem) < minDist || endToEndDistance(lastPath, curItem) < minDist) {
          closest = curItem;
          // The new minDist is set...
          minDist = Math.min(endToStartDistance(lastPath, closest), endToEndDistance(lastPath, closest));
          delIndex = y; // ...and the item is marked for removal from the list at the end of the loop.
        }
      }
      // If the smallest distance is yielded from the end of the previous path to the end
      // of the next path, reverse the next path so that the end-to-start distance between
      // paths is minimised.
      if (endToEndDistance(lastPath, closest) < endToStartDistance(lastPath, closest)) {
        reversePaths(closest);
      }
      closest.move(newLayer, ElementPlacement.PLACEATBEGINNING); // Move the closest path item to the beginning of the new layer
      // The moved path item becomes the next item in the chain, and is stored as the
      // previous item (lastPath) for when the loop iterates again.
      lastPath = closest;
      // Remove the item identified as closest in the previous loop from the list of items
      // to iterate over. When there are no items left in the list, the loop ends.
      tempItems.splice(delIndex, 1);
    }
  }

  // This code taken / adapted from https://gist.github.com/Grsmto/bfe1541957a0bb17972d
  function reversePaths(theItems) {
    if (theItems.typename == "PathItem" && !theItems.locked && !theItems.parent.locked && !theItems.layer.locked) {
      pathLen = theItems.pathPoints.length;
      for (k = 0; k < pathLen / 2; k++) {
        h = pathLen - k - 1;
        HintenAnchor = theItems.pathPoints[h].anchor;
        HintenLeft = theItems.pathPoints[h].leftDirection;
        HintenType = theItems.pathPoints[h].pointType;
        HintenRight = theItems.pathPoints[h].rightDirection;
        theItems.pathPoints[h].anchor = theItems.pathPoints[k].anchor;
        theItems.pathPoints[h].leftDirection = theItems.pathPoints[k].rightDirection;
        theItems.pathPoints[h].pointType = theItems.pathPoints[k].pointType;
        theItems.pathPoints[h].rightDirection = theItems.pathPoints[k].leftDirection;
        theItems.pathPoints[k].anchor = HintenAnchor;
        theItems.pathPoints[k].leftDirection = HintenRight;
        theItems.pathPoints[k].pointType = HintenType;
        theItems.pathPoints[k].rightDirection = HintenLeft;
      }
    }
  }

  // Grab every line in the document. This could be changed to the selected objects,
  // or to filter only objects below a certain stroke weight so that raster paths are
  // not affected, but cut paths are.
  var allPaths = [];
  for (var i = 0; i < documents[0].pathItems.length; i++) {
    allPaths.push(documents[0].pathItems[i]);
  }
  Optimize(allPaths); // Feed all paths in the document into the optimize function.
}
main(); // Call the main function, executing the above code.

web audio analyser's getFloatTimeDomainData buffer offset wrt buffers at other times and wrt buffer of 'complete file'

(question rewritten integrating bits of information from answers, plus making it more concise.)
I use analyser=audioContext.createAnalyser() in order to process audio data, and I'm trying to understand the details better.
I choose an fftSize, say 2048, then I create an array buffer of 2048 floats with Float32Array, and then, in an animation loop
(called 60 times per second on most machines, via window.requestAnimationFrame), I do
analyser.getFloatTimeDomainData(buffer);
which will fill my buffer with 2048 floating point sample data points.
When the handler is called the next time, 1/60 second has passed. To calculate how much that is in units of samples,
we have to divide it by the duration of 1 sample, and get (1/60)/(1/44100) = 735.
So the next handler call takes place (on average) 735 samples later.
So there is overlap between subsequent buffers.
We know from the spec (search for 'render quantum') that everything happens in chunk sizes which are multiples of 128.
So (in terms of audio processing), one would expect that the next handler call will usually be either 5*128 = 640 samples later,
or else 6*128 = 768 samples later - those being the multiples of 128 closest to 735 samples = (1/60) second.
Calling this amount "Δ-samples", how do I find out what it is (during each handler call), 640 or 768 or something else?
Reliably, like this:
Consider the 'old buffer' (from previous handler call). If you delete "Δ-samples" many samples at the beginning, copy the remainder, and then append "Δ-samples" many new samples, that should be the current buffer. And indeed, I tried that,
and that is the case. It turns out "Δ-samples" often is 384, 512, 896. It is trivial but time consuming to determine
"Δ-samples" in a loop.
I would like to compute "Δ-samples" without performing that loop.
One would think the following would work:
(audioContext.currentTime - (the value of audioContext.currentTime during the last handler call)) / (duration of 1 sample)
I tried that (see code below where I also "stich together" the various buffers, trying to reconstruct the original buffer),
and - surprise - it works about 99.9% of the time in Chrome, and about 95% of the time in Firefox.
I also tried audioContext.getOutputTimestamp().contextTime, which does not work in Chrome, and works 9?% in Firefox.
Is there any way to find "Δ-samples" (without looking at the buffers), which works reliably?
Second question: the "reconstructed" buffer (all the buffers from callbacks stitched together) and the original sound buffer
are not exactly the same; there is some small but noticeable difference (more than the usual rounding error), and it is bigger in Firefox.
Where does that come from? As I understand the spec, those should be the same.
var soundFile = 'https://mathheadinclouds.github.io/audio/sounds/la.mp3';
var audioContext = null;
var isPlaying = false;
var sourceNode = null;
var analyser = null;
var theBuffer = null;
var reconstructedBuffer = null;
var soundRequest = null;
var loopCounter = -1;
var FFT_SIZE = 2048;
var rafID = null;
var buffers = [];
var timesSamples = [];
var timeSampleDiffs = [];
var leadingWaste = 0;
window.addEventListener('load', function() {
  soundRequest = new XMLHttpRequest();
  soundRequest.open("GET", soundFile, true);
  soundRequest.responseType = "arraybuffer";
  //soundRequest.onload = function(evt) {}
  soundRequest.send();
  var btn = document.createElement('button');
  btn.textContent = 'go';
  btn.addEventListener('click', function(evt) {
    goButtonClick(this, evt);
  });
  document.body.appendChild(btn);
});
function goButtonClick(elt, evt) {
  initAudioContext(togglePlayback);
  elt.parentElement.removeChild(elt);
}

function initAudioContext(callback) {
  audioContext = new AudioContext();
  audioContext.decodeAudioData(soundRequest.response, function(buffer) {
    theBuffer = buffer;
    callback();
  });
}

function createAnalyser() {
  analyser = audioContext.createAnalyser();
  analyser.fftSize = FFT_SIZE;
}
function startWithSourceNode() {
  sourceNode.connect(analyser);
  analyser.connect(audioContext.destination);
  sourceNode.start(0);
  isPlaying = true;
  sourceNode.addEventListener('ended', function(evt) {
    sourceNode = null;
    analyser = null;
    isPlaying = false;
    loopCounter = -1;
    window.cancelAnimationFrame(rafID);
    console.log('buffer length', theBuffer.length);
    console.log('reconstructedBuffer length', reconstructedBuffer.length);
    console.log('audio callback called counter', buffers.length);
    console.log('root mean square error', Math.sqrt(checkResult() / theBuffer.length));
    console.log('lengths of time between requestAnimationFrame callbacks, measured in audio samples:');
    console.log(timeSampleDiffs);
    console.log(
      timeSampleDiffs.filter(function(val) { return val === 384 }).length,
      timeSampleDiffs.filter(function(val) { return val === 512 }).length,
      timeSampleDiffs.filter(function(val) { return val === 640 }).length,
      timeSampleDiffs.filter(function(val) { return val === 768 }).length,
      timeSampleDiffs.filter(function(val) { return val === 896 }).length,
      '*',
      timeSampleDiffs.filter(function(val) { return val > 896 }).length,
      timeSampleDiffs.filter(function(val) { return val < 384 }).length
    );
    console.log(
      timeSampleDiffs.filter(function(val) { return val === 384 }).length +
      timeSampleDiffs.filter(function(val) { return val === 512 }).length +
      timeSampleDiffs.filter(function(val) { return val === 640 }).length +
      timeSampleDiffs.filter(function(val) { return val === 768 }).length +
      timeSampleDiffs.filter(function(val) { return val === 896 }).length
    );
  });
  myAudioCallback();
}
function togglePlayback() {
  sourceNode = audioContext.createBufferSource();
  sourceNode.buffer = theBuffer;
  createAnalyser();
  startWithSourceNode();
}

function myAudioCallback(time) {
  ++loopCounter;
  if (!buffers[loopCounter]) {
    buffers[loopCounter] = new Float32Array(FFT_SIZE);
  }
  var buf = buffers[loopCounter];
  analyser.getFloatTimeDomainData(buf);
  var now = audioContext.currentTime;
  var nowSamp = Math.round(audioContext.sampleRate * now);
  timesSamples[loopCounter] = nowSamp;
  var j, sampDiff;
  if (loopCounter === 0) {
    console.log('start sample: ', nowSamp);
    reconstructedBuffer = new Float32Array(theBuffer.length + FFT_SIZE + nowSamp);
    leadingWaste = nowSamp;
    for (j = 0; j < FFT_SIZE; j++) {
      reconstructedBuffer[nowSamp + j] = buf[j];
    }
  } else {
    sampDiff = nowSamp - timesSamples[loopCounter - 1];
    timeSampleDiffs.push(sampDiff);
    var expectedEqual = FFT_SIZE - sampDiff;
    for (j = 0; j < expectedEqual; j++) {
      if (reconstructedBuffer[nowSamp + j] !== buf[j]) {
        console.error('unexpected error', loopCounter, j);
        // debugger;
      }
    }
    for (j = expectedEqual; j < FFT_SIZE; j++) {
      reconstructedBuffer[nowSamp + j] = buf[j];
    }
    //console.log(loopCounter, nowSamp, sampDiff);
  }
  rafID = window.requestAnimationFrame(myAudioCallback);
}

function checkResult() {
  var ch0 = theBuffer.getChannelData(0);
  var ch1 = theBuffer.getChannelData(1);
  var sum = 0;
  var idxDelta = leadingWaste + FFT_SIZE;
  for (var i = 0; i < theBuffer.length; i++) {
    var samp0 = ch0[i];
    var samp1 = ch1[i];
    var samp = (samp0 + samp1) / 2;
    var check = reconstructedBuffer[i + idxDelta];
    var diff = samp - check;
    var sqDiff = diff * diff;
    sum += sqDiff;
  }
  return sum;
}
In the above snippet, I do the following. I load, with XMLHttpRequest, a 1 second mp3 audio file from my github.io page (I sing 'la' for 1 second). After it has loaded, a button is shown saying 'go', and after pressing that, the audio is played back by putting it into a bufferSource node and then calling .start on that. The bufferSource is then fed to our analyser, et cetera.
related question
I also have the snippet code on my github.io page - makes reading the console easier.
I think the AnalyserNode is not what you want in this situation. You want to grab the data and keep it synchronized with raf. Use a ScriptProcessorNode or AudioWorkletNode to grab the data. Then you'll get all the data as it comes. No problems with overlap, or missing data or anything.
Note also that the clocks for raf and audio may be different and hence things may drift over time. You'll have to compensate for that yourself if you need to.
Unfortunately there is no way to find out the exact point in time at which the data returned by an AnalyserNode was captured. But you might be on the right track with your current approach.
All the values returned by the AnalyserNode are based on the "current-time-domain-data". This is basically the internal buffer of the AnalyserNode at a certain point in time. Since the Web Audio API has a fixed render quantum of 128 samples I would expect this buffer to evolve in steps of 128 samples as well. But currentTime usually evolves in steps of 128 samples already.
Furthermore the AnalyserNode has a smoothingTimeConstant property. It is responsible for "blurring" the returned values. The default value is 0.8. For your use case you probably want to set this to 0.
EDIT: As Raymond Toy pointed out in the comments, smoothingTimeConstant only has an effect on the frequency data. Since the question is about getFloatTimeDomainData() it will have no effect on the returned values.
I hope this helps but I think it would be easier to get all the samples of your audio signal by using an AudioWorklet. It would definitely be more reliable.
I'm not really following your math, so I can't tell exactly what you had wrong, but you seem to be looking at this in too complicated a manner.
The fftSize doesn't really matter here; what you want to calculate is how many samples have passed since the last frame.
To calculate this, you just need to
Measure the time elapsed from last frame.
Divide this time by the time of a single frame.
The time of a single frame, is simply 1 / context.sampleRate.
So really all you need is (currentTime - previousTime) / (1 / sampleRate) and you'll find the index in the last frame where the data starts being repeated in the new one.
And only then, if you want the index in the new frame you'd subtract this index from the fftSize.
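The arithmetic above can be sketched as a tiny helper. The sampleRate, fftSize, and timestamps below are illustrative values, not taken from a real context:

```javascript
// Given the output timestamps of two consecutive animation frames,
// compute the index in the analyser buffer where the new data begins.
function newDataIndex(time1, time2, sampleRate, fftSize) {
  const singleSampleDur = 1 / sampleRate;                          // seconds per sample
  const samplesElapsed = Math.round((time2 - time1) / singleSampleDur);
  return fftSize - samplesElapsed;                                 // start of new data
}

// 16 ms between frames at 48 kHz => 768 new samples
const idx = newDataIndex(0, 0.016, 48000, 2048);
```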
Now for why you sometimes have gaps, it's because AudioContext.prototype.currentTime returns the timestamp of the beginning of the next block to be passed to the graph.
The one we want here is AudioContext.prototype.getOutputTimestamp().contextTime, which represents the timestamp of now, on the same base as currentTime (i.e. the creation of the context).
(function loop(){requestAnimationFrame(loop);})();
(async()=>{
const ctx = new AudioContext();
const buf = await fetch("https://upload.wikimedia.org/wikipedia/en/d/d3/Beach_Boys_-_Good_Vibrations.ogg").then(r=>r.arrayBuffer());
const aud_buf = await ctx.decodeAudioData(buf);
const source = ctx.createBufferSource();
source.buffer = aud_buf;
source.loop = true;
const analyser = ctx.createAnalyser();
const fftSize = analyser.fftSize = 2048;
source.connect( analyser );
source.start(0);
// for debugging we use two different buffers
const arr1 = new Float32Array( fftSize );
const arr2 = new Float32Array( fftSize );
const single_sample_dur = (1 / ctx.sampleRate);
console.log( 'single sample duration (ms)', single_sample_dur * 1000);
onclick = e => {
if( ctx.state === "suspended" ) {
ctx.resume();
return console.log( 'starting context, please try again' );
}
console.log( '-------------' );
requestAnimationFrame( () => {
// first frame
const time1 = ctx.getOutputTimestamp().contextTime;
analyser.getFloatTimeDomainData( arr1 );
requestAnimationFrame( () => {
// second frame
const time2 = ctx.getOutputTimestamp().contextTime;
analyser.getFloatTimeDomainData( arr2 );
const elapsed_time = time2 - time1;
console.log( 'elapsed time between two frame (ms)', elapsed_time * 1000 );
const calculated_index = fftSize - Math.round( elapsed_time / single_sample_dur );
console.log( 'calculated index of new data', calculated_index );
// for debugging we can just search for the first index where the data repeats
const real_time = fftSize - arr1.indexOf( arr2[ 0 ] );
console.log( 'real index', real_time > fftSize ? 0 : real_time );
if( calculated_index !== ( real_time > fftSize ? 0 : real_time ) ) {
console.error( 'different' );
}
});
});
};
document.body.classList.add('ready');
})().catch( console.error );
body:not(.ready) pre { display: none; }
<pre>click to record two new frames</pre>

A-Star Algorithm: Slow Implementation

I am working on an implementation of the A-Star algorithm in javascript. It works, but it takes a very long time to create a path between two points that are very close together: from (1,1) to (6,6) it takes several seconds. I would like to know what mistakes I have made in the algorithm and how to resolve them.
My code:
Node.prototype.genNeighbours = function() {
var right = new Node(this.x + 1, this.y);
var left = new Node(this.x - 1, this.y);
var top = new Node(this.x, this.y + 1);
var bottom = new Node(this.x, this.y - 1);
this.neighbours = [right, left, top, bottom];
}
AStar.prototype.getSmallestNode = function(openarr) {
var comp = 0;
for(var i = 0; i < openarr.length; i++) {
if(openarr[i].f < openarr[comp].f) comp = i
}
return comp;
}
AStar.prototype.calculateRoute = function(start, dest, arr){
var open = new Array();
var closed = new Array();
start.g = 0;
start.h = this.manhattanDistance(start.x, dest.x, start.y, dest.y);
start.f = start.h;
start.genNeighbours();
open.push(start);
while(open.length > 0) {
var currentNode = null;
this.getSmallestNode(open);
currentNode = open[0];
if(this.equals(currentNode,dest)) return currentNode;
currentNode.genNeighbours();
var iOfCurr = open.indexOf(currentNode);
open.splice(iOfCurr, 1);
closed.push(currentNode);
for(var i = 0; i < currentNode.neighbours.length; i++) {
var neighbour = currentNode.neighbours[i];
if(neighbour == null) continue;
var newG = currentNode.g + 1;
if(newG < neighbour.g) {
var iOfNeigh = open.indexOf(neighbour);
var iiOfNeigh = closed.indexOf(neighbour);
open.splice(iOfNeigh, 1);
closed.splice(iiOfNeigh,1);
}
if(open.indexOf(neighbour) == -1 && closed.indexOf(neighbour) == -1) {
neighbour.g = newG;
neighbour.h = this.manhattanDistance(neighbour.x, dest.x, neighbour.y, dest.y);
neighbour.f = neighbour.g + neighbour.h;
neighbour.parent = currentNode;
open.push(neighbour);
}
}
}
}
Edit: I've now resolved the problem. It was due to the fact that I was just calling: open.sort(); which wasn't sorting the nodes by their 'f' value. I wrote a custom function and now the algorithm runs quickly.
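The custom comparator referred to above could look like the following (a sketch). By default, Array.prototype.sort compares elements as strings, so a plain open.sort() does not order nodes by their f value:

```javascript
// Sort the open list so the node with the smallest f comes first.
var open = [{ f: 12 }, { f: 3 }, { f: 7 }];
open.sort(function (a, b) { return a.f - b.f; }); // ascending by f
```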
A few mistakes I've spotted:
Your set of open nodes is not structured in any way so that retrieving the one with the minimal distance is easy. The usual choice for this is to use a priority queue, but inserting new nodes in a sorted order (instead of open.push(neighbour)) should suffice (at first).
In your getSmallestNode function, you may start the loop at index 1.
you are calling getSmallestNode(), but not using its results at all. You're only taking currentNode = open[0]; every time (and then even searching for its position to splice it! It's 0!). With the queue, it's just currentNode = open.shift().
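Sorted insertion (a sketch) keeps open[0] as the minimum, so open.shift() replaces both getSmallestNode and the indexOf/splice pair:

```javascript
// Insert a node into `open` so the array stays sorted by ascending f.
function insertSorted(open, node) {
  var i = 0;
  while (i < open.length && open[i].f <= node.f) i++;
  open.splice(i, 0, node);
}

var open = [];
insertSorted(open, { f: 5 });
insertSorted(open, { f: 2 });
insertSorted(open, { f: 9 });
var smallest = open.shift(); // node with the minimal f
```

A binary search would make the insertion point lookup O(log n), but for small open lists the linear scan above is usually fine.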
However, the most important thing (that could have gone most wrong) is your genNeighbours() function. It creates entirely new node objects every time it is called - ones that were unheard of before and are not known to your algorithm (or its closed set). They may be at the same position in your grid as other nodes, but they're different objects (which are compared by reference, not by similarity). This means that indexOf will never find those new neighbours in the closed array, and they will get processed over and over (and over). I won't attempt to calculate the complexity of this implementation, but I'd guess it's even worse than exponential.
Typically, the A* algorithm is executed on already existing graphs. An OOP getNeighbours function would return references to the existing node objects instead of creating new ones with the same coordinates. If you need to generate the graph dynamically, you'll need a lookup structure (a two-dimensional array?) to store and retrieve already-generated nodes.
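One possible shape for such a lookup structure (a sketch; getNode lazily creates and memoizes nodes, so repeated coordinates always yield the same object and indexOf works as intended):

```javascript
// Memoized node lookup keyed by "x,y".
var nodeCache = {};
function getNode(x, y) {
  var key = x + ',' + y;
  if (!nodeCache[key]) {
    nodeCache[key] = { x: x, y: y, g: Infinity, f: Infinity, parent: null };
  }
  return nodeCache[key];
}

// Neighbours now come from the cache instead of `new Node(...)`,
// so the open/closed membership checks compare the same references.
function getNeighbours(node) {
  return [
    getNode(node.x + 1, node.y),
    getNode(node.x - 1, node.y),
    getNode(node.x, node.y + 1),
    getNode(node.x, node.y - 1)
  ];
}
```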

Garbage collection pauses; Javascript

I've been working on creating a basic 2D tiled game and have been unable to pinpoint the source of noticeable pauses lasting ~100-200ms every second or two. It seems like GC pauses: when I profiled my app, each game loop took around 4ms with a target of 60fps, which means it is running well within the required limit (16ms).
As far as I am aware, I have moved my object variables outside the functions that use them so they never go out of scope and therefore should not be collected, but I am still getting pauses.
Each game loop, the tiles are simply moved 1px to the left (to show the smoothness of game frames), and apart from that, all that is called is this draw map function (NOTE: these functions are defined as part of my engine object at startup, so is it true that they are not created and then collected each time they are called?):
engine.map.draw = function () {
engine.mapDrawMapX = 0;
engine.mapDrawMapY = 0;
// Just draw tiles within screen (and 1 extra on both x and y boundaries)
for (engine.mapDrawJ = -1; engine.mapDrawJ <= engine.screen.tilesY; engine.mapDrawJ++) {
for (engine.mapDrawI = -1; engine.mapDrawI <= engine.screen.tilesX; engine.mapDrawI++) {
//calculate map location (viewport)
engine.mapDrawMapX = engine.mapDrawI + engine.viewport.x;
engine.mapDrawMapY = engine.mapDrawJ + engine.viewport.y;
engine.mapDrawTile = (engine.currentMap[engine.mapDrawMapY] && engine.currentMap[engine.mapDrawMapY][engine.mapDrawMapX]) ? engine.currentMap[engine.mapDrawMapY][engine.mapDrawMapX] : '';
engine.tile.draw(engine.mapDrawI, engine.mapDrawJ, engine.mapDrawTile);
}
}
};
And the method called to draw each tile is:
engine.tile.drawTile = new Image(0,0);
engine.tile.draw = function (x, y, tile) {
if ('' != tile) {
engine.tile.drawTile = engine.tile.retrieve(tile); //this returns an Image() object
engine.context.drawImage(engine.tile.drawTile,
x * TILE_WIDTH + engine.viewport.offsetX,
y * TILE_HEIGHT + engine.viewport.offsetY,
TILE_WIDTH, TILE_HEIGHT);
} else {
engine.context.clearRect(x * TILE_WIDTH, y * TILE_HEIGHT, TILE_WIDTH, TILE_HEIGHT);
}
};
As per request, here are the store and retrieve functions:
engine.tile.store = function (id, img) {
var newID = engine.tile.images.length;
var tile = [id, new Image()];
tile[1] = img;
engine.tile.images[newID] = tile; // store
};
engine.tile.retrieveI;
engine.tile.retrieve = function (id) {
//var len = engine.tile.images.length;
for (engine.tile.retrieveI = 0; engine.tile.retrieveI < engine.tile.images.length; engine.tile.retrieveI++) {
if (engine.tile.images[engine.tile.retrieveI][0] == id) {
return engine.tile.images[engine.tile.retrieveI][1]; // return image
}
}
//return null;
};
