JS Canvas get pixel value very frequently - javascript

I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have a generic size: they can be circles, polygons, or anything else. So my physics engine needs to know where exactly things are, which pixels are walls and which are not. Since I don't think PIXI allows this, I create an invisible canvas where I draw all the wall images of the map. Then I use getImageData to implement an isWall(x, y) function:
function isWall(x, y) {
    return context.getImageData(x, y, 1, 1).data[3] != 0; // context of the invisible wall canvas
}
However, this is very slow (it takes up to 70% of the game's CPU time, according to Chrome's profiler). Also, since I introduced this function, I sometimes get the error "Oops, WebGL crashed" without any further details.
Is there a better method to access the value of the pixel? I thought about storing everything in a static bit array (walls have a fixed size), with 1 corresponding to a wall and 0 to a non-wall. Is it reasonable to have a 10-million-cells array in memory?

Some thoughts:
As a first check: use collision regions for all of your objects. The regions can even be defined per side depending on shape (i.e. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for hit-test bitmaps (or even 25% if your scenario allows). Our brains are not capable of detecting pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap for the shape (based on its region(s)), but convert it to a single-value typed array such as a Uint8Array holding high/low values, and re-use that instead of fetching pixels one by one via the context. Subtract the object's position and use the result as a delta into the shape's region, then hit-test the "bitmap" (a minimal sketch of this appears after these notes). If the shape rotates, transform the incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points; you need to test for your scenario).
For close-to-square shaped objects, compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius so you can skip Math.sqrt.
In some cases you can pre-calculate collision predictions before the game starts, when all object positions, directions and velocities are known (calculate the complete motion paths, find the intersections of those paths, and calculate the time/distance to those intersections). If your objects change direction due to other events along their path this will of course not work so well (or test whether re-calculating is beneficial).
I'm not sure why you would need 10 million cells stored in memory; it's doable, though, but you would need something like a quad-tree and split the array up so that looking up a pixel's state stays efficient. IMO you only need to store "bits" for the complex shapes, and you can limit that further by defining multiple regions per shape. For simpler shapes just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case - this sort of thing has to be hand-optimized for the specific scenario, so this is just a general take on it. Other factors such as high velocities, rotation, reflection etc. will affect the approach, and it can quickly become very broad. Hope this gives some input though.
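As a loose illustration of the pre-stored hit-bitmap idea from the third point above (the function names, the offscreen wall canvas and the use of the alpha channel are my assumptions, not part of the answer), you could rasterize the wall layer once into a Uint8Array and test against that instead of calling getImageData per pixel:
// Build the mask once (or whenever the walls change); this could also be
// done at half resolution as suggested above.
function buildHitMask(wallCanvas) {
    var ctx = wallCanvas.getContext('2d');
    var img = ctx.getImageData(0, 0, wallCanvas.width, wallCanvas.height);
    var mask = new Uint8Array(wallCanvas.width * wallCanvas.height);
    for (var i = 0; i < mask.length; i++) {
        mask[i] = img.data[i * 4 + 3] > 0 ? 1 : 0;   // 1 = wall, 0 = empty
    }
    return { data: mask, width: wallCanvas.width, height: wallCanvas.height };
}

function isWall(mask, x, y) {
    x |= 0; y |= 0;                                   // truncate to integers
    if (x < 0 || y < 0 || x >= mask.width || y >= mask.height) return false;
    return mask.data[y * mask.width + x] === 1;
}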

I use bit arrays to store 0 || 1 info and it works very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
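For illustration only (the linked library is more complete), a hand-rolled bit array over a Uint32Array looks roughly like this; 10 million bits pack into about 1.25 MB:
function BitArray(size) {
    this.words = new Uint32Array(Math.ceil(size / 32));
}
BitArray.prototype.set = function(i, value) {
    if (value) this.words[i >>> 5] |= (1 << (i & 31));
    else       this.words[i >>> 5] &= ~(1 << (i & 31));
};
BitArray.prototype.get = function(i) {
    return (this.words[i >>> 5] >>> (i & 31)) & 1;
};

var walls = new BitArray(10 * 1000 * 1000);
walls.set(12345, 1);
walls.get(12345); // 1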
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required--even if information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns={}
// an object containing all vertical wall runs
var yRuns={}
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y=50;
var x=185;
if(!xRuns[y]){ xRuns[y]=[]; }
xRuns[y].push({start:100,end:185});
if(!yRuns[x]){ yRuns[x]=[]; }
yRuns[x].push({start:50,end:225});
Then you can quickly test an [x,y] against the wall runs like this (warning untested code!):
function isWall(x,y){
    if(xRuns[y]){
        var a=xRuns[y];
        var i=a.length;
        while(i--){
            var run=a[i];
            if(x>=run.start && x<=run.end){return(true);}
        }
    }
    if(yRuns[x]){
        var a=yRuns[x];
        var i=a.length;
        while(i--){
            var run=a[i];
            if(y>=run.start && y<=run.end){return(true);}
        }
    }
    return(false);
}
This should require very few tests because the x & y exactly specify which array of xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model because there is overhead getting to the specified element of the flat model. You'd have to perf test using both methods.
The wall-run method would likely require much less memory.
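If you do perf test, a rough harness might look like this (isWallFlat and isWallRuns are placeholder names for the two implementations, and the 1000x1000 coordinate range is arbitrary):
function timeIt(label, fn, iterations) {
    var t0 = performance.now();
    for (var i = 0; i < iterations; i++) {
        fn((Math.random() * 1000) | 0, (Math.random() * 1000) | 0);
    }
    console.log(label + ': ' + (performance.now() - t0).toFixed(1) + ' ms');
}

timeIt('flat bit array', isWallFlat, 1000000);
timeIt('wall runs', isWallRuns, 1000000);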
Hope this helps...Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)

Related

Higher precision in JavaScript

I am trying to calculate with higher precision numbers in JavaScript to be able to zoom in more on the Mandelbrot set.
(after a certain amount of zooming the results get "pixelated", because of the low precision)
I have looked at this question, so I tried using a library such as BigNumber but it was unusably slow.
I have been trying to figure this out for a while and I think the only way is to use a slow library.
Is there a faster library?
Is there any other way to calculate with higher precision numbers?
Is there any other way to be able to zoom in more on the Mandelbrot set?
Probably unnecessary to add this code, but this is the function I use to check if a point is in the Mandelbrot set.
function mandelbrot(x, y, it) {
    var z = [0, 0]
    var c1 = [x, y]
    for (var i = 0; i < it; i++) {
        z = [z[0]*z[0] - z[1]*z[1] + c1[0], 2*z[0]*z[1] + c1[1]]
        if (z[0]*z[0] + z[1]*z[1] > 4) { // |z| > 2: the point escapes
            break
        }
    }
    return i
}
The key is not so much the raw numeric precision of JavaScript numbers (though that of course has its effects), but the way the basic Mandelbrot "escape" test works, specifically the threshold iteration counts. To compute whether a point in the complex plane is in or out of the set, you iterate on the formula (which I don't exactly remember and don't feel like looking up) for the point over and over again until the point obviously diverges (the formula "escapes" from the origin of the complex plane by a lot) or doesn't before the iteration threshold is reached.
The iteration threshold when rendering a view of the set that covers most of it around the origin of the complex plane (about 2 units in all directions from the origin) can be as low as 500 to get a pretty good rendering of the whole set at a reasonable magnification on a modern computer. As you zoom in, however, the iteration threshold needs to increase in inverse proportion to the size of the "window" onto the complex plane. If it doesn't, then the "escape" test doesn't work with sufficient accuracy to delineate fine details at higher magnifications.
The formula I used in my JavaScript implementation is
maxIterations = 400 * Math.log(1/dz0)
where dz0 is (arbitrarily) the width of the window onto the plane. As one zooms into a view of the set (well, the "edge" of the set, where things are interesting), dz0 gets pretty small so the iteration threshold gets up into the thousands.
The iteration count, of course, for points that do "escape" (that is, points that are not part of the Mandelbrot set) can be used as a sort of "distance" measurement. A point that escapes within a few iterations is clearly not "close to" the set, while a point that escapes only after 2000 iterations is much closer. That distance quality can be used in various ways in visualizations, either to provide a color value (common) or possibly a z-axis value if the set is being rendered as a 3D view (with the set as a sort of "mesa" in three dimensions and the borders being a vertical "cliff" off the sides).
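Putting the two ideas together, a per-pixel render step might look roughly like this (the 500-iteration floor, the colour mapping and the function names are arbitrary choices, not from the answer above):
function renderPixel(ctx, px, py, x, y, dz0) {
    // dz0 is the width of the current window onto the complex plane
    var maxIterations = Math.max(500, Math.floor(400 * Math.log(1 / dz0)));
    var count = mandelbrot(x, y, maxIterations);  // the function from the question
    if (count >= maxIterations) {
        ctx.fillStyle = 'black';                  // treated as inside the set
    } else {
        // late escapes are "closer" to the set; map the count to a hue
        ctx.fillStyle = 'hsl(' + (360 * count / maxIterations) + ',100%,50%)';
    }
    ctx.fillRect(px, py, 1, 1);
}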

3D Grid for multiple shapes

A few months ago I made a small terrain generator, like Minecraft, for a school project.
The way I did this was by using multiple chunks. Each chunk contained a 3-dimensional array that stored the blocks.
Every position in this array corresponded with the position of the block it contained.
blocks[x][y][z] = new Block();
Now I would like to add different sizes of blocks. However, I can't do that with the way I am storing the blocks right now, because bigger blocks would have to be spread over multiple positions in the 3-dimensional array.
An example of a game with different sizes of blocks (and different shapes) is LEGO Worlds. How does a game like this store all these little blocks?
I hope someone can help me with this.
The language I am using is Javascript in combination with WebGL.
Thanks in advance!
In my experience there are a few different ways of tackling an issue like this, but the one I'd recommend would depend on the amount of time you have to work on this and the scope (how big) you wanted to make this game.
Your Current Approach
At the moment I think you're using what most people would consider the most straightforward approach, storing the voxels in a 3D grid.
But two problems you seem to be having are that there isn't an obvious way to create blocks bigger than 1x1, and that a 3D grid for a world space is fairly inefficient in terms of memory usage (an array has memory allocated for every cell, including empty space; JavaScript is no different).
An Alternative Approach
An alternative to a 3D array is a different data structure, the sparse voxel octree (SVO).
Put simply, this is a tree data structure that works by subdividing a region of space until everything has been stored.
The 2D form, where a square subdivides into four smaller quadrants, is called a quadtree; the 3D equivalent divides a cube into eight octants and is called an octree. This approach is generally preferable when possible, as it's much more efficient: the tree only occupies more memory when it's absolutely essential, and it can also be packed into a 1D array (technically a 3D array can be too).
A common tactic in block-based games is that when a region of the same kind of voxel fits into one larger node of the tree, subdivision simply stops there; there's no reason to go deeper if all the data is the same (a minimal sketch of this follows below).
The other optimization is the "sparse" part: regions of empty space (air) are simply dropped, since empty space doesn't do anything special and its location can be inferred.
[Figures: sparse voxel octree and Z-order curve illustrations]
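To make the subdivision idea concrete, here is a very small octree sketch (purely illustrative; the names are mine, and a production SVO would pack nodes into a flat typed array rather than use objects and recursion):
class OctreeNode {
    constructor() {
        this.blockId = 0;      // 0 = air; only meaningful on leaves
        this.children = null;  // null = leaf, otherwise an array of 8 child nodes
    }
    // size is the edge length of the cubic region this node covers (a power of two)
    get(x, y, z, size) {
        if (!this.children) return this.blockId;
        var half = size / 2;
        var index = (x >= half ? 1 : 0) | (y >= half ? 2 : 0) | (z >= half ? 4 : 0);
        return this.children[index].get(x % half, y % half, z % half, half);
    }
    set(x, y, z, size, blockId) {
        if (size === 1) { this.blockId = blockId; return; }
        if (!this.children) {
            // split a uniform leaf into eight children that inherit its id
            this.children = [];
            for (var i = 0; i < 8; i++) {
                var child = new OctreeNode();
                child.blockId = this.blockId;
                this.children.push(child);
            }
        }
        var half = size / 2;
        var index = (x >= half ? 1 : 0) | (y >= half ? 2 : 0) | (z >= half ? 4 : 0);
        this.children[index].set(x % half, y % half, z % half, half, blockId);
    }
}

var root = new OctreeNode();   // covering a 16x16x16 region here
root.set(3, 0, 5, 16, 2);
root.get(3, 0, 5, 16);         // 2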
Recommended Approach
Unless you have a few months to complete your game and you're at university, I seriously wouldn't recommend an SVO (though reading up on it could impress any teachers you have). Instead I'd recommend the approach that Minecraft visibly appears to take: e.g. a door is 1x2 but blocks can only be 1x1, so just make it two blocks.
In the example of a door you would have four unique blocks in total, two for the upper and lower half, and two variations of each being opened or closed.
E.g.:
var cubeProgram; // shader program
var cubeVBO; // vertex buffer (I recommend combining vertex & UV coords)
var gl; // rendering context
// Preset list of block IDs
var BLOCK_TYPES = {
    DOOR_LOWER_OPEN: 0,
    DOOR_UPPER_OPEN: 1,
    DOOR_LOWER_CLOSED: 2,
    DOOR_UPPER_CLOSED: 3,
}
var BLOCK_MESHES = {
    GENERIC_VBO: null,
    DOOR_UPPER_VBO: null,
    DOOR_LOWER_VBO: null
}
// Declare a Door class using ES6 syntax
class Door {
    // Assume X & Y are the lower half of the door
    constructor(x, y, map) {
        if (y - 1 < 0) {
            console.error("Error: Top half of the door goes outside the map");
            return;
        }
        this.x = x;
        this.y = y;
        map[x][y]   = BLOCK_TYPES.DOOR_LOWER_OPEN;
        map[x][y-1] = BLOCK_TYPES.DOOR_UPPER_OPEN;
    }
}

Simulate an infinite number of objects

In this example we can move inside a field of spheres, but only within certain limits. I want to be able to move infinitely among them. How can I do that?
The trick is to reuse the spheres that are behind the camera and put them back in front of it. Look at how it is done in this example: the programmer knows that the user will keep moving in the same direction, so he removes the trees once they pass a certain position.
If you use something like the example you quoted, you cannot know which direction the user will take, so you can use the same trick but you have to code it another way. The most obvious approach is to check the distances to all the spheres regularly whenever the user moves: if a sphere is too far behind the camera, you mirror it so it ends up in front of the camera, behind the fog (rough sketch below).
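A rough sketch of that mirroring check (this assumes three.js Vector3 positions; the fogDistance value and the function names are mine):
var fogDistance = 500;   // spheres further than this are hidden by the fog

function recycleSpheres(camera, spheres) {
    var forward = new THREE.Vector3();
    camera.getWorldDirection(forward);
    spheres.forEach(function(sphere) {
        var toSphere = sphere.position.clone().sub(camera.position);
        // "too far behind": project the offset onto the view direction
        if (toSphere.dot(forward) < -fogDistance) {
            // mirror the sphere through the camera so it reappears ahead,
            // still hidden behind the fog
            sphere.position.sub(toSphere.multiplyScalar(2));
        }
    });
}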
'Regularly' can mean two things depending on the actual number of spheres in your scene:
If you have a small scene and few spheres, you can check those distances in the render loop. Doing it 60 times per second is neither cheap nor necessary, but it can be a first coding step.
The better way would be to use a web worker: you send the positions of the camera and of the spheres, let the worker compute everything in its own thread, and have it send instructions back: 'move those spheres to those positions'. Checking once every second is more reasonable for the three.js example, but it's up to you depending on your scene (see the sketch below).
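A sketch of that worker split (the message shapes and file name are my own invention):
// --- main thread ---
var worker = new Worker('sphere-worker.js');
setInterval(function() {
    worker.postMessage({
        camera:  { x: camera.position.x, y: camera.position.y, z: camera.position.z },
        spheres: spheres.map(function(s) {
            return { x: s.position.x, y: s.position.y, z: s.position.z };
        })
    });
}, 1000);                                   // once per second, as suggested above
worker.onmessage = function(e) {
    e.data.moves.forEach(function(move) {
        spheres[move.index].position.set(move.x, move.y, move.z);
    });
};

// --- sphere-worker.js ---
onmessage = function(e) {
    var cam = e.data.camera, moves = [];
    e.data.spheres.forEach(function(p, index) {
        // decide which spheres are too far away and where to put them
        // (e.g. the mirroring logic above), then send the instructions back
        var dx = p.x - cam.x, dy = p.y - cam.y, dz = p.z - cam.z;
        if (dx * dx + dy * dy + dz * dz > 500 * 500) {
            moves.push({ index: index, x: cam.x - dx, y: cam.y - dy, z: cam.z - dz });
        }
    });
    postMessage({ moves: moves });
};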
NOTE: if you have a lot of spheres (or whatever meshes you use instead), say more than 20-30, having a separate mesh for each of them will hurt performance. With the few trees in the example I linked it is OK, but with more objects and/or a heavier scene,
think about merging them all into a single geometry. You can tell which sphere is where by deducing it from the vertex indices, or by adding an attribute that identifies each sphere.
This will also impact the worker delay: it will have more to compute, so it will need more time.
NOTE 2: Note 1 would of course remove the level-of-detail behaviour that the example aims to illustrate :) (unless you also implement your own while checking the distances of the spheres...).
If you want to have an illusion of infinite world then you could:
Break your world space into regions (for example cubes).
Detect which region you are currently in.
Make sure you have objects (spheres) in neighbouring regions. If some regions are empty, fix that.
Clear regions which are not needed anymore.
For this you might want to have some class like this:
class Region {
    constructor(center, radius) {
        this.isEmpty = true;
        this.center = center;    // e.g. a THREE.Vector3
        this.radius = radius;    // or 'range'
        this.spheres = null;     // storage of your objects
    }
    generateObjects(params) { /* Perlin noise might be helpful here */ }
    removeObjects() { }
}
and do something like this periodically:
function updateRegions() {
    computeClosestGridCoord(myPosition);   // which is the center of your current region
    lookForNeighbourRegions(regionsArray); // and add new Regions if needed
    deleteOldRegionsStuff(regionsArray);
}

Multiplayer Game - Client Interpolation Calculation?

I am creating a multiplayer game using Socket.IO in JavaScript. The game works perfectly at the moment aside from the client interpolation. Right now, when I get a packet from the server, I simply set the client's position to the position sent by the server. Here is what I have tried to do:
getServerInfo(packet) {
var otherPlayer = players[packet.id]; // GET PLAYER
otherPlayer.setTarget(packet.x, packet.y); // SET TARGET TO MOVE TO
...
}
So I set the player's target position. Then, in the player's update method, I simply did this:
var update = function(delta) {
    if (x != target.x || y != target.y) {
        var direction = Math.atan2((target.y - y), (target.x - x));
        x += (delta * speed) * Math.cos(direction);
        y += (delta * speed) * Math.sin(direction);
        var dist = Math.sqrt((x - target.x) * (x - target.x) +
                             (y - target.y) * (y - target.y));
        if (dist < threshold) {
            x = target.x;
            y = target.y;
        }
    }
}
This basically moves the player in the direction of the target at a fixed speed. The issue is that the player arrives at the target either before or after the next information arrives from the server.
Edit: I have just read Gabriel Gambetta's article on this subject, and he mentions this:
Say you receive position data at t = 1000. You already had received data at t = 900, so you know where the player was at t = 900 and t = 1000. So, from t = 1000 and t = 1100, you show what the other player did from t = 900 to t = 1000. This way you’re always showing the user actual movement data, except you’re showing it 100 ms “late”.
This again assumed that it is exactly 100ms late. If your ping varies a lot, this will not work.
Would you be able to provide some pseudocode so I can get an idea of how to do this?
I have found this question online here. But none of the answers provide an example of how to do it, only suggestions.
I'm completely fresh to multiplayer game client/server architecture and algorithms, however in reading this question the first thing that came to mind was implementing second-order (or higher) Kalman filters on the relevant variables for each player.
Specifically, the Kalman prediction steps, which are much better than simple dead reckoning. Also the fact that Kalman prediction and update steps work somewhat as weighted or optimal interpolators. And furthermore, the dynamics of players could be encoded directly rather than playing around with abstracted parameterizations used in other methods.
Meanwhile, a quick search led me to this:
An improvement of dead reckoning algorithm using kalman filter for minimizing network traffic of 3d on-line games
The abstract:
Online 3D games require efficient and fast user interaction support over network, and the networking support is usually implemented using network game engine. The network game engine should minimize the network delay and mitigate the network traffic congestion. To minimize the network traffic between game users, a client-based prediction (dead reckoning algorithm) is used. Each game entity uses the algorithm to estimates its own movement (also other entities' movement), and when the estimation error is over threshold, the entity sends the UPDATE (including position, velocity, etc) packet to other entities. As the estimation accuracy is increased, each entity can minimize the transmission of the UPDATE packet. To improve the prediction accuracy of dead reckoning algorithm, we propose the Kalman filter based dead reckoning approach. To show real demonstration, we use a popular network game (BZFlag), and improve the game optimized dead reckoning algorithm using Kalman filter. We improve the prediction accuracy and reduce the network traffic by 12 percents.
Might seem wordy and like a whole new problem to learn what it's all about... and discrete state-space for that matter.
Briefly, I'd say a Kalman filter is a filter that takes into account uncertainty, which is what you've got here. It normally works on measurement uncertainty at a known sample rate, but it could be re-tooled to work with uncertainty in measurement period/phase.
The idea being that in lieu of a proper measurement, you'd simply update with the Kalman predictions. The tactic is similar to target tracking applications.
I was recommended them on stackexchange myself - took about a week to figure out how they were relevant but I've since implemented them successfully in vision processing work.
(...it's making me want to experiment with your problem now !)
As I wanted more direct control over the filter, I copied someone else's roll-your-own implementation of a Kalman filter in matlab into openCV (in C++):
void Marker::kalmanPredict(){
//Prediction for state vector
Xx = A * Xx;
Xy = A * Xy;
//and covariance
Px = A * Px * A.t() + Q;
Py = A * Py * A.t() + Q;
}
void Marker::kalmanUpdate(Point2d& measuredPosition){
//Kalman gain K:
Mat tempINVx = Mat(2, 2, CV_64F);
Mat tempINVy = Mat(2, 2, CV_64F);
tempINVx = C*Px*C.t() + R;
tempINVy = C*Py*C.t() + R;
Kx = Px*C.t() * tempINVx.inv(DECOMP_CHOLESKY);
Ky = Py*C.t() * tempINVy.inv(DECOMP_CHOLESKY);
//Estimate of velocity
//units are pixels.s^-1
Point2d measuredVelocity = Point2d(measuredPosition.x - Xx.at<double>(0), measuredPosition.y - Xy.at<double>(0));
Mat zx = (Mat_<double>(2,1) << measuredPosition.x, measuredVelocity.x);
Mat zy = (Mat_<double>(2,1) << measuredPosition.y, measuredVelocity.y);
//kalman correction based on position measurement and velocity estimate:
Xx = Xx + Kx*(zx - C*Xx);
Xy = Xy + Ky*(zy - C*Xy);
//and covariance again
Px = Px - Kx*C*Px;
Py = Py - Ky*C*Py;
}
I don't expect you to be able to use this directly though, but if anyone comes across it and understands what 'A', 'P', 'Q' and 'C' are in state-space (hint hint, state-space understanding is a prerequisite here), they'll likely see how to connect the dots.
(both matlab and openCV have their own Kalman filter implementations included by the way...)
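Since the question is in JavaScript, here is a stripped-down 1D constant-velocity version of the same predict/update cycle (run one filter per axis, like the Xx/Xy pair above; the q and r noise values are tuning knobs I made up, not values from the answer):
function Kalman1D(q, r) {
    this.x = [0, 0];              // state: [position, velocity]
    this.P = [[1, 0], [0, 1]];    // state covariance
    this.q = q;                   // process noise
    this.r = r;                   // measurement noise
}
// call every frame; returns the predicted position
Kalman1D.prototype.predict = function(dt) {
    // x = A x, with A = [[1, dt], [0, 1]]
    this.x = [this.x[0] + dt * this.x[1], this.x[1]];
    // P = A P A^T + Q (Q applied as a simple diagonal here)
    var P = this.P, q = this.q;
    this.P = [
        [P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q, P[0][1] + dt * P[1][1]],
        [P[1][0] + dt * P[1][1], P[1][1] + q]
    ];
    return this.x[0];
};
// call whenever a position measurement arrives from the server
Kalman1D.prototype.update = function(measuredPos) {
    // measurement matrix H = [1, 0], so the innovation is a scalar
    var S = this.P[0][0] + this.r;
    var K = [this.P[0][0] / S, this.P[1][0] / S];   // Kalman gain
    var innovation = measuredPos - this.x[0];
    this.x = [this.x[0] + K[0] * innovation, this.x[1] + K[1] * innovation];
    this.P = [
        [(1 - K[0]) * this.P[0][0], (1 - K[0]) * this.P[0][1]],
        [this.P[1][0] - K[1] * this.P[0][0], this.P[1][1] - K[1] * this.P[0][1]]
    ];
};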
This question is being left open with a request for more detail, so I’ll try to fill in the gaps of Patrick Klug’s answer. He suggested, reasonably, that you transmit both the current position and the current velocity at each time point.
Since two position and two velocity measurements give a system of four equations, we can solve for four unknowns, namely the coefficients of a cubic spline (a, b, c and d). In order for this spline to be smooth, the first and second derivatives (velocity and acceleration) should be equal at the endpoints. There are two standard, equivalent ways of calculating this: Hermite splines (https://en.wikipedia.org/wiki/Cubic_Hermite_spline) and Bézier splines (http://mathfaculty.fullerton.edu/mathews/n2003/BezierCurveMod.html). For a two-dimensional problem such as this, I suggested separating variables and finding splines for both x and y based on the tangent data in the updates, which is called a clamped piecewise cubic Hermite spline. This has several advantages over the splines in the link above, such as cardinal splines, which do not take advantage of that information. The locations and velocities at the control points will match, you can interpolate up to the last update rather than the one before, and you can apply this method just as easily to polar coordinates if the game world is inherently polar like Space wars. (Another approach sometimes used for periodic data is to perform a FFT and do trigonometric interpolation in the frequency domain, but that doesn't sound applicable here.)
What originally appeared here was a derivation of the Hermite spline using linear algebra in a somewhat unusual way that (unless I made a mistake entering it) would have worked. However, the comments convinced me it would be more helpful to give the standard names for what I was talking about. If you are interested in the mathematical details of how and why this works, this is a better explanation: https://math.stackexchange.com/questions/62360/natural-cubic-splines-vs-piecewise-hermite-splines
A better algorithm than the one I gave is to represent the sample points and first derivatives as a tridiagonal matrix that, multiplied by a column vector of coefficients, produces the boundary conditions, and solve for the coefficients. An alternative is to add control points to a Bézier curve where the tangent lines at the sampled points intersect and on the tangent lines at the endpoints. Both methods produce the same, unique, smooth cubic spline.
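For reference, evaluating one clamped Hermite segment per coordinate is only a few lines (t runs from 0 to 1 over the interval between the two updates, and the velocities must be scaled by the interval length; this is just the standard Hermite basis, not code from the answer):
function hermite(p0, v0, p1, v1, t) {
    var t2 = t * t, t3 = t2 * t;
    return (2 * t3 - 3 * t2 + 1) * p0
         + (t3 - 2 * t2 + t) * v0
         + (-2 * t3 + 3 * t2) * p1
         + (t3 - t2) * v1;
}

// interpolate x and y separately, e.g.:
// x = hermite(prev.x, prev.vx * dt, next.x, next.vx * dt, t);
// y = hermite(prev.y, prev.vy * dt, next.y, next.vy * dt, t);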
One situation you might be able to avoid if you were choosing the points rather than receiving updates is if you get a bad sample of points. You can’t, for example, intersect parallel tangent lines, or tell what happened if it’s back in the same place with a nonzero first derivative. You’d never choose those points for a piecewise spline, but you might get them if an object made a swerve between updates.
If my computer weren’t broken right now, here is where I would put fancy graphics like the ones I posted to TeX.SX. Unfortunately, I have to bow out of those for now.
Is this better than straight linear interpolation? Definitely: linear interpolation will get you straight-line paths, quadratic splines won't be smooth, and higher-order polynomials will likely be overfitted. Cubic splines are the standard way to solve that problem.
Are they better for extrapolation, where you try to predict where a game object will go? Possibly not: this way, you’re assuming that a player who’s accelerating will keep accelerating, rather than that they will immediately stop accelerating, and that could put you much further off. However, the time between updates should be short, so you shouldn’t get too far off.
Finally, you might make things a lot easier on yourself by programming in a bit more conservation of momentum. If there’s a limit to how quickly objects can turn, accelerate or decelerate, their paths will not be able to diverge as much from where you predict based on their last positions and velocities.
Depending on your game you might want to prefer smooth player movement over super-precise location. If so, then I'd suggest to aim for 'eventual consistency'. I think your idea of keeping 'real' and 'simulated' data-points is a good one. Just make sure that from time to time you force the simulated to converge with the real, otherwise the gap will get too big.
Regarding your concern about different movement speed I'd suggest you include the current velocity and direction of the player in addition to the current position in your packet. This will enable you to more smoothly predict where the player would be based on your own framerate/update timing.
Essentially you would calculate the current simulated velocity and direction taking into account the last simulated location and velocity as well as the last known location and velocity (putting more emphasis on the latter), and then simulate the new position based on that (rough sketch below).
If the gap between simulated and known gets too big, just put more emphasis on the known location and the otherPlayer will catch up quicker.
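A rough sketch of that blend (the weights and the snap threshold are arbitrary starting points, not values from the answer):
function smoothTowards(simulated, known, delta) {
    // favour the server's last known velocity over the locally simulated one
    var vx = 0.25 * simulated.vx + 0.75 * known.vx;
    var vy = 0.25 * simulated.vy + 0.75 * known.vy;

    // advance the simulated position with the blended velocity
    simulated.x += vx * delta;
    simulated.y += vy * delta;

    // if the gap has grown too large, pull harder toward the known position
    var dx = known.x - simulated.x, dy = known.y - simulated.y;
    if (Math.sqrt(dx * dx + dy * dy) > 50) {
        simulated.x += dx * 0.2;   // close 20% of the gap this frame
        simulated.y += dy * 0.2;
    }
    simulated.vx = vx;
    simulated.vy = vy;
}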

Organizational system for moving tiles in grid-based level

Conceptual problem here.
I have an array which will be rendered to display tiles in a grid. Now, I want these tiles to be able to move - but not just around in the grid. Per-pixel. It does need to be a grid, because I need to shift whole rows of tiles, and be able to access tiles by their position, but it also needs to have per-pixel adjustment, while still keeping the "grid" up to date. Picture a platforming game with moving tiles.
There are a few organizational systems with which I could do this, and I'll outline a few I thought of as well as their pros and cons (XY-style) in case it helps you understand what I'm saying. I'm asking if you think one of these is best, or think of a better way.
One way would be to place objects in the array with the properties xOffset and yOffset. I would then render them in their tile position plus their offset. (x * tileWidth + tile.xOffset). Pros: maintains vanilla grid-system. Cons: Then I would have to adjust each tile to its actual grid location once it moved. Also, the "grid" position would become a bit confused as tiles are moving. (Side note: If you think this is a good way, how would I handle collisions? It wouldn't be as simple as player.x / tileWidth anymore.)
Another would be to place lots of objects with xs and ys and render them all. Pros: Simple. Cons: Then I would have to check each one to see if it's in the row I want to shift before doing so. Also, collisions could not simply check the one tile a player is on, they would have to check all entities.
Another I thought of would be a sort of combination of the two. Tiles would be in the original array and get render as x * tileWidth normal tiles. Then, when they move, they are deleted from the grid and placed in a separate array for moving tiles, where their x and y are stored. Then the collisions would check the grid the fast way and the moving tiles the slow way.
Thanks!
PS: I'm using JavaScript, but it shouldn't be relevant.
PPS: Forgive me if it's not Stack Overflow material. This was the best fit, I thought. It's not exactly code review, but it's not specific to GameDev. Also I needed a tag, so I picked one somewhat relevant. If you guys recommend something else I'll be happy to switch it right over and delete this one.
PPPS: Sorry if repost, I have no idea how to google this question. I tried to no avail.
(Side note on handling collisions: Your obstacles are moving. Therefore, comparing the player's position to the grid alone is no longer sufficient. Furthermore, you will always have to draw based on the object's current position. Both of these are unavoidable, but also not very expensive.)
You want the objects to be easy to look up, while still being able to draw them efficiently and, more importantly, quickly checking for collisions. This is easy to do: store the objects in the array, and for the X and Y positions keep indexes which allow for 1) efficiently querying ranges and 2) efficiently moving elements left and right (as their x and y positions change).
If your objects are going to be moving fairly slowly (that is, on any one timestep, it is unlikely for an object to pass very many other objects), your indexes can be arrays! When an object moves past another object (in X, for instance), you just need to check its neighbor in the X index array to see if they should swap places. Keep doing this until it does not need to swap. If they're moving slowly, the amortized cost of this will be very close to O(1). Querying ranges is very easy in an array; binary search for the first greater element, and also for the last smaller element.
Summary/Implementation:
(Fiddle at https://jsfiddle.net/LsfuLo9p/3/)
Initialize (O(n) time):
Make an array of your objects called Objs.
Make an array of (x position, reference to Objs) pairs, sorted in X, called Xs.
Make an array of (y position, reference to Objs) pairs, sorted in Y, called Ys.
For every element in Xs and Ys, tell the object in Objs its index in those arrays (so that Xs has indexes to Objs, and Objs has indexes to Xs.)
When an object moves up in Y (O(1) expected time per moving object, given that they're moving slowly):
Using Objs, find its index in Ys.
Compare it to the next highest value in Ys. If it's greater, swap them in Ys (and update their Y indices in Objs).
Repeat step 2 until you don't swap. (A condensed sketch of this appears after the summary.)
(It's easy to apply this to the other three directions.)
When the player moves (O(log n + k²) time, where k is the maximum number of items that can fit in a row or column):
Look in Xs for small, the smallest X above Player.X, and large, the largest X+width below Player.X. If large ≤ small, return the range [large, small].
Look in Ys for small, the smallest Y above Player.Y, and large, the largest Y+height below Player.Y. If large ≤ small, return the range [large, small].
If there are any intersections between these two ranges, then the player is colliding with that object.
(You can improve the time of this to O(log n + k) by using a hashmap to check for set intersections.)
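A condensed sketch of the neighbour-swap maintenance for the X index (the fiddle linked above has the full version; the names and the obj.xi back-reference are mine):
// Each entry in xs is { x: number, obj: object }, kept sorted by x, and each
// object stores its own index into xs as obj.xi.
function objectMovedInX(xs, obj) {
    var i = obj.xi;
    xs[i].x = obj.x;                       // refresh the stored position
    // bubble right while the right neighbour has a smaller x
    while (i + 1 < xs.length && xs[i + 1].x < xs[i].x) { swapEntries(xs, i, i + 1); i++; }
    // bubble left while the left neighbour has a larger x
    while (i > 0 && xs[i - 1].x > xs[i].x) { swapEntries(xs, i, i - 1); i--; }
}

function swapEntries(xs, a, b) {
    var tmp = xs[a]; xs[a] = xs[b]; xs[b] = tmp;
    xs[a].obj.xi = a;                      // keep the back-references in sync
    xs[b].obj.xi = b;
}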
