How to optimize 2 dimensional matrix reading speed? - javascript

I'm working on a 2D game. I have the game map saved in a JS object {} called gameMap. My problem is that reading an item from the matrix takes too long. For collision detection I usually have to check 10 or 20 items of the map matrix, which takes around 1 ms, and with 10 characters on screen collision detection becomes the bottleneck of the app, taking 10 ms of the 16 ms each frame should last. The times also scale up as the map gets bigger.
Let's say the map has 1000 x 1000 items. Right now, if I want to check what is at position (-100,200) I check gameMap['-100'][200]. My idea is to divide the map into quadrants that each group 100 x 100 items. So to check (-100,200) I would test gameMap[quadrantName][-100][200]. This would mean that while gameMap would stay about the same size, each lookup would work with far fewer items, so read speed would hopefully degrade much more slowly as the map grows. Does anyone know if this would make reading faster? What else can I do to improve reading speed?

First of all, a 10000x10000 array of bytes will consume 100MB! Do you really need such a large array? Perhaps you'd be better off storing just the coordinates of all your elements...
As to your question - you could convert your 2D array to a 1D array and access all cells via
gameMap[y * 10000 + x]
where 10000 would be the 'width' of your map. So there would be no need to divide the map into quadrants.
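As a minimal sketch of that layout (the half-width offset trick for negative coordinates is an assumption on my part, since the answer only gives the indexing formula), the whole map can live in one contiguous typed array:

const MAP_W = 1000, MAP_H = 1000;             // map dimensions from the question
const tiles = new Uint8Array(MAP_W * MAP_H);  // one byte per tile, stored contiguously

function tileAt(x, y) {
  // shift signed world coordinates like (-100, 200) into the 0..MAP_W-1 / 0..MAP_H-1 range
  const col = x + MAP_W / 2;
  const row = y + MAP_H / 2;
  return tiles[row * MAP_W + col];            // same y * width + x indexing as above
}

A typed array also avoids the string-key lookups that gameMap['-100'][200] forces on the engine.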

Related

Find the largest rectangle that fits inside a polygon

I need to find the largest rectangle that can fit inside any polygon.
What I tried is dividing the SVG into a 2D grid and looping over the 2D array to see if the current grid cell intersects with the polygon, creating a new 2D binary array where an intersection is 1, else 0.
Now I need to find the largest rectangle in that 2D array AND, more importantly, its location.
As an example:
If the 2D array is like this, I need to find the largest rect in that array and its x1,y1 (start i,j) and x2,y2 (end i,j).
Well, you can brute force the location and scan for the size, which will be O(n^6) if n is the average side length of your map in pixels ...
The location might be sped up by search (accepting not strictly sorted data), for example like this:
How approximation search works
which would lead to ~O(n^4.log^2(n)). But beware: the search must be configured properly in order not to skip a solution ... The size search can be improved too by using a similar technique to the one I used here:
2D OBB
Just use a different metric, so I would create LUT tables of start and end positions for each x and y (4 LUT tables), which will speed up the search, leading to ~O(n^2.log^2(n)), while creation of the LUTs is O(n^2). Btw the same LUTs I sometimes use in OCR, like here (last 2 images):
OCR and character similarity
Now the problem with this approach is that it cannot handle concave polygons correctly, as there might be more than just 2 edges per x,y. So to remedy that you would need more LUTs and use them based on the position in the polygon (divide the polygon into "convex" areas).
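For illustration only, building those per-row and per-column start/end LUTs from the question's binary grid could look roughly like this in JavaScript (names are made up; the convex-region caveat above still applies):

function buildLUTs(grid) {                    // grid[y][x] is 1 inside the polygon, 0 outside
  const h = grid.length, w = grid[0].length;
  const rowStart = new Array(h).fill(-1), rowEnd = new Array(h).fill(-1);
  const colStart = new Array(w).fill(-1), colEnd = new Array(w).fill(-1);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (!grid[y][x]) continue;
      if (rowStart[y] < 0) rowStart[y] = x;   // first filled column in this row
      rowEnd[y] = x;                          // last filled column seen so far in this row
      if (colStart[x] < 0) colStart[x] = y;   // first filled row in this column
      colEnd[x] = y;                          // last filled row seen so far in this column
    }
  }
  return { rowStart, rowEnd, colStart, colEnd };
}

Building them is the O(n^2) pass mentioned above; afterwards the grow loops can read row/column extents in O(1).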
So putting all these together would look something like this:
approx loop (center x) // ~O(log(n))
approx loop (center y) // ~O(log(n))
grow loop (square size to max using LUT) // O(n)
{
grow loop (x size to max while decreasing original square y size) // O(n)
grow loop (y size to max while decreasing original square x size) // O(n)
use bigger from the above 2 rectangles
}
Just do not forget to use area of polygon / area of rectangle as the approximation error value. This algorithm results in ~O(n^2.log^2(n)), which is not great but still doable.
Another option is to convert your polygon to squares and use bin-packing and/or graph and/or backtracking techniques to grow the biggest rectangle ... but those are not my cup of tea, so I am not confident enough to write an answer about them.

Higher precision in JavaScript

I am trying to calculate with higher precision numbers in JavaScript to be able to zoom in more on the Mandelbrot set.
(after a certain amount of zooming the results get "pixelated", because of the low precision)
I have looked at this question, so I tried using a library such as BigNumber but it was unusably slow.
I have been trying to figure this out for a while and I think the only way is to use a slow library.
Is there a faster library?
Is there any other way to calculate with higher precision numbers?
Is there any other way to be able to zoom in more on the Mandelbrot set?
Probably unnecessary to add this code, but this is the function I use to check if a point is in the Mandelbrot set.
function mandelbrot(x, y, it) {
  var z = [0, 0]
  var c1 = [x, y]
  for (var i = 0; i < it; i++) {
    // z = z^2 + c, with z stored as [real, imaginary]
    z = [z[0]*z[0] - z[1]*z[1] + c1[0], 2*z[0]*z[1] + c1[1]]
    // escape test; the comma operator in the original discarded the first
    // comparison, so this needs || (or the usual z[0]^2 + z[1]^2 > 4 check)
    if (Math.abs(z[0]) > 2 || Math.abs(z[1]) > 2) {
      break
    }
  }
  return i
}
The key is not so much the raw numeric precision of JavaScript numbers (though that of course has its effects), but the way the basic Mandelbrot "escape" test works, specifically the threshold iteration counts. To compute whether a point in the complex plane is in or out of the set, you iterate on the formula (which I don't exactly remember and don't feel like looking up) for the point over and over again until the point obviously diverges (the formula "escapes" from the origin of the complex plane by a lot) or doesn't before the iteration threshold is reached.
The iteration threshold when rendering a view of the set that covers most of it around the origin of the complex plane (about 2 units in all directions from the origin) can be as low as 500 to get a pretty good rendering of the whole set at a reasonable magnification on a modern computer. As you zoom in, however, the iteration threshold needs to increase in inverse proportion to the size of the "window" onto the complex plane. If it doesn't, then the "escape" test doesn't work with sufficient accuracy to delineate fine details at higher magnifications.
The formula I used in my JavaScript implementation is
maxIterations = 400 * Math.log(1/dz0)
where dz0 is (arbitrarily) the width of the window onto the plane. As one zooms into a view of the set (well, the "edge" of the set, where things are interesting), dz0 gets pretty small so the iteration threshold gets up into the thousands.
The iteration count, of course, for points that do "escape" (that is, points that are not part of the Mandelbrot set) can be used as a sort of "distance" measurement. A point that escapes within a few iterations is clearly not "close to" the set, while a point that escapes only after 2000 iterations is much closer. That distance quality can be used in various ways in visualizations, either to provide a color value (common) or possibly a z-axis value if the set is being rendered as a 3D view (with the set as a sort of "mesa" in three dimensions and the borders being a vertical "cliff" off the sides).
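A minimal sketch of both points, using the answer's 400 * log(1/dz0) formula; the 500 floor and the colour mapping are assumptions of mine, not part of the answer:

function maxIterations(dz0) {
  // deeper zoom => smaller window width dz0 => higher iteration cap;
  // 500 is roughly the full-view threshold quoted above
  return Math.max(500, Math.ceil(400 * Math.log(1 / dz0)));
}

function colorFor(count, maxIter) {
  if (count >= maxIter) return [0, 0, 0];              // never escaped: treat as inside the set
  const t = count / maxIter;                           // escape count as a rough "distance"
  return [255 * t, 255 * t * t, 255 * Math.sqrt(t)];   // arbitrary RGB palette
}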

ClojureScript: Get Average RGBA Color from ImageData

I'm trying to write a function in ClojureScript, which returns the average RGBA value of a given ImageData Object.
In JavaScript, implementations for this problem with a "for" or "while" loop are very fast. Within milliseconds they return the average of e.g. 4000 x 4000 sized ImageData objects.
In ClojureScript my solutions are nowhere near as fast; sometimes the browser gives up, yielding "stack trace errors".
However, the fastest one I have written so far is this one:
(extend-type js/Uint8ClampedArray
  ISeqable
  (-seq [array] (array-seq array 0)))

(defn average-color [img-data]
  (let [data (.-data img-data)
        nr   (/ (count data) 4)]
    (->> (reduce (fn [m v] (-> (update-in m [:color (rem (:pos m) 4)] (partial + v))
                               (update-in [:pos] inc)))
                 {:color [0 0 0 0] :pos 0}
                 data)
         (:color)
         (map #(/ % nr)))))
Well, unfortunately it works only up to values around 500x500, which is not acceptable.
I'm asking myself what exactly is the problem here. What do I have to pay attention to in order to write a properly fast average-color function in ClojureScript?
The problem is that the function you have defined is recursive. I am not strong in ClojureScript, so I will not tell you how to fix the problem in code but in concept.
You need to break the problem into smaller recursive units. So reduce a pixel row to get a result for each row, then reduce the row results. This will prevent the recursion from overflowing the call stack in JavaScript.
As for the speed, that will depend on how accurate you want the result to be. I would take a random sample, selecting 10% of the pixels randomly, and use the average of that result.
You could also just use the hardware and scale the image by half, render it with smoothing on, then halve it again, and so on until you have one pixel, and use the value of that pixel. That will give you a pixel value average and is very fast, but it only does a value mean, not a photon mean.
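For what it's worth, here is a plain-JavaScript sketch of that hardware-downscale idea (names are illustrative; the ClojureScript port is left to the reader):

function averageColorByDownscale(sourceCanvas) {
  let w = sourceCanvas.width;
  let h = sourceCanvas.height;
  let src = sourceCanvas;
  while (w > 1 || h > 1) {
    // halve repeatedly with smoothing on, letting the browser average neighbouring pixels
    w = Math.max(1, Math.ceil(w / 2));
    h = Math.max(1, Math.ceil(h / 2));
    const dst = document.createElement("canvas");
    dst.width = w;
    dst.height = h;
    const ctx = dst.getContext("2d");
    ctx.imageSmoothingEnabled = true;
    ctx.drawImage(src, 0, 0, w, h);
    src = dst;
  }
  // the last canvas is 1x1; its only pixel holds the averaged RGBA value
  return src.getContext("2d").getImageData(0, 0, 1, 1).data;
}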
I will point out that the values of the RGB channels are logarithmic and represent the square root of the photon count captured (for a photo) or emitted (by the screen). Thus the mean of the pixel values is much lower than the mean of the photon count. To get the correct mean you must take the mean of the square of each channel and then take the square root of that mean to bring it back to the logarithmic scale used for RGB values.
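Again in plain JavaScript, a minimal sketch of that "photon mean"; treating alpha linearly is my assumption, only the colour channels get the square/square-root treatment described above:

function averageColorLinear(imgData) {
  const data = imgData.data;                  // Uint8ClampedArray laid out as R,G,B,A,...
  const sums = [0, 0, 0, 0];
  for (let i = 0; i < data.length; i += 4) {
    sums[0] += data[i] * data[i];             // accumulate squared colour values
    sums[1] += data[i + 1] * data[i + 1];
    sums[2] += data[i + 2] * data[i + 2];
    sums[3] += data[i + 3];                   // alpha stays linear
  }
  const n = data.length / 4;                  // number of pixels
  return [Math.sqrt(sums[0] / n), Math.sqrt(sums[1] / n), Math.sqrt(sums[2] / n), sums[3] / n];
}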

Organizational system for moving tiles in grid-based level

conceptual problem here.
I have an array which will be rendered to display tiles in a grid. Now, I want these tiles to be able to move - but not just around in the grid. Per-pixel. It does need to be a grid, because I need to shift whole rows of tiles, and be able to access tiles by their position, but it also needs to have per-pixel adjustment, while still keeping the "grid" up to date. Picture a platforming game with moving tiles.
There are a few organizational systems with which I could do this, and I'll outline a few I thought of as well as their pros and cons (XY-style) in case it helps you understand what I'm saying. I'm asking if you think one of these is best, or think of a better way.
One way would be to place objects in the array with the properties xOffset and yOffset. I would then render them in their tile position plus their offset. (x * tileWidth + tile.xOffset). Pros: maintains vanilla grid-system. Cons: Then I would have to adjust each tile to its actual grid location once it moved. Also, the "grid" position would become a bit confused as tiles are moving. (Side note: If you think this is a good way, how would I handle collisions? It wouldn't be as simple as player.x / tileWidth anymore.)
Another would be to place lots of objects with xs and ys and render them all. Pros: Simple. Cons: Then I would have to check each one to see if it's in the row I want to shift before doing so. Also, collisions could not simply check the one tile a player is on, they would have to check all entities.
Another I thought of would be a sort of combination of the two. Tiles would be in the original array and get rendered at x * tileWidth like normal tiles. Then, when they move, they are deleted from the grid and placed in a separate array for moving tiles, where their x and y are stored. Then the collisions would check the grid the fast way and the moving tiles the slow way.
Thanks!
PS: I'm using JavaScript, but it shouldn't be relevant.
PPS: Forgive me if it's not Stack Overflow material. This was the best fit, I thought. It's not exactly code review, but it's not specific to GameDev. Also I needed a tag, so I picked one somewhat relevant. If you guys recommend something else I'll be happy to switch it right over and delete this one.
PPPS: Sorry if repost, I have no idea how to google this question. I tried to no avail.
(Side note on handling collisions: Your obstacles are moving. Therefore, comparing the player's position to the grid is no longer ever sufficient. Furthermore, you will always have to draw based on the object's current position. Both of these are unavoidable, but also not very expensive.)
You want the objects to be easy to look up, while still being able to draw them efficiently and, more importantly, quickly check for collisions. This is easy to do: store the objects in an array, and for the X and Y positions keep indexes which allow you to 1) efficiently query ranges and 2) efficiently move elements left and right (as their x and y positions change).
If your objects are going to be moving fairly slowly (that is, on any one timestep, it is unlikely for an object to pass very many other objects), your indexes can be arrays! When an object moves past another object (in X, for instance), you just need to check its neighbor in the X index array to see if they should swap places. Keep doing this until it does not need to swap. If they're moving slowly, the amortized cost of this will be very close to O(1). Querying ranges is very easy in an array; binary search for the first greater element, and also for the last smaller element.
Summary/Implementation:
(Fiddle at https://jsfiddle.net/LsfuLo9p/3/)
Initialize (O(n) time):
Make an array of your objects called Objs.
Make an array of (x position, reference to Objs) pairs, sorted in X, called Xs.
Make an array of (y position, reference to Objs) pairs, sorted in Y, called Ys.
For every element in Xs and Ys, tell the object in Objs its index in those arrays (so that Xs has indexes to Objs, and Objs has indexes to Xs.)
When an object moves up in Y (O(1) expected time per moving object, given that they're moving slowly):
Using Objs, find its index in Ys.
Compare it to the next highest value in Ys. If it's greater, swap them in Ys (and update their Y indices in Objs).
Repeat step 2 until you don't swap.
(It's easy to apply this to the other three directions.)
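A hedged sketch of those swap steps, assuming Ys is an array of object references sorted by y and each object carries a yIndex back-reference (the names are mine, not the fiddle's):

function resortY(Ys, obj) {
  let i = obj.yIndex;
  // y grew: bubble the entry right while the next neighbour is now smaller
  while (i + 1 < Ys.length && Ys[i + 1].y < Ys[i].y) {
    [Ys[i], Ys[i + 1]] = [Ys[i + 1], Ys[i]];
    Ys[i].yIndex = i;
    Ys[i + 1].yIndex = i + 1;
    i++;
  }
  // y shrank: bubble the entry left while the previous neighbour is now larger
  while (i > 0 && Ys[i - 1].y > Ys[i].y) {
    [Ys[i], Ys[i - 1]] = [Ys[i - 1], Ys[i]];
    Ys[i].yIndex = i;
    Ys[i - 1].yIndex = i - 1;
    i--;
  }
}

As long as objects move slowly, each call performs only a swap or two, which is where the amortized O(1) above comes from.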
When the player moves (O(log n + k²) time, where k is the maximum number of items that can fit in a row or column):
Look in Xs for small, the smallest X above Player.X, and large, the largest X+width below Player.X. If large ≤ small, return the range [large, small].
Look in Ys for small, the smallest Y above Player.Y, and large, the largest Y+height below Player.Y. If large ≤ small, return the range [large, small].
If there are any intersections between these two ranges, then the player is colliding with that object.
(You can improve the time of this to O(log n + k) by using a hashmap to check for set intersections.)
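As a rough illustration of that range lookup, assuming roughly uniform tile widths (so x and x+width sort the same way) and illustrative names:

function overlappingInX(Xs, player) {
  let lo = 0, hi = Xs.length;
  // binary search: first object whose right edge passes the player's left edge
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (Xs[mid].x + Xs[mid].width <= player.x) lo = mid + 1; else hi = mid;
  }
  const first = lo;
  lo = 0; hi = Xs.length;
  // binary search: first object whose left edge passes the player's right edge
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (Xs[mid].x < player.x + player.width) lo = mid + 1; else hi = mid;
  }
  return Xs.slice(first, lo);   // everything in between overlaps the player in X
}

Doing the same in Y and intersecting the two candidate sets gives the colliding objects.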

Click detection in a 2D isometric grid?

I've been doing web development for years now and I'm slowly getting myself involved with game development. For my current project I've got this isometric map, where I need to use an algorithm to detect which field is being clicked on. This is all in the browser with JavaScript, by the way.
The map
It looks like this and I've added some numbers to show you the structure of the fields (tiles) and their IDs. All the fields have a center point (array of x,y) which the four corners are based on when drawn.
As you can see it's not a diamond shape but a zig-zag map, and there's no angle (top-down view), which is why I can't find an answer myself, considering that all articles and calculations are usually based on a diamond shape with an angle.
The numbers
It's a dynamic map and all sizes and numbers can be changed to generate a new map.
I know it isn't a lot of data, but the map is generated based on the map and field sizes.
- Map Size: x:800 y:400
- Field Size: 80x80 (between corners)
- Center position of all the fields (x,y)
The goal
To come up with an algorithm which tells the client (game) which field the mouse is located in at any given event (click, movement etc).
Disclaimer
I do want to mention that I've already come up with a working solution myself; however, I'm 100% certain it could be written in a better way (my solution involves a lot of nested if-statements and loops), and that's why I'm asking here.
Here's an example of my solution where I basically find a square with corners in the nearest 4 known positions and then I get my result based on the smallest square between the 2 nearest fields. Does that make any sense?
Ask if I missed something.
Here's what I came up with,
function posInGrid(x, y, length) {
  // offsets of the point from the centre of the square cell it falls in
  var xFromColCenter = x % length - length / 2;
  var yFromRowCenter = y % length - length / 2;
  var col = (x - xFromColCenter) / length;
  var row = (y - yFromRowCenter) / length;
  // the point may actually belong to one of the neighbouring diamonds
  if (yFromRowCenter < xFromColCenter) {
    if (yFromRowCenter < -xFromColCenter) --row;
    else ++col;
  } else if (yFromRowCenter > xFromColCenter) {
    if (yFromRowCenter < -xFromColCenter) --col;
    else ++row;
  }
  return "Col:" + col + ", Row:" + row + ", xFC:" + xFromColCenter + ", yFC:" + yFromRowCenter;
}
X and Y are the coords in the image, and length is the spacing of the grid.
Right now it returns a string, just for testing. The result should be row and col, and those are the coordinates I chose: your tile 1 has coords (1,0), tile 2 is (3,0), tile 10 is (0,1), tile 11 is (2,1). You could convert my coordinates to your numbered tiles in a line or two.
And a JSFiddle for testing http://jsfiddle.net/NHV3y/
Cheers.
EDIT: changed the return statement, had some variables I used for debugging left in.
A pixel perfect way of hit detection I've used in the past (in OpenGL, but the concept stands here too) is an off screen rendering of the scene where the different objects are identified with different colors.
This approach requires double the memory and double the rendering but the hit detection of arbitrarily complex scenes is done with a simple color lookup.
Since you want to detect a cell in a grid there are probably more efficient solutions, but I wanted to mention this one for its simplicity and flexibility.
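As a minimal sketch of that idea, assuming each field has already been drawn onto a hidden canvas in its own unique colour and registered in a lookup map (all names here are illustrative):

function pickField(hitCtx, idByColor, mouseX, mouseY) {
  // read the single pixel under the cursor from the hidden "hit" canvas
  const [r, g, b] = hitCtx.getImageData(mouseX, mouseY, 1, 1).data;
  // translate the colour back into the field id recorded while drawing
  return idByColor[r + "," + g + "," + b];
}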
This has been solved before, let me consult my notes...
Here's a couple of good resources:
From Laserbrain Studios, The basics of isometric programming
Useful article in the thread posted here, in Java
Let me know if this helps, and good luck with your game!
This code calculates the position in the grid given the uneven spacing. Should be pretty fast; almost all operations are done mathematically, using just one loop. I'll ponder the other part of the problem later.
def cspot(x, y, length):
    # rows alternately contribute `length` and `length + 1` tiles,
    # matching the zig-zag map's uneven row widths
    l = length
    lp = length + 1
    vlist = [(l * (k % 2)) + (lp * ((k + 1) % 2)) for k in range(1, y + 1)]
    vlist.append(1)
    return x + sum(vlist)
