Connecting Rooms - javascript

I've created a simple algorithm for a game I'm working on that creates a cave-like structure. The algorithm outputs a 2-dimensional array of bits that represent the open areas. Example:
000000000000000000000000
010010000000000111100000
011110000000011111111000
011111110000011111111100
011111111001111111111110
011000000000001111000000
000000000000000000000000
(0s represent walls, 1s represent open areas)
The problem is that the algorithm can sometimes create a cave that has two disconnected sections (as in the above example). I've written a function that gives me an array of arrays containing all the x, y positions of the open spots for each area.
My question is: given a number of lists that contain all of the x, y coordinates for each open area, what is the fastest way to "connect" these areas with a corridor that is a minimum of 2 cells wide?
(I'm writing this in javascript but even just pseudo code will help me out)
I've tried comparing the distances from every point in one area to every point in another area, finding the two points with the smallest distance, then cutting out a path between those two points, but this approach is way too slow. I'm hoping there is another way.

Given two caves A and B, choose a point x in A and y in B (at random will do, the two closest or locally closest is better). Drill a corridor of thickness 2 between A and B (use Bresenham's algorithm). If you have multiple disconnected caves, do the above for each edge (A,B) of the minimal spanning tree of the graph of all the caves (edge weight is the length of the corridor you'll drill if you choose this edge).
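A minimal JavaScript sketch of the drilling step, assuming the map is a 2D array grid of 0/1 values indexed as grid[y][x] and that a and b are the endpoint cells chosen in caves A and B ({x, y} objects). The function names are illustrative, not from the original post.

// Drill a corridor of thickness 2 from a to b using a standard Bresenham walk.
function drillCorridor(grid, a, b) {
  let x = a.x, y = a.y;
  const dx = Math.abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
  const dy = -Math.abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
  let err = dx + dy;
  while (true) {
    carve(grid, x, y);                       // open cells around the current point
    if (x === b.x && y === b.y) break;
    const e2 = 2 * err;
    if (e2 >= dy) { err += dy; x += sx; }
    if (e2 <= dx) { err += dx; y += sy; }
  }
}

// Open a 2x2 block so the corridor is at least two cells wide everywhere.
function carve(grid, x, y) {
  for (let oy = 0; oy <= 1; oy++) {
    for (let ox = 0; ox <= 1; ox++) {
      const cy = y + oy, cx = x + ox;
      if (cy >= 0 && cy < grid.length && cx >= 0 && cx < grid[0].length) {
        grid[cy][cx] = 1;
      }
    }
  }
}

For multiple caves, run this once per edge of the minimum spanning tree described above.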
Edit for the edit: to approximate the distance between two caves, you can use hill climbing. It will return the global minimum for convex caves in O(n) rather than the naive O(n²). For non-convex caves, do multiple iterations of hill climbing with the initial guess chosen at random.
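A rough JavaScript sketch of that approximation, assuming each cave is given as an array of {x, y} cells; closestPairApprox and the number of restarts are illustrative assumptions, not part of the original answer.

// Approximate the closest pair of cells between two caves by hill climbing
// with a few random restarts (for non-convex caves).
function closestPairApprox(caveA, caveB, restarts = 5) {
  const key = p => p.x + "," + p.y;
  const setA = new Set(caveA.map(key)), setB = new Set(caveB.map(key));
  const dist2 = (p, q) => (p.x - q.x) ** 2 + (p.y - q.y) ** 2;
  const neighbours = (p, set) =>
    [{ x: p.x + 1, y: p.y }, { x: p.x - 1, y: p.y },
     { x: p.x, y: p.y + 1 }, { x: p.x, y: p.y - 1 }]
      .filter(n => set.has(key(n)));

  let best = null;
  for (let r = 0; r < restarts; r++) {
    let p = caveA[Math.floor(Math.random() * caveA.length)];
    let q = caveB[Math.floor(Math.random() * caveB.length)];
    let improved = true;
    while (improved) {
      improved = false;
      // Greedily step p (then q) to any neighbouring cell that reduces the distance.
      for (const n of neighbours(p, setA)) {
        if (dist2(n, q) < dist2(p, q)) { p = n; improved = true; }
      }
      for (const n of neighbours(q, setB)) {
        if (dist2(p, n) < dist2(p, q)) { q = n; improved = true; }
      }
    }
    if (!best || dist2(p, q) < best.d) best = { p, q, d: dist2(p, q) };
  }
  return best; // best.p is in caveA, best.q is in caveB
}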

If you need the exactly minimal solution, you can consider first building the frontiers of your caves and then applying an O(nm) algorithm. This eliminates the need to compare distances between interior points of your caves. Then, as soon as you know the distance between each pair of caves, you build the minimum spanning tree and drill your tunnels.

Since I don't know too much from your description, here are some hints I would consider:
How do you look for the pair of nearest points? Do you use a naive brute-force approach and thus obtain a run time of O(n*n)? Or are you using a more efficient variant taking O(n log n) time?
If you have obtained the closest points, I'd use a simple line-drawing algorithm.
Another approach might be to generate a structure that definitely has only one single connected area. You could do the following: first you take a random cell (x,y) and set it to 1. Then you traverse all its neighbours and for each of them you randomly set it to 1 or leave it at 0. For each cell set to 1, you do the same, i.e. you traverse its neighbours and set them randomly to 1 or 0. This guarantees that you won't have two separate areas.
An algorithm to ensure this could be the following (in Python):
from collections import deque
from random import randint

def setCell(x, y, A):
    # Open the cell if it lies inside the grid.
    if x >= len(A) or y >= len(A[0]) or x < 0 or y < 0:
        return
    A[x][y] = 1

def getCell(x, y, A):
    # Out-of-bounds cells read as open, so they are never expanded.
    if x >= len(A) or y >= len(A[0]) or x < 0 or y < 0:
        return 1
    return A[x][y]

def generate(height, width):
    A = [[0 for _ in range(width)] for _ in range(height)]
    # Start from one random cell and grow outwards, so the result is a single connected area.
    (x, y) = (randint(0, height - 1), randint(0, width - 1))
    setCell(x, y, A)
    q = deque([(x, y)])
    while q:
        (x, y) = q.popleft()
        for (nx, ny) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if randint(0, 8) <= 6:            # open this neighbour with probability 7/9
                if getCell(nx, ny, A) == 0:
                    setCell(nx, ny, A)
                    if randint(0, 2) <= 1:    # keep expanding from it with probability 2/3
                        q.append((nx, ny))
    return A

def printField(A):
    for row in A:
        print(" ".join(" " if c == 1 else "X" for c in row))
Then printField(generate(20, 30)) does the job. You'll probably have to adjust the random parameters so the output fits your needs.

Related

What would be an algorithm to check if two people would meet as they traverse a graph using Dijkstra's algorithm?

Imagine two people starting at two different starting points in a 2D matrix. They also have separate end points they traverse the matrix towards. Moving between two points has a corresponding difficulty (weight). Using Dijkstra's algorithm, we can determine the best route for each person to reach their destination. Given that a person can only move one node at a time and both persons move simultaneously, what would be a good algorithm to determine whether they bump into each other as they traverse the matrix?
When you perform two Dijkstra searches from the two initial vertices, you update the dist[] and prev[] arrays (following the Wikipedia pseudocode).
Add an additional array steps[] (distance measured in number of edges), and when you set prev[v] ← u, also set steps[v] = steps[u] + 1.
When you read off each shortest path by iterating backwards, check whether the vertices where the two paths intersect contain the same value in steps.
For example, suppose you find that the paths intersect at vertices 3 and 7. If A_steps[3] = 3 and B_steps[3] = 2, the two people walk through that cell at different moments. But if A_steps[7] = 6 and B_steps[7] = 6, they do meet at that node on the sixth step.
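A small JavaScript sketch of the idea, assuming the graph is given as an adjacency list graph[u] = [{to, w}, ...]; the priority queue is replaced by a plain O(n) scan to keep the sketch short, and all the helper names are illustrative.

// Dijkstra that also records steps[v], the number of edges on the chosen shortest path to v.
function dijkstra(graph, source) {
  const n = graph.length;
  const dist = Array(n).fill(Infinity);
  const prev = Array(n).fill(-1);
  const steps = Array(n).fill(0);
  const done = Array(n).fill(false);
  dist[source] = 0;
  for (let i = 0; i < n; i++) {
    let u = -1;                                  // unvisited vertex with smallest dist
    for (let v = 0; v < n; v++) {
      if (!done[v] && (u === -1 || dist[v] < dist[u])) u = v;
    }
    if (u === -1 || dist[u] === Infinity) break;
    done[u] = true;
    for (const { to, w } of graph[u]) {
      if (dist[u] + w < dist[to]) {
        dist[to] = dist[u] + w;
        prev[to] = u;
        steps[to] = steps[u] + 1;                // one more move than the predecessor
      }
    }
  }
  return { prev, steps };
}

// Map of vertex -> step index along the path, read back from target via prev[].
function pathSteps(prev, steps, target) {
  const onPath = new Map();
  for (let v = target; v !== -1; v = prev[v]) onPath.set(v, steps[v]);
  return onPath;
}

// The two people meet if some vertex lies on both paths at the same step.
function meetingVertex(pathA, pathB) {
  for (const [v, stepA] of pathA) {
    if (pathB.has(v) && pathB.get(v) === stepA) return v;
  }
  return -1;
}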

Group nodes which are visually together

I am implementing force-directed graph in d3js.
I want to divide my graph into two halves and colour the two halves with different colours, after the network has been rendered and the forceSimulation has completed.
What I am looking for is explained in the image.
I am referring here.
I don't want to add a group field to my data as described in the link, because my links change dynamically on several events, which also changes the orientation of the network; updating the group field in the data creates groups of the same nodes whether they are near or far from each other.
Currently, I am using the window coordinates to divide this.
const screenWidth = window.screen.availWidth;
const halfScreen = screenWidth / 2;
nodes.selectAll().attr("fill", function (d) {
return d.x < halfScreen ? "blue" : "green";
});
But this is not a good idea. I would love to know any other way to do this.
So, my interpretation of your question: you want to divide the nodes into two groups, preferably each with half of the nodes, such that the distances between the nodes within each group are as small as possible.
The best algorithms for this that I know of are algorithms for constructing a "minimum spanning tree", for example, Kruskal's algorithm.
Adapting the algorithm to your problem, you start with (a copy of) the graph, having no edges. You then add the edges, sorted by length, smallest first. You stop doing this as soon as you have exactly two connected components. These connected components form groups in which nodes have a small mutual distance.
However, the groups probably won't have the same number of nodes, and I don't guarantee that this gives you the smallest mutual distance.
EDIT:
If there is more than 1 connected component, you could group them by starting with two empty groups and repeatedly adding a component (largest first) to the group that has the smallest number of nodes. This will probably give you more or less equal groups.
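A rough JavaScript sketch of the Kruskal adaptation described above, assuming every node already has its final x and y from the force simulation; the union-find helper and function names are made up for illustration.

// Split nodes into two groups by adding edges shortest-first until exactly
// two connected components remain, then colour by component.
function splitIntoTwoGroups(nodes) {
  const n = nodes.length;
  if (n === 0) return [];
  const parent = Array.from({ length: n }, (_, i) => i);   // union-find
  const find = i => (parent[i] === i ? i : (parent[i] = find(parent[i])));

  // All pairwise edges, weighted by the distance between node positions.
  const edges = [];
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y;
      edges.push({ i, j, d: Math.hypot(dx, dy) });
    }
  }
  edges.sort((a, b) => a.d - b.d);

  let components = n;
  for (const { i, j } of edges) {
    if (components === 2) break;
    const ri = find(i), rj = find(j);
    if (ri !== rj) { parent[ri] = rj; components--; }
  }

  const firstRoot = find(0);
  return nodes.map((_, i) => (find(i) === firstRoot ? "blue" : "green"));
}

The returned array of colours is parallel to the nodes array, so it can be applied in the same way as the window-coordinate approach above.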

Finding the largest possible scaling factor for squares that are box packed given the dimensions of the box

My problem is as follows:
I have a set of values V1, V2, ... Vn
I have a function f(V) = g * V, where g is a scaling factor, that maps these values to another set of values A1, A2, ... An. These values correspond to areas of squares.
I also have W (width) and H (height) variables. And finally, I have a box packing algorithm (This one to be specific), that takes the W and H variables, and the A1 ... An areas, and tries to find a way to pack the areas into a box of size W x H. If the areas A are not too big, and the box packing algorithm successfully manages to fit the areas into the box, it will return the positions of the squares (the left-top coordinates, but this is not relevant). If the areas are too big, it will return nothing.
Given the values V and the dimensions of the box W and H, what is the highest value of g (the scaling factor in f(V)) that still fits the box?
I have tried to create an algorithm that initially sets g to (W x H) / sum(V1, V2, ... Vn). If the values V are distributed in such a way that they fit exactly into the box without leaving any space in between, this would give me a solution instantly. In reality this never happens, but it seems like a good starting point. With this initial value of g I calculate the values A, which are then fed to the box packing algorithm. The box packing algorithm will fail (return nothing), after which I decrease g by 0.01 (a completely arbitrary value established by trial and error) and try again. This cycle repeats until the box packing algorithm succeeds.
While this solution works, I feel like there should be faster and more accurate ways to determine g. For example, depending on how big W and H are compared to the sum of the values V, it seems there should be a way to determine a better value than 0.01, because if the difference is extremely big the algorithm takes really long, while if the difference is extremely small it is very fast but very crude. In addition, I feel like there should be a more efficient method than just brute-forcing it like this. Any ideas?
You're on a good trail with your method, I think!
I think you shouldn't decrease your value by a fixed amount but rather try to approach the value by ever smaller steps.
It's good because you have a good starting value. First you could decrease g by something like 0.1 * g and check if your packing succeeds; if not, continue to decrease with the same step; if it packs correctly, increase g with a smaller step (like step = step / 2).
At some point your steps will become very small and you can stop searching (defining "small" is up to you).
You can use a binary search approach. If you have two values of g such that packing exists for one (g1) and doesn't exist for the other (g2), try the halfway value h = (g1 + g2) / 2. If packing exists for h, you have a new, larger candidate for the final g, and you can repeat the same check with h and g2. If packing doesn't exist, repeat the same check with g1 and h.
With each step, the interval containing the maximum possible result is halved. You can make the final result as precise as you like with more iterations.
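A minimal JavaScript sketch of that binary search, assuming a packs(g) predicate that wraps your box-packing algorithm and returns whether the scaled squares fit; the starting upper bound is the (W x H) / sum(V) estimate from the question, and the tolerance is an arbitrary illustrative choice.

// Binary search for the largest scaling factor g that still packs into W x H.
function findMaxScale(values, W, H, packs) {
  let lo = 0;                                            // g = 0 trivially fits
  let hi = (W * H) / values.reduce((a, b) => a + b, 0);  // above this the total area exceeds the box
  const tolerance = 1e-4;
  while (hi - lo > tolerance) {
    const mid = (lo + hi) / 2;
    if (packs(mid)) {
      lo = mid;   // mid still fits, so the answer is at least mid
    } else {
      hi = mid;   // mid is too big, so the answer is below mid
    }
  }
  return lo;
}

Each iteration halves the interval [lo, hi], so the number of packing runs is logarithmic in the required precision rather than linear as with a fixed 0.01 step.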

ClojureScript: Get Average RGBA Color from ImageData

I'm trying to write a function in ClojureScript, which returns the average RGBA value of a given ImageData Object.
In JavaScript, implementations for this problem with a "for" or "while" loop are very fast. Within milliseconds they return the average of e.g. 4000 x 4000 sized ImageData objects.
In ClojureScript my solutions are not nearly as fast; sometimes the browser gives up, yielding "stack trace errors".
However, the fastest one I've written so far is this one:
(extend-type js/Uint8ClampedArray
  ISeqable
  (-seq [array] (array-seq array 0)))

(defn average-color [img-data]
  (let [data (.-data img-data)
        nr   (/ (count data) 4)]
    (->> (reduce (fn [m v]
                   (-> (update-in m [:color (rem (:pos m) 4)] (partial + v))
                       (update-in [:pos] inc)))
                 {:color [0 0 0 0] :pos 0}
                 data)
         (:color)
         (map #(/ % nr)))))
Well, unfortunately it works only up to values around 500x500, which is not acceptable.
I'm asking myself what exactly the problem is here. What do I have to pay attention to in order to write a properly fast average-color function in ClojureScript?
The problem is that the function you have defined is recursive. I am not strong in ClojureScript, so I will not tell you how to fix the problem in code, but in concept.
You need to break the problem into smaller recursive units. So reduce a pixel row to get a result for each row, then reduce the row results. This will prevent the recursion from overflowing the call stack in javascript.
As for the speed, that will depend on how accurate you want the result to be. I would take a random sample of 10% of the pixels and use the average of that.
You could also just use the hardware: scale the image by half, render it with smoothing on, and keep halving until you have one pixel, then use the value of that pixel. That gives you a pixel value average and is very fast, but it only produces a value mean, not a photon mean.
I will point out that the values of the RGB channels are logarithmic and represent the square root of the photon count captured (for a photo) or emitted (by the screen). Thus the mean of the pixel values is much lower than the mean of the photon count. To get the correct mean you must take the mean of the square of each channel and then take the square root of that mean to bring it back to the logarithmic scale used for the RGB values.
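For reference, a plain JavaScript sketch (not ClojureScript) of the squared-mean averaging described above; averageColor is an illustrative name, and alpha is averaged linearly here since it is not encoded that way.

// Average colour of an ImageData object: average the squares of the colour
// channels, then take the square root to return to the usual 0..255 scale.
function averageColor(imageData) {
  const data = imageData.data;                 // flat r,g,b,a,... bytes
  let r = 0, g = 0, b = 0, a = 0;
  for (let i = 0; i < data.length; i += 4) {
    r += data[i] * data[i];
    g += data[i + 1] * data[i + 1];
    b += data[i + 2] * data[i + 2];
    a += data[i + 3];
  }
  const n = data.length / 4;
  return [Math.sqrt(r / n), Math.sqrt(g / n), Math.sqrt(b / n), a / n];
}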

Organizational system for moving tiles in grid-based level

Conceptual problem here.
I have an array which will be rendered to display tiles in a grid. Now, I want these tiles to be able to move - but not just around in the grid. Per-pixel. It does need to be a grid, because I need to shift whole rows of tiles, and be able to access tiles by their position, but it also needs to have per-pixel adjustment, while still keeping the "grid" up to date. Picture a platforming game with moving tiles.
There are a few organizational systems with which I could do this, and I'll outline a few I thought of as well as their pros and cons (XY-style) in case it helps you understand what I'm saying. I'm asking if you think one of these is best, or think of a better way.
One way would be to place objects in the array with the properties xOffset and yOffset. I would then render them in their tile position plus their offset. (x * tileWidth + tile.xOffset). Pros: maintains vanilla grid-system. Cons: Then I would have to adjust each tile to its actual grid location once it moved. Also, the "grid" position would become a bit confused as tiles are moving. (Side note: If you think this is a good way, how would I handle collisions? It wouldn't be as simple as player.x / tileWidth anymore.)
Another would be to place lots of objects with xs and ys and render them all. Pros: Simple. Cons: Then I would have to check each one to see if it's in the row I want to shift before doing so. Also, collisions could not simply check the one tile a player is on, they would have to check all entities.
Another I thought of would be a sort of combination of the two. Tiles would be in the original array and get rendered as x * tileWidth normal tiles. Then, when they move, they are deleted from the grid and placed in a separate array for moving tiles, where their x and y are stored. Then the collisions would check the grid the fast way and the moving tiles the slow way.
Thanks!
PS: I'm using JavaScript, but it shouldn't be relevant.
PPS: Forgive me if it's not Stack Overflow material. This was the best fit, I thought. It's not exactly code review, but it's not specific to GameDev. Also I needed a tag, so I picked one somewhat relevant. If you guys recommend something else I'll be happy to switch it right over and delete this one.
PPPS: Sorry if repost, I have no idea how to google this question. I tried to no avail.
(Side note on handling collisions: Your obstacles are moving. Therefore, comparing the player's position to grid is no longer ever sufficient. Furthermore, you will always have to draw based on the object's current position. Both of these are unavoidable, but also not very expensive.)
You want the objects to be easy to look up, while still being able to draw them efficiently and, more importantly, quickly checking for collisions. This is easy to do: store the objects in the array, and for the X and Y positions keep indexes which allow for 1) efficiently querying ranges and 2) efficiently moving elements left and right (as their x and y positions change).
If your objects are going to be moving fairly slowly (that is, on any one timestep, it is unlikely for an object to pass very many other objects), your indexes can be arrays! When an object moves past another object (in X, for instance), you just need to check its neighbor in the X index array to see if they should swap places. Keep doing this until it does not need to swap. If they're moving slowly, the amortized cost of this will be very close to O(1). Querying ranges is very easy in an array; binary search for the first greater element, and also for the last smaller element.
Summary/Implementation:
(Fiddle at https://jsfiddle.net/LsfuLo9p/3/)
Initialize (O(n) time):
Make an array of your objects called Objs.
Make an array of (x position, reference to Objs) pairs, sorted in X, called Xs.
Make an array of (y position, reference to Objs) pairs, sorted in Y, called Ys.
For every element in Xs and Ys, tell the object in Objs its index in those arrays (so that Xs has indexes to Objs, and Objs has indexes to Xs.)
When an object moves up in Y (O(1) expected time per moving object, given that they're moving slowly):
Using Objs, find its index in Ys.
Compare it to the next highest value in Ys. If it's greater, swap them in Ys (and update their Y indices in Objs).
Repeat step 2 until you don't swap.
(It's easy to apply this to the other three directions; see the sketch after this list.)
When the player moves (O(log n + k²) time, where k is the maximum number of items that can fit in a row or column):
Look in Xs for small, the smallest X above Player.X, and large, the largest X+width below Player.X. If large ≤ small, return the range [large, small].
Look in Ys for small, the smallest Y above Player.Y, and large, the largest Y+height below Player.Y. If large ≤ small, return the range [large, small].
If there are any intersections between these two ranges, then the player is colliding with that object.
(You can improve the time of this to O(log n + k) by using a hashmap to check for set intersections.)
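A minimal JavaScript sketch of the neighbour-swap maintenance from the summary above, assuming Xs simply holds object references sorted by x (a simplification of the (x position, reference) pairs) and each object stores its current slot as xIndex; the names are illustrative. The same function, mirrored for y, maintains the Ys index.

// Re-sort a single object inside the X index after its x position changed,
// by swapping it past its neighbours one slot at a time.
function objectMovedX(obj, Xs) {
  let i = obj.xIndex;
  // Drifted right: while the right neighbour now has a smaller x, swap.
  while (i + 1 < Xs.length && Xs[i + 1].x < obj.x) {
    Xs[i] = Xs[i + 1];
    Xs[i].xIndex = i;          // tell the displaced neighbour its new slot
    i++;
    Xs[i] = obj;
  }
  // Drifted left: same idea in the other direction.
  while (i > 0 && Xs[i - 1].x > obj.x) {
    Xs[i] = Xs[i - 1];
    Xs[i].xIndex = i;
    i--;
    Xs[i] = obj;
  }
  obj.xIndex = i;
}

Because the tiles only move a little per frame, each call usually performs zero or one swap, which is the amortised O(1) behaviour mentioned above.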
