I'm having a problem with finding N positions in a grid based on a central point and a max range.
I have a grid where each coordinate can be either closed or open. Starting from an open coordinate, I'm trying to find all the open coordinates around it that are valid and whose walk distance from it is equal to or lower than the max walk range.
At first I tried a solution using A*: I would select every coordinate in range, check if it was valid, and if it was I would run A* from it to the center position and count the number of steps; if the count was higher than my max range I would remove the coordinate from my list. Of course, this is really slow for ranges higher than 3, or for grids with more than just a few coordinates.
Then I tried recursively searching for the coordinates, starting with the central one, expanding outward and recursively checking each coordinate's validity. That solution proved to be the most effective, except that each function call was rechecking coordinates that had already been checked and returning repeated values. I thought I could just share the 'checked' list between the function calls, but that broke my code: a call would mark a coordinate as checked and close it even when it still had child nodes that were out of range of that call's center but still in range of the original center, which gave some weird results.
Finally, my question is, how do I do that? Is my approach wrong? Is there a better way of doing it?
Thanks.
I've implemented something similar once. The simple recursive solution you proposed worked fine for small walk ranges, but execution time spun out of control for larger values ...
I improved this by implementing a variant of Dijkstra's algorithm, which keeps a list of visited nodes as you said. But instead of running a full path search for every coordinate in reach like you did with A*, do just the first N iterations of the algorithm, where N is your walk range (the gif on Wikipedia might help you understand what I mean). Your list of visited nodes will then be the reachable tiles you're looking for.
http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
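In case it's useful, here's a rough sketch of that idea in JavaScript, assuming a 4-neighbour grid and a hypothetical isOpen(x, y) check (adapt it to however your grid stores open/closed):

// Rough sketch, not drop-in code: expand outward from the start tile one
// step at a time, never going past maxRange steps. isOpen(x, y) is a
// placeholder for however your grid marks a coordinate as open.
function reachableTiles(startX, startY, maxRange, isOpen) {
  const key = (x, y) => x + ',' + y;
  const visited = new Map();                    // key -> walk distance in steps
  visited.set(key(startX, startY), 0);
  let frontier = [[startX, startY]];

  for (let step = 1; step <= maxRange; step++) {
    const next = [];
    for (const [x, y] of frontier) {
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = x + dx, ny = y + dy;
        if (!isOpen(nx, ny) || visited.has(key(nx, ny))) continue;
        visited.set(key(nx, ny), step);
        next.push([nx, ny]);
      }
    }
    frontier = next;
  }

  // Every visited key is an open coordinate reachable in maxRange steps or fewer.
  return [...visited.keys()].map(k => k.split(',').map(Number));
}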
This is a math question - but really I'm looking for some vocabulary to help me google what I'm trying to do here. God bless you if you manage to read this and understand what I'm talking about.
I have some javascript code that looks at the frequency data from a song as it is playing and turns the data into 2 numbers. These numbers change several times per second depending on what is going on in the song.
One number (upperAvgFr) represents the 'loudness' of the values coming from the average upper frequencies, and the other (lowerAvgFr) represents the loudness coming from the average lower frequencies. I modulate these values so they stay between 0 and 1.
I'm then taking these 2 numbers and translating them into visual movement on a 3D object (the objects are just simple glowing squares with some symbols on them). I make them 'pulse' - growing larger and smaller, spinning, or changing color - based on the values of the upper frequency and lower frequency numbers.
So... all that said, I'm finding myself wanting to know some math tricks to play with the distribution.
If I use the upperAvgFr as a multiplier on the height and width of the graphical pulsing square - it would be very small when the value is close to 0, and full size when it is close to 1.
My question is: what are these types of functions called (and where are they found) that can redistribute or transform the values in these distributions so that the distribution is no longer as even?
Maybe the higher values all get closer to 1, and the lower values stay the same. All of them may be "nudged up" a small amount, but less for the lower numbers, and more for the higher numbers.
That would be one instance of a transform that might be useful, another instance might do the same thing in the opposite direction, or pull them all closer to the middle, etc.
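For example, a simple power curve does the kind of "nudge up" I mean (just an illustration, not my actual code):

// Illustrative only: push values toward 1 while leaving 0 and 1 fixed.
// An exponent below 1 lifts the mid/high values more than the low ones;
// an exponent above 1 would squash them down toward 0 instead.
function nudgeUp(value, exponent) {
  return Math.pow(value, exponent);   // e.g. 0.25 -> 0.5, 0.81 -> 0.9 with exponent 0.5
}

const upperAvgFr = 0.81;              // example value in [0, 1] from the analyser
const scale = nudgeUp(upperAvgFr, 0.5);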
Best example that I can find that is "kinda" like what I want are easing functions where things accelerate and move in non-linear ways: https://konvajs.org/docs/tweens/All_Easings.html
What kind of function am I describing here? Where can I find libraries of these functions? I imagine they could come in handy with tweening and easing. I imagine there are some that give "pleasant" feeling distributions; some might be "choppier" or "bouncier." What words am I looking for here?
One of my friends proposed a problem where you have n number of buckets (a, b, c, ..., n), each with a certain percentage of your total balls. You are then given a breakdown of how many balls each bucket should have by the end of the problem. You are also supplied a machine that, in one move, can move unlimited balls from a singular bucket to another singular bucket (ex. 10 balls from bucket A to bucket C). What algorithm would you use to ensure you always have the least number of moves possible?
I got stumped by this. It looks like it could be solved using an extension of Euclid's algorithm, but I'm altogether unsure how. I tested the obvious answer of trying to match the two largest/perfectly-matching problem buckets with each other, but that doesn't work. Any pointers would be helpful.
Use A* on the graph of possible moves with the heuristic function: the number of buckets that differ from their goal amount, divided by two. That is, b/2, where b is the number of buckets which have the wrong amount in them.
A* is often thought of as a pathfinding algorithm for things like routing units in a game, but it works on any graph for which you can find a heuristic that never overestimates the distance to the target node. Most importantly, A* armed with such a heuristic will always find the shortest path. Borrowing from game theory, we can think of a graph (nodes connected by links) where each node is a possible distribution of balls among the buckets and each link represents a legal move the machine can make. Your starting node is the starting distribution of balls, and your target node is the target distribution of balls.
With this graph we could apply a breadth-first search and wait until we happen to visit the target node. This would give us a shortest set of moves from the start distribution to the target distribution, since it is a breadth-first search. (This is more or less equivalent to Dijkstra's algorithm in this case, because each move has the same cost (length) of 1 move.)
However, we can do better. A* searches through a graph using a heuristic. Specifically, it is like breadth-first search except that instead of next visiting an unvisited node nearest to the starting node, A* next visits an unvisited node which minimizes g(n) + h(n), where g(n) is the length from the unvisited node, n, to the starting node and h(n) is a heuristic for the distance from the unvisited node, n, to the goal. This means we spend less computational time hunting for the optimal path by wandering "away" from the goal, since we check the obvious paths to the goal first. It has been mathematically proven that A* will give you the optimal path from your starting node to the goal node if you can find a heuristic that is admissible, meaning a heuristic that never overestimates the distance to the goal. For example, in a video game, the length of a walkable path between two points is always greater than or equal to the as-the-crow-flies distance. Even though our graph does not represent physical space, we can still find an admissible heuristic.
A move can at best make 2 buckets have the correct number of balls, since a move can only affect two buckets at most. Thus, for example, if we count our buckets and see that 4 buckets have the wrong amount of balls in them, then we know that at least 2 moves will be required. Quite possibly more, but it can't be less than 2.
Let's make our heuristic be "the number of buckets which have the wrong number of balls, divided by 2".
Our heuristic doesn't overestimate the number of moves required to reach the desired distribution: even the best kind of move, which fixes two buckets at once, only matches (never beats) the rate the heuristic assumes. Our heuristic underestimates the number of moves quite often, but that is okay; the computation will just take longer, it won't give a wrong result, and the resulting plan of moves will still be the shortest possible. Thus, our heuristic is admissible, meaning that it never overestimates the number of moves.
Therefore, A* with the heuristic "the number of buckets which have the wrong number of balls, divided by 2" will always find the shortest sequence of moves that reaches the target distribution.
Perhaps a better heuristic can be found; if so, the search will go faster. This was merely the first heuristic I thought of.
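For what it's worth, here is a rough, deliberately unoptimized JavaScript sketch of that search (the function name and state representation are just my choices). It tries every possible transfer amount on every expansion, so it is only practical for tiny instances, but it shows the structure:

// Sketch only. A state is an array of bucket counts; a move transfers some
// amount from one bucket to another; the heuristic is the number of wrong
// buckets divided by 2 (rounded up, which is still admissible).
function minMoves(start, target) {
  const key = (s) => s.join(',');
  const h = (s) => Math.ceil(s.filter((v, i) => v !== target[i]).length / 2);

  // Naive priority queue: re-sort the open list on every pop. Fine for a sketch.
  const open = [{ state: start, g: 0, f: h(start) }];
  const bestG = new Map([[key(start), 0]]);

  while (open.length > 0) {
    open.sort((a, b) => a.f - b.f);
    const { state, g } = open.shift();
    if (h(state) === 0) return g;                 // every bucket matches the target

    for (let from = 0; from < state.length; from++) {
      for (let to = 0; to < state.length; to++) {
        if (from === to) continue;
        // One legal machine move: any positive amount from `from` to `to`.
        for (let amount = 1; amount <= state[from]; amount++) {
          const next = state.slice();
          next[from] -= amount;
          next[to] += amount;
          const k = key(next);
          if (bestG.has(k) && bestG.get(k) <= g + 1) continue;
          bestG.set(k, g + 1);
          open.push({ state: next, g: g + 1, f: g + 1 + h(next) });
        }
      }
    }
  }
  return -1;                                      // unreachable if the totals don't match
}

// Example: moving 5 balls from the first bucket to the second is 1 move.
console.log(minMoves([10, 0, 5], [5, 5, 5]));     // -> 1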
I've deleted my previous answer. Have you considered the subset sum problem? I'm not sure if it will find the optimal solution for the overall problem, but here is the idea.
For each bucket, let the difference = (target number of balls) - (given number of balls). Let P be a set of positive differences and N be a set of absolute values of negative differences.
Choose the next maximum number from the two sets, and look for a minimum subset sum in the opposite set. If the subset is found, move the balls. Otherwise, look for the subset sum for the next maximum number from the sets.
After finishing looking for subsets, consider the remaining buckets. Move the balls between two of the remaining buckets from opposite sets. This will create a new difference for one of the buckets. It is possible that a new matching subset sum can be formed with the new difference. Search for the subset sum again using that newly formed difference. Repeat until all of the buckets have the target number of balls.
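A rough sketch of the bookkeeping that idea needs (the subset-sum search here is the naive exponential one, just to illustrate):

// Illustrative only. Differences per bucket, split into P (buckets that
// still need balls) and N (buckets holding too many), as described above.
function splitDifferences(given, target) {
  const P = [];                                  // { index, amount } needing balls
  const N = [];                                  // { index, amount } with extra balls
  given.forEach((balls, i) => {
    const diff = target[i] - balls;
    if (diff > 0) P.push({ index: i, amount: diff });
    else if (diff < 0) N.push({ index: i, amount: -diff });
  });
  return { P, N };
}

// Naive subset-sum: return a subset of `items` whose amounts add up to `goal`,
// or null if there is none.
function findSubsetSum(items, goal, chosen = []) {
  if (goal === 0) return chosen;
  if (goal < 0 || items.length === 0) return null;
  const [first, ...rest] = items;
  return (
    findSubsetSum(rest, goal - first.amount, [...chosen, first]) ||
    findSubsetSum(rest, goal, chosen)
  );
}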
I'm not sure if always considering the largest buckets guarantees optimality. Let us know if you find the correct solution!
Consider the following polygon (an agricultural plot)
From this polygon, I would like to extract the "headlands" of the plot, being the consecutive lines (sides) of the polygon (Wikipedia) used for turning on the field. While often only the rows running perpendicular to the lay of the field are considered, I need all sides of the polygon.
Here, a consecutive line means any set of coordinates where the angle between any two coordinates of the set is not larger than a value X (e.g. 30 degrees).
For the given example, the resulting headlands should look like the following:
I wrote a small algorithm trying to accomplish this, basically checking the angle between two coordinates and either pushing the given coordinate to the existing lineString if the angle is below X degrees or creating a new lineString (headland) if not.
Check out the following Gist
However, in some cases corners of a field are rounded, therefore may consist of many coordinates within small distances of each other. The relative angles then may be less than the value X, even though the corner is too sharp to actually be cultivated without turning.
In order to overcome that issue, I added an index that increases whenever a coordinate is too close for comparison, so that the next coordinate will be checked against the initial coordinate. Check out the following Gist.
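Stripped down to planar coordinates (not the exact code from the Gist), the idea currently looks roughly like this:

// Simplified sketch: walk the ring, compare the direction of successive
// segments, and start a new headland when the turn exceeds maxAngle degrees.
// Coordinates closer than minDist are kept but skipped for the comparison,
// so rounded corners can't hide a sharp turn.
function splitIntoHeadlands(coords, maxAngle, minDist) {
  const bearing = (a, b) => (Math.atan2(b[1] - a[1], b[0] - a[0]) * 180) / Math.PI;
  const turn = (a, b) => {
    const d = Math.abs(a - b) % 360;
    return d > 180 ? 360 - d : d;
  };
  const dist = (a, b) => Math.hypot(b[0] - a[0], b[1] - a[1]);

  const headlands = [[coords[0]]];
  let anchor = coords[0];                            // last coordinate far enough away to compare
  let prevBearing = null;

  for (let i = 1; i < coords.length; i++) {
    const point = coords[i];
    if (dist(anchor, point) < minDist) {
      headlands[headlands.length - 1].push(point);   // too close: keep it, compare the next one
      continue;
    }
    const b = bearing(anchor, point);
    if (prevBearing !== null && turn(prevBearing, b) > maxAngle) {
      headlands.push([anchor]);                      // sharp corner: start a new headland
    }
    headlands[headlands.length - 1].push(point);
    anchor = point;
    prevBearing = b;
  }
  return headlands;
}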
This works for simple plots like the one in the example; however, I am struggling with more complex ones such as the following.
Here, the bottom headland is recognised as one lineString together with the headland on the right, even though optically a sharp corner is given. Also, two coordinates in the upper right corner were found to be a separate headland even though they should be connected to the right headland. The result should therefore look like the following:
What I would like to know is if there is an approach that efficiently decomposes any polygon into its headlands, given a specific turning angle. I set up a repo for the code here, and an online testing page with many examples here, if that helps.
So I am trying to make a little game, both for practice and for fun. It's my first attempt; I've never had anything to do with game development before.
You can see what I've tried so far at http://myfirstgame.e-ddl.com/. I've been working on it for 6-8 hours or so, and realized I'd better ask before going on.
The way I have it now, I have a main loop that runs every 20 milliseconds or so. This loop calls 2 functions:
Handle keystrokes (which iterates through the obstacles array, checks if the player's future position collides with any obstacle object, and changes the player's properties to the future position values).
Go through the "need update" array and change the elements' CSS details to reflect the changes made.
I have several questions:
Is the above a good idea for handling collision? If not, what would be a better way? (At around 800-1500 obstacle objects on the map, the game slows down.)
To calculate distance, I use the distance-between-two-points equation. If I only have one point, an angle, and a distance, how can I find the second point's (x, y)?
What would be better, canvas or DOM? (Not an important question, as I already have it done with the DOM.)
Thanks, everyone.
I've found the solution to what I was looking for.
About the collision: the way I was doing it was entirely wrong. I will list the correct way farther down.
About the distance: the solution I came up with is checking the distance from the player's current position to the target; if the player's step is bigger than that distance, I only walk the remaining distance instead of the full step.
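In code, that clamping, plus getting the second point from an angle and a distance (my other question), looks roughly like this:

// Move the player at most `step` pixels toward the target, never past it.
function stepToward(player, target, step) {
  const dx = target.x - player.x;
  const dy = target.y - player.y;
  const distance = Math.hypot(dx, dy);        // distance between the 2 points
  const travel = Math.min(step, distance);    // don't overshoot the target
  const angle = Math.atan2(dy, dx);           // angle in radians
  return {
    x: player.x + travel * Math.cos(angle),   // 2nd point from a point, an angle and a distance
    y: player.y + travel * Math.sin(angle)
  };
}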
As for canvas vs DOM - it seems both have their pros and cons.
Now, for the collision:
The correct way of doing it is to create a pixel map array. So if your canvas or container node is 800 wide by 500 high, you'll have a 2D array that represents those pixels.
Then, when I check a position, I simply check whether the player's current position plus the steps toward the future position hits an object.
So, something like:
if (array[300][500]) {   // that pixel is covered by an obstacle
    return false;        // so the move is blocked
}
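Building that array would look something like this (just a sketch; I'm assuming the obstacles are axis-aligned rectangles with x, y, width and height properties):

// Sketch: an 800x500 map where true means "this pixel is covered by an obstacle".
const WIDTH = 800, HEIGHT = 500;
const array = [];
for (let x = 0; x < WIDTH; x++) {
  array.push(new Array(HEIGHT).fill(false));
}

// Mark one obstacle (assumed to be an axis-aligned rectangle) on the map.
function markObstacle(obstacle) {
  for (let x = obstacle.x; x < obstacle.x + obstacle.width; x++) {
    for (let y = obstacle.y; y < obstacle.y + obstacle.height; y++) {
      if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT) {
        array[x][y] = true;
      }
    }
  }
}

// Position check, as in the snippet above (out of bounds counts as blocked).
function isBlocked(x, y) {
  x = Math.round(x);
  y = Math.round(y);
  if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return true;
  return array[x][y];
}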
That's what I found out.
If anyone has a better solution than that, let me know.
I am rendering a map out of SVG paths (using jVectormap).
There are cases where one region has to be merged with the neighboring region.
Unfortunately both regions don't touch each other and I have to interpolate to fill the space in between.
jVectormap uses very simple SVG paths, with M to set the absolute start point and l to connect relative points.
Does any of the SVG libraries cover such an operation?
I haven't tried this, but you may get around it by running the converter at jVectormap with the following parameters:
--buffer_distance=0
--where="ISO='region_1' OR ISO='region_2'"
Where region_1 and region_2 are the two regions that you need to merge.
Solving the problem this way also means that the generated SVG paths are true to the original coordinates, whereas a fix applied afterwards may lead to some (probably minor) inconsistencies.
This might not be the kind of answer you're looking for, but using Raphael.js you could loop over the entire length of the path of one region with getPointAtLength(), comparing each point with all points of the second region. If the coordinates are closer than n pixels to any coordinates of the second region and the previous coordinates weren't, then that could be regarded as a "glue" point. You would then jump to the second region and start looping over it; if the next point is still closer than n pixels, go in the opposite direction, and if it is still closer, change direction and follow the path until you find a point that's farther away from the original region than n pixels. Continue looping in that direction until you once again find a new "glue" point, where you switch back to the original region in the same manner. All points which weren't covered in this final loop can be discarded (or you could simply create a new shape based on the points you came across whilst looping over the length of the original region).
True enough, it's not the easiest script to write, but it should be quite doable, I believe, especially since you can use a function like getPointAtLength to find the points between the defined SVG points (though you only need to 'record' the defined points, and that's sort of the hard part, as Raphael.js doesn't exactly have any functions which would help with this; still, even that shouldn't be too hard to match up by hand, in code of course).
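An untested sketch of just the first part of that idea (sampling one region's outline and finding where it first comes within n pixels of the other), assuming Raphael's static path helpers getTotalLength and getPointAtLength:

// Untested sketch. pathA and pathB are SVG path strings, n is the "glue"
// distance in pixels, sampleStep is how finely the paths are sampled.
// Returns the lengths along pathA where it switches from "far" to "close".
function findGluePoints(pathA, pathB, n, sampleStep) {
  // Pre-sample points along the second region's outline.
  const pointsB = [];
  const totalB = Raphael.getTotalLength(pathB);
  for (let lb = 0; lb <= totalB; lb += sampleStep) {
    pointsB.push(Raphael.getPointAtLength(pathB, lb));
  }

  const isClose = (p) =>
    pointsB.some((q) => Math.hypot(p.x - q.x, p.y - q.y) < n);

  // Walk along the first region and record where "far" flips to "close".
  const glue = [];
  let wasClose = false;
  const totalA = Raphael.getTotalLength(pathA);
  for (let la = 0; la <= totalA; la += sampleStep) {
    const close = isClose(Raphael.getPointAtLength(pathA, la));
    if (close && !wasClose) glue.push(la);
    wasClose = close;
  }
  return glue;
}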