Possible to pathfind a 1000x1000 grid map in Node.js?

I am making a game with a Node server that uses pathfinding for the enemies. With a 100x100 grid map I did not see any performance slowdowns, but when I raised the size to 1000x1000, there is now a one-second delay on the server each time a path is generated.
Currently I am using PathFinding.js with A* pathfinding. Is there a better pathfinding library or algorithm that will allow the use of a 1000x1000 grid without a delay, or am I out of luck?
Any help is appreciated, thank you.
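For context, a minimal sketch of what an A* query with PathFinding.js looks like, going by the library's documented API (the coordinates are arbitrary):

var PF = require('pathfinding');

// Building the grid is cheap; the search is what gets expensive
// as the map grows to 1000x1000.
var grid = new PF.Grid(1000, 1000);
var finder = new PF.AStarFinder();

// findPath mutates the grid's bookkeeping, so clone it per query.
var path = finder.findPath(0, 0, 999, 999, grid.clone());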

What do you mean by "delay"? Did it simply take longer to process the larger grid while nothing else was happening, or did the processing "freeze" everything else while the path was calculated and then continue on?
Taking longer to process is natural for a larger search space: more cells means more compute. There's no way around that, other than extra CPU cores or some sort of processing service. That might be an answer to your question right there.
Node.js is single-threaded, so all that processing will hold up the other actions that are going on. There might be ways to run the path processing in chunks so it doesn't noticeably affect other things, though that depends on how the library is built. You could also chunk the grid into more manageable segments for the pathing algorithm (would four 500x500 grids be almost the same? That kind of thing). Or run two different servers on the same machine, one for pathing and one for everything else, and split your requests between them. A sketch of the extra-thread approach follows below.
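A minimal sketch of offloading the search to a separate thread with Node's worker_threads module, so the main event loop stays responsive (pathfinding-worker.js is a hypothetical file that would run A* over the grid and post the result back):

const { Worker } = require('worker_threads');

function findPathAsync(grid, start, end) {
  return new Promise((resolve, reject) => {
    // The worker runs on its own thread; the main event loop keeps
    // serving requests while the search runs.
    const worker = new Worker('./pathfinding-worker.js', {
      workerData: { grid, start, end }
    });
    worker.once('message', resolve); // the worker posts the finished path
    worker.once('error', reject);
  });
}

For many queries, a pool of long-lived workers would avoid paying the thread start-up cost on every path.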

Related

How do images impact the browser memory relative to JS?

I am curious about how media such as images relate to memory in the browser and the JavaScript heap. Specifically, how many JS objects would be equivalent to a 2MB image?
Disclaimer - I acknowledge this question is too vague for a precise answer. I will provide some arbitrary constraints, but ultimately I'm looking for a general way to look at this problem. Suppose I am building a chat app and considering keeping all the messages ever sent in memory: how might I calculate a tipping point for a reasonable number of messages to keep in memory? How do I factor in that some messages are images?
You may assume
The browser is chrome.
The image is in JPG format, loaded using new Image() in JS and then inserted into the DOM.
Each object has about three to five key-value pairs. Keys and values are strings or numbers, ranging from small ints to strings of about 10 ASCII chars.
Suppose the 2MB image is visible on screen.
My intuition tells me that the memory cost of a 2MB image is way less than the cost of (2MB / size_of_obj_in_bytes) JS objects, because:
The bytes of the image are not in JS land, and are therefore somehow compressed and optimised away.
The objects will not exist in a silo; references will be created throughout user code, consuming more memory as they are passed through functions.
There will be lots of garbage collection overhead.
I don't know for certain that I'm right and I certainly don't know by how much or even how to begin measuring this. And since you can't optimize what you can't measure...
Disclaimer 2 - Premature optimization is the root of all evil etc etc. Just curious about digging deeper into the internals.
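For what it's worth, a minimal sketch of how one could start putting rough numbers on this (the per-object overhead below is an assumed round figure for illustration, not a measured V8 value; the decoded-bitmap rule of thumb, about 4 bytes per pixel for RGBA, is standard):

function estimateImageBytes(width, height) {
  // A decoded, displayed image costs roughly width * height * 4 bytes,
  // regardless of its compressed file size: a 1920x1080 JPG decodes to
  // about 8.3MB even if the file on disk is 2MB.
  return width * height * 4;
}

function estimateMessageBytes(pairs, bytesPerPair) {
  var objectOverhead = 50; // assumed flat per-object cost, illustration only
  return objectOverhead + pairs * bytesPerPair;
}

var imageCost = estimateImageBytes(1920, 1080);             // ~8.3MB
var messageCost = estimateMessageBytes(4, 30);              // ~170 bytes
var messagesPerImage = Math.round(imageCost / messageCost); // tens of thousands

Chrome's DevTools heap snapshots are the way to replace these guesses with measurements for your actual objects.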

Box2D rope, JavaScript vs Objective-C

I had a question for the experienced fellers. I'm trying to produce a game where you move an object with a chain hanging below it. I'm using Box2DWeb and EaselJS with HTML5/CSS, and I plan on wrapping it with PhoneGap once I get it running properly. I've been testing on Google Chrome on OS X, which works great, and on iOS Safari, where I've found I'm already running into a performance issue on the iPhone with the chain; having profiled it, the chain is the biggest culprit.
It is a series of 25 small bodies linked together by revolute joints. I've played with a ton of different methods (including rope joints) and this is the way I get the least stretch and bounce (I want it to be a rope). I wondered, for a start: does anybody know of a better way to produce rope with Box2D? And second: other than reducing step iterations, reducing link bodies, etc., is there any way to do it without hurting performance?
And my MAIN question for the guys who know a bit about PhoneGap/JS games: is a 25-body chain at 30fps asking too much of this implementation? Or might I get away with it?
I know AS3.0 well and JS 'OK'; I think starting over in Objective-C/C++ would turn this into a year-long project, as I don't even know the first thing to ask Google...
Thanks in advance!
Josh
I have found in our own (C++ based) projects that the number of vertices on dynamic bodies heavily affects performance (and iOS devices are not among the best performers). In your case I assume it's going to be 25 square bodies (4 vertices each), plus the body at the end of the chain, all active at the same time. All of that is going to affect performance quite a bit.
I would try fiddling with a rope joint instead. The only other thing I can think of is: if you are using squares as links in the chain, try using circles. I found they perform much better, though the behavior of the chain will change. You can put limits on the revolute joints to control that, though; see the sketch below.
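A minimal sketch of that idea with Box2DWeb, as I remember its namespaces (verify against your build; makeChain and its parameters are made up for illustration): circle links joined by revolute joints whose angle limits stiffen the chain.

var b2Vec2 = Box2D.Common.Math.b2Vec2,
    b2BodyDef = Box2D.Dynamics.b2BodyDef,
    b2Body = Box2D.Dynamics.b2Body,
    b2FixtureDef = Box2D.Dynamics.b2FixtureDef,
    b2CircleShape = Box2D.Collision.Shapes.b2CircleShape,
    b2RevoluteJointDef = Box2D.Dynamics.Joints.b2RevoluteJointDef;

function makeChain(world, anchorBody, linkCount, linkRadius) {
  var prev = anchorBody;
  for (var i = 0; i < linkCount; i++) {
    var bodyDef = new b2BodyDef();
    bodyDef.type = b2Body.b2_dynamicBody;
    bodyDef.position = new b2Vec2(
      anchorBody.GetPosition().x,
      anchorBody.GetPosition().y + (i + 1) * linkRadius * 2); // hang downward

    var fixDef = new b2FixtureDef();
    fixDef.shape = new b2CircleShape(linkRadius); // circles, not 4-vertex boxes
    fixDef.density = 1.0;

    var link = world.CreateBody(bodyDef);
    link.CreateFixture(fixDef);

    var jointDef = new b2RevoluteJointDef();
    jointDef.Initialize(prev, link, prev.GetPosition());
    jointDef.enableLimit = true;        // limit the swing per link
    jointDef.lowerAngle = -Math.PI / 8; // to reduce stretch and bounce
    jointDef.upperAngle = Math.PI / 8;
    world.CreateJoint(jointDef);

    prev = link;
  }
  return prev; // last link, e.g. to attach the hanging object to
}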

More efficient way to build/transfer a large data structure?

I'm building a mobile Boggle-type web app with Node.js. I'm trying to find a more efficient way to load/build a massive dictionary (180,000+ words). I currently have it working, but the load time is quite long: users have to wait about 15 seconds for the entire thing to build, and some time out before it has finished loading. I was wondering if anyone has any tips to improve the speed.
The way I'm currently doing this (which is probably completely inefficient):
I broke the list down into 26 arrays, one for each letter, and stuck each array in its own JavaScript file.
When the app loads, it runs a recursive function which fetches the next JS file and loads in its array, overwriting the previous one. It then loops through the entire array and loads each new word into my Trie data structure.
The files with the arrays in them are around 2MB combined. Once built, the data structure itself clocks in at around 12MB, which isn't so bad on a desktop computer, but does weigh down a couple of my users' smartphones.
This needs to be built on the client side to allow instant lookups. The way I'm doing it currently works but I know there has to be a better way.
Another tactic is to convert your recursive code into non-recursive code that uses an explicit stack, saving only the objects you actually need; see the sketch below.
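A minimal sketch of the explicit-stack idea, using a made-up Trie shape (nodes with an isWord flag and a children map) for illustration:

function countWords(root) {
  var stack = [root]; // replaces the call stack of the recursive version
  var count = 0;
  while (stack.length) {
    var node = stack.pop();
    if (node.isWord) count++;
    for (var ch in node.children) {
      stack.push(node.children[ch]); // visit children without recursing
    }
  }
  return count;
}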
Have you tried profiling your code?
To answer the question of the fastest loading time: are you doing it in this fashion? (Without more code, we can't possibly know.)
function LoadFiles(fileArray) {
  var file = fileArray.shift(); // take the first file off the queue
  $.ajax(file).success(function (data) {
    /* yes, my object is a little funky, I'm focused on writing pseudocode */
    wordLibraryAdd(data);
    if (fileArray.length) // on a zero length, quit processing
      setTimeout(function () { LoadFiles(fileArray); }, 50); // a 50ms buffer between each load isn't bad
  });
}

HTML canvas performance when drawing lots of lines

I'm currently writing an application that displays a lot, and I mean a lot, of 2D paths (made of hundreds or thousands of tiny segments) on an HTML5 canvas. Typically, a few million points. These points are downloaded from a server into a binary ArrayBuffer.
I probably won't be using that many points in the real world, but I'm kinda interested in how I could improve the performance. You can call it curiosity if you want ;)
Anyway, I've tested the following solutions:
Using gl.LINES or gl.LINE_STRIP with WebGL, and computing everything in shaders on the GPU. Currently the fastest; it can display up to 10M segments without flinching on my MacBook Air. But there are very strict constraints on the binary format if you want to avoid processing things in JavaScript, which is slow.
Using Canvas2D, draw a huge path with all the segments in one stroke() call. When I'm getting past 100k points, the page freezes for a few seconds before the canvas is updated. So, not working here.
Using Canvas2D, but draw each path with its own stroke() call. Despite what others have been saying on the internet, this is much faster than drawing everything in one call, but still a lot slower than WebGL. Things start to get bad when I reach about 500k segments.
The two Canvas2D solutions require looping through all the points of all the paths in JavaScript, so this is quite slow. Do you know of any method(s) that could improve JavaScript's iteration speed over an ArrayBuffer, or its processing speed in general?
But, what's strange is, the screen isn't updated immediately after all the canvas draw calls have finished. When I start getting to the performance limit, there is a noticeable delay between the end of the draw calls and the update of the canvas. Do you have any idea where that comes from, and is there a way to reduce it?
First, WebGL was a nice and hyped idea, but the amount of processing required to decode and display the binary data simply doesn't work in shaders, so I ruled it out.
Here are the main bottlenecks I've encountered. Some of them are quite common in general programming, but it's a good idea to remember them:
It's best to use multiple small for loops
Create variables and closures at the highest level possible; don't create them inside the for loops
Render your data in chunks, and use setTimeout to schedule the next chunk after a few milliseconds: that way, the user will still be able to use the UI (see the sketch after this list)
JavaScript objects and arrays are fast and cheap; use them. It's best to read/write them in sequential order, from beginning to end.
If you don't write data sequentially in an array, use objects (because non-sequential reads/writes are cheap for objects) and push the indexes into an index array. I used a SortedList implementation to keep the indexes sorted, which I found here. The overhead was minimal (about 10-20% of the rendering time), and in the end it was well worth it.
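A minimal sketch of the chunked-rendering point (segments and chunkSize are placeholders for your own data and tuning):

function renderInChunks(ctx, segments, chunkSize) {
  var i = 0;
  function renderChunk() {
    var end = Math.min(i + chunkSize, segments.length);
    ctx.beginPath();
    for (; i < end; i++) {
      var s = segments[i];
      ctx.moveTo(s.x1, s.y1);
      ctx.lineTo(s.x2, s.y2);
    }
    ctx.stroke(); // one stroke per chunk, not per segment
    if (i < segments.length) {
      setTimeout(renderChunk, 10); // yield so the UI stays responsive
    }
  }
  renderChunk();
}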
That's about everything I remember. If I do find something else, I'll update this answer!

Pathfinding: How to create path data for the pathfinding algorithm?

I realize this is not strictly a programming problem, but as SO is the best resource for programming-related problems, I decided to try it out. :)
I have a project where I need to do 3D pathfinding with JavaScript, inside a building. Dijkstra's algorithm is probably the best fit for this, as it handles irregular shapes quite nicely.
However, the problem is this:
Dijkstra requires a node structure to work. But how do I create that data? Obviously some sort of conversion needs to be done from the base data, but how do I create that base data? Going through the blueprint, getting x & y values for each possible path node and calculating the distances by hand seems a bit excessive... and prone to swearwords...
I was even thinking of using Google SketchUp for this: drawing lines for each possible path, but then the problem is getting the path data out of it. :/
I can't be the first person to have this problem... Any ideas? Are there any ready-made tools for creating path data?
I could not find any ready-made tools, so I ended up creating the path data as lines in Google SketchUp, exporting them as Collada files and writing my own converter for the Collada XML data.
This can all be done in code by constructing a 3D grid and removing the cubes that intersect with 3D objects; see the sketch below.
I would then layer multiple 3D grids (doubling the cell size each time), which gives a more general idea of reachability (each layer constructed from the smaller ones). Then, by sheer virtue of how pathfinding algorithms work, you will always find the most efficient path from A to B, and it will automatically be routed through the largest cells (and therefore the fewest calculation steps). Note: give the larger 3D grids a slightly lower weighting so that their paths are favoured.
This can be used for many applications. For example, if you can only walk on the ground, simply remove the blocks in unreachable areas.
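A minimal sketch of the basic grid construction (intersectsObstacle stands in for your own geometry test against the building model):

function buildGridGraph(size, cellSize, intersectsObstacle) {
  var nodes = {};
  // keep only the cells that don't collide with the 3D model
  for (var x = 0; x < size; x++)
    for (var y = 0; y < size; y++)
      for (var z = 0; z < size; z++)
        if (!intersectsObstacle(x * cellSize, y * cellSize, z * cellSize, cellSize))
          nodes[x + ',' + y + ',' + z] = { x: x, y: y, z: z, neighbors: [] };

  // link each surviving cell to its six axis-aligned neighbors
  var dirs = [[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]];
  for (var key in nodes) {
    var n = nodes[key];
    dirs.forEach(function (d) {
      var neighborKey = (n.x + d[0]) + ',' + (n.y + d[1]) + ',' + (n.z + d[2]);
      if (nodes[neighborKey]) n.neighbors.push(nodes[neighborKey]);
    });
  }
  return nodes; // feed this adjacency structure to Dijkstra
}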
