Find in Multidimensional Array - javascript

I have a multi-dimensional array such as:
[
{"EventDate":"20110421221932","LONGITUDE":"-75.61481666666670","LATITUDE":"38.35916666666670","BothConnectionsDown":false},
{"EventDate":"20110421222228","LONGITUDE":"-75.61456666666670","LATITUDE":"38.35946666666670","BothConnectionsDown":false}
]
Is there any plugin available to search for a combination of LONGITUDE and LATITUDE?
Thanks in advance

for (var i in VehCommLost) {
  var item = VehCommLost[i];
  if (item.LONGITUDE == 1 && item.LATITUDE == 2) {
    // gotcha
    break;
  }
}
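As a side note, the same linear search can be written with the built-in Array.prototype.find in newer runtimes (a plain loop like the above works everywhere). A minimal sketch, assuming the array and the string-valued coordinates from the question:
var target = VehCommLost.find(function (item) {
  // coordinates are stored as strings, so compare against the exact string values
  return item.LONGITUDE === "-75.61481666666670" &&
         item.LATITUDE === "38.35916666666670";
});
if (target) {
  // gotcha: target is the matching record
  console.log(target.EventDate);
}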

This is a JSON string. Which programming language are you using alongside the JS? By the way, try parseJSON.

Are the latitudes and longitudes completely random, or are they points along a path, so there is some notion of sequence?
If there is some ordering of the points in the array, perhaps a search algorithm could be faster.
For example:
if the inner array is up to 10,000 elements, test item 5000;
if that value is too high, focus on 1-4999;
if too low, focus on 5001-10000; otherwise 5000 is the right answer;
repeat until the range shrinks to the vicinity, making a straight loop through the remaining values quick enough.
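A rough sketch of that bisection, assuming the array is sorted by LONGITUDE (which field you sort on is an assumption; use whichever coordinate gives your points their ordering):
// Hedged sketch: binary search over points sorted by LONGITUDE (stored as strings, parsed to numbers)
function findByLongitude(points, targetLon) {
  var lo = 0, hi = points.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    var lon = parseFloat(points[mid].LONGITUDE);
    if (lon < targetLon) {
      lo = mid + 1;   // probe too low: search the upper half
    } else if (lon > targetLon) {
      hi = mid - 1;   // probe too high: search the lower half
    } else {
      return mid;     // exact hit
    }
  }
  return -1;          // not found; lo marks the vicinity for a short straight loop
}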

After sleeping on it, it seems to me most likely that the solution to your problem lies in recasting the problem.
Since it is a requirement of the system that you are able to find a point quickly, I'd suggest that a large array is the wrong data structure to support that requirement. It may be necessary to have an array, but perhaps there could also be another mechanism to make the search rapid.
As I understand it you are trying to locate points near a known lat-long.
What if, in addition to the array, you had a hash keyed on lat-long, with the value being an array of indexes into the huge array?
Latitude and Longitude can be expressed at different degrees of precision, such as 141.438754 or 141.4
The precision relates to the size of the grid square.
With some knowledge of the business domain, it should be possible to select a reasonably-sized grid such that several points fit inside but not too many to search.
So the hash is keyed on lat-long coords such as '141.4#23.2', with the value being a smaller array of indexes [3456,3478,4579,6344]. Using the indexes we can easily access the items in the large array.
Suppose we need to find 141.438754#23.217643: we can reduce the precision to '141.4#23.2' and see if there is an array for that grid square.
If not, widen the search to the (3*3-1=) 8 adjacent grid squares (plus or minus one unit).
If not, widen to the (5*5-9=) 16 grid squares in the next ring out. And so on...
Depending on how the original data is stored and processed, it may be possible to generate the hash server-side, which would be preferable. If you needed to generate the hash client-side, it might be worth doing if you reused it for many searches, but would be kind of pointless if you used the data only once.
Could you comment on the possibility of recasting the problem in a different way, perhaps along these lines?
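A minimal sketch of that hash in JavaScript, assuming the coordinates are stored as strings as in the question; the grid size (one decimal place) and the function names are illustrative:
// Hedged sketch: bucket point indexes by a reduced-precision lat-long key
function gridKey(lon, lat) {
  // one decimal place stands in for the grid-square size; note toFixed rounds,
  // so border points may land in a neighbouring square (the widening search above covers this)
  return parseFloat(lon).toFixed(1) + '#' + parseFloat(lat).toFixed(1);
}
function buildGridIndex(points) {
  var index = {};
  for (var i = 0; i < points.length; i++) {
    var key = gridKey(points[i].LONGITUDE, points[i].LATITUDE);
    if (!index[key]) { index[key] = []; }
    index[key].push(i);   // store the index into the big array, as described above
  }
  return index;
}
// usage: candidates is a small array of indexes to scan with a plain loop
var gridIndex = buildGridIndex(VehCommLost);
var candidates = gridIndex[gridKey("-75.61481666666670", "38.35916666666670")] || [];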


3D Grid for multiple shapes

A few months ago I made a small terrain generator, like Minecraft, for a school project.
The way I did this was by using multiple chunks. Each chunk contained a 3-dimensional array that stored the blocks.
Every position in this array corresponded with the position of the block it contained.
blocks[x][y][z] = new Block();
Now I would like to add different sizes of blocks. However, I can't do that with the way I am storing the blocks right now, because bigger blocks would have to be spread over multiple positions in the 3-dimensional array.
An example of a game with different sizes of blocks (and different shapes) is LEGO Worlds. How does a game like this store all these little blocks?
I hope someone can help me with this.
The language I am using is Javascript in combination with WebGL.
Thanks in advance!
In my experience there are a few different ways of tackling an issue like this, but the one I'd recommend would depend on the amount of time you have to work on this and the scope (how big) you wanted to make this game.
Your Current Approach
At the moment I think you're using what most people would consider the most straightforward approach: storing the voxels in a 3D grid
[Source].
But two problems you seem to be having are that there isn't an obvious way to create blocks that are bigger than 1x1, and that a 3D grid for a world space is fairly inefficient in terms of memory usage (with an array you have memory allocated for every cell, including empty space; JavaScript is no different).
An Alternative Approach
An alternative to using a 3D array would be a different data structure, the full name being a sparse voxel octree.
This, to put it simply, is a tree data structure that works by subdividing an area of space until everything has been stored.
The 2D form of this, where a square subdivides into four smaller quadrants, is called a quadtree; the 3D equivalent, which divides into eight octants, is called an octree. This approach is generally preferable when possible, as it's much more efficient: the tree only occupies more memory when it's absolutely essential, and it can also be packed into a 1D array (technically a 3D array can be too).
A common tactic used with quad/octrees in some block-based games is, when a region of the same kind of voxel fits into one larger quadrant of the tree, to simply stop subdividing there, as there's no reason to go deeper if all the data is the same.
The other optimisation is the "sparse" part: regions of empty space (air) are simply deleted, since empty space doesn't do anything special and its location can be inferred.
[SVO Source]
[Z Order Curve Source]
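For illustration only, here is a bare-bones octree node along those lines; the field names are made up and the packing / Z-order tricks from the sources above are omitted:
// Hedged sketch: a minimal sparse octree node (names are illustrative)
class OctreeNode {
  constructor(x, y, z, size) {
    this.x = x; this.y = y; this.z = z;   // corner of the cube this node covers
    this.size = size;                     // edge length, ideally a power of two
    this.value = null;                    // uniform block type if the whole cube is one material
    this.children = null;                 // array of 8 child nodes once subdivided, else null
  }
  subdivide() {
    var half = this.size / 2;
    this.children = [];
    for (var i = 0; i < 8; i++) {
      // each bit of i picks the low or high half of one axis
      this.children.push(new OctreeNode(
        this.x + ((i & 1) ? half : 0),
        this.y + ((i & 2) ? half : 0),
        this.z + ((i & 4) ? half : 0),
        half
      ));
    }
  }
}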
Recommended Approach
Unless you have a few months to complete your game and you're at university, I seriously wouldn't recommend an SVO (though reading up about it could impress any teachers you have). Instead I'd recommend taking the same approach that Minecraft visibly appears to take: e.g. if a door is 1x2 but blocks can only be 1x1, just make it two blocks.
In the example of a door you would have four unique blocks in total: two for the upper and lower halves, and two variations of each for being opened or closed.
E.g.:
var cubeProgram;   // shader program
var cubeVBO;       // vertex buffer (I recommend combining vertex & UV coords)
var gl;            // rendering context
// Preset list of block IDs
var BLOCK_TYPES = {
  DOOR_LOWER_OPEN: 0,
  DOOR_UPPER_OPEN: 1,
  DOOR_LOWER_CLOSED: 2,
  DOOR_UPPER_CLOSED: 3
};
var BLOCK_MESHES = {
  GENERIC_VBO: null,
  DOOR_UPPER_VBO: null,
  DOOR_LOWER_VBO: null
};
// Declare a Door class using ES6 syntax
class Door {
  // Assume x & y are the lower half of the door (smaller y is higher up the map)
  constructor(x, y, map) {
    if (y - 1 < 0) {
      console.error("Error: Top half of the door goes outside the map");
      return;
    }
    this.x = x;
    this.y = y;
    map[x][y]     = BLOCK_TYPES.DOOR_LOWER_OPEN;
    map[x][y - 1] = BLOCK_TYPES.DOOR_UPPER_OPEN;
  }
}
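A hypothetical usage, assuming map is a pre-built 2D array indexed as map[x][y]:
var door = new Door(3, 5, map);
// map[3][5] now holds DOOR_LOWER_OPEN and map[3][4] holds DOOR_UPPER_OPEN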

JS Canvas get pixel value very frequently

I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have no generic size: they can be circles, polygons, or anything else. So my physics engine needs to know exactly where things are, which pixels are walls and which are not. Since I don't think PIXI allows this, I create an invisible canvas where I put all the wall images of the map. Then I use the getImageData function to build an isWall(x, y) function:
function isWall(x, y) {
  // getImageData is on the 2D context, not on the canvas element itself
  return canvas.getContext('2d').getImageData(x, y, 1, 1).data[3] != 0;
}
However, this is very slow (it takes up to 70% of the game's CPU time, according to Chrome's profiler). Also, since I introduced this function, I sometimes get the error "Oops, WebGL crashed" without any additional detail.
Is there a better method to access the value of a pixel? I thought about storing everything in a static bit array (the walls have a fixed size), with 1 corresponding to a wall and 0 to a non-wall. Is it reasonable to have a 10-million-cell array in memory?
Some thoughts:
For a first check: use collision regions for all of your objects. The regions can even be defined per side, depending on shape (i.e. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for hit-test bitmaps (or even 25% if your scenario allows). Our brains are not capable of detecting pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap (based on its region(s)) but transform it into a single typed array such as a Uint8Array holding high and low values (re-use this instead of fetching pixels one by one via the context). Subtract the object's position from the check point and use the result as an offset into your shape region, then hit-test the "bitmap" (a sketch follows after this answer). If the shape rotates, transform incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points etc. You need to test for your scenario).
For close-to-square shaped objects, compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius (no square root needed).
In some cases you can perhaps use collision predictions, calculated before the game starts when all object positions, directions and velocities are known (calculate the complete motion path, find intersections between those paths, and calculate the time/distance to those intersections). If your objects change direction due to other events along their path, this will of course not work so well (or try and see whether re-calculating is beneficial or not).
I'm not sure why you would need 10M cells stored in memory; it's doable, though - but you will need to use something like a quad-tree and split the array up so it becomes efficient to look up a pixel state. IMO you only need to store "bits" for the complex shapes, and you can limit that further by defining multiple regions per shape. For simpler shapes just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case, these sorts of things have to be hand-optimized for the specific scenario, so this is just a general take on it. Other factors such as high velocities, rotation, reflection etc. will affect the approach, and it will quickly become very broad. Hope this gives some input though.
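A rough sketch of the half-resolution hit-mask idea from the list above; the sizes and names are assumptions, and the mask is assumed to be filled once from the shape's bitmap:
var maskScale = 2;                          // one mask cell covers a 2x2 block of canvas pixels
var shapeWidth = 64, shapeHeight = 64;      // illustrative shape size
var maskW = Math.ceil(shapeWidth / maskScale);
var maskH = Math.ceil(shapeHeight / maskScale);
var mask = new Uint8Array(maskW * maskH);   // 1 = solid, 0 = empty; fill once from the shape's bitmap
function hitsShape(px, py, shapeX, shapeY) {
  // translate the point into the shape's local space, then downscale to mask resolution
  var mx = ((px - shapeX) / maskScale) | 0;
  var my = ((py - shapeY) / maskScale) | 0;
  if (mx < 0 || my < 0 || mx >= maskW || my >= maskH) { return false; }
  return mask[my * maskW + mx] === 1;
}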
I use bit arrays to store 0 || 1 info and it works very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
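If you'd rather not pull in a library, a hand-rolled bit grid is roughly this (an untested sketch packing 32 flags per Uint32Array element; this is not the API of the library linked above):
function BitGrid(width, height) {
  this.width = width;
  this.words = new Uint32Array(Math.ceil(width * height / 32));
}
BitGrid.prototype.set = function (x, y) {
  var i = y * this.width + x;
  this.words[i >>> 5] |= (1 << (i & 31));
};
BitGrid.prototype.get = function (x, y) {
  var i = y * this.width + x;
  return (this.words[i >>> 5] & (1 << (i & 31))) !== 0;
};
// ~10 million cells fit in roughly 1.25 MB of Uint32Array storage
var walls = new BitGrid(4000, 2500);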
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required, even if the information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns = {};
// an object containing all vertical wall runs
var yRuns = {};
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y = 50;
var x = 185;
if (!xRuns[y]) { xRuns[y] = []; }
xRuns[y].push({ start: 100, end: 185 });
if (!yRuns[x]) { yRuns[x] = []; }
yRuns[x].push({ start: 50, end: 225 });
Then you can quickly test an [x,y] against the wall runs like this (warning untested code!):
function isWall(x, y) {
  if (xRuns[y]) {
    var a = xRuns[y];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (x >= run.start && x <= run.end) { return true; }
    }
  }
  if (yRuns[x]) {
    var a = yRuns[x];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (y >= run.start && y <= run.end) { return true; }
    }
  }
  return false;
}
This should require very few tests because the x & y exactly specify which array of xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model because there is overhead getting to the specified element of the flat model. You'd have to perf test using both methods.
The wall-run method would likely require much less memory.
Hope this helps...Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)

How to deal with big number of data - periodic table and elements

I would like to operate on quite a big amount of data - the elements of the periodic table.
To start, let the program return the atomic weight of a given element. How would you perform that?
By manually creating a table with 118 elements, searching for the given element in tab1[element][], and then returning tab1[][atomic_weight], iterating up to 118 times?
Or maybe, instead of creating the table in the program, create a file with the data? The languages are C++ and JS (in browser JS you can't deal with local files, only with server-side ones via e.g. AJAX, yes?).
Later it will have to perform more advanced calculations. Of course a database would be helpful, but what about doing it without one?
Here are your steps to make this happen:
Decide the targets you want your application to run on (The web, local machine...)
Learn C++ OR Javascript depending on #1 (Buy a book)
Come back to this question on Stack Overflow
Realize this is not a good question for Stack Overflow
Tips when you get to a point where you can answer your own question:
Use a single dimension array with Objects you have designed. This is one reason why Object Oriented Programming is so great. It can be expanded easily later.
Why the single dimension array?
118 elements is chump change for a computer, even if you went through every element. You have to touch each one anyway, so why make it more complex than a single-dimension array?
You KNOW how large the data structure will be, and it won't change
You could access elements anywhere on the table in O(1) time based on its atomic number
Groups and periods can be deduced by simple math, and therefore also deduced in constant time.
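A tiny sketch of that single-dimension layout (shown in JavaScript, since the question mentions both languages); the two entries are illustrative, the real table would list all 118:
// Hedged sketch: one flat array indexed by atomic number (index 0 unused)
var elements = [];
elements[1] = { symbol: "H",  name: "Hydrogen", weight: 1.008  };
elements[2] = { symbol: "He", name: "Helium",   weight: 4.0026 };
// ... fill in the remaining elements up to 118
function atomicWeight(atomicNumber) {
  return elements[atomicNumber].weight;   // O(1) lookup by atomic number
}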
The gist:
You aren't fooling me. You have a long way to go. Learn to program first.
I recommend placing all the element information into a structure.
For example:
struct Element_Attributes
{
    const char * symbol;
    unsigned int weight;
};
The data structure to contain the elements varies, depending on how you want to access them.
Since there are columns and rows on the Periodic Table of the Elements, a matrix would be appropriate. There are blank areas in the matrix, so your program would have to handle the (wasted) blank space in the matrix.
Another idea is to have row and column links. The row links would point to the next element in the row. The column link would point to the next element in the column. This would have slower access time than a matrix, but would not have empty slots (links).
Also, don't worry about your program's performance unless somebody (a user) says it is too slow. Other entities are usually slower than executing loops. Some of those entities are file I/O and User I/O.
Edit 1 - Implementation
There are 33 columns to the table and 7 rows:
#define MAX_ROWS 7
#define MAX_COLUMNS 33
Element_Attributes Periodic_Table[MAX_ROWS][MAX_COLUMNS];
Manually, an element can be created and added to the table:
Element_Attributes hydrogen = {"H", 1};
Periodic_Table[0][0] = hydrogen;
The table can also be defined statically (when it is declared). This is left as an exercise for the reader.
Searching:
bool element_found = false;
for (unsigned int row = 0; row < MAX_ROWS; ++row)
{
    for (unsigned int column = 0; column < MAX_COLUMNS; ++column)
    {
        // Blank squares have no symbol, so guard before constructing a std::string
        const char * symbol = Periodic_Table[row][column].symbol;
        if (symbol != nullptr && std::string(symbol) == "K") // Search for Potassium
        {
            element_found = true;
            break;
        }
    }
    if (element_found)
    {
        break;
    }
}

Using arraybuffers to store canvas rendering data

Canvas Performance
Lately I'm creating a lot of animations in canvas, and as canvas has no events you need to create your own event system based on coordinates; in short, you need a collision-detection function. As most of the existing code is very long, I rewrote my own, and in doing so I understood how simple it is. So I wrote some sort of game code.
Basically, canvas games are a lot of temporary arrays of numbers, where in most cases a number between 0 and 64, 128 or 256 would be enough, maybe reduced and used as a multiplier:
hitpoints = max * (number / 256);
So I was thinking: what if I store these values in an ArrayBuffer?
var array = new Float64Array(8);
array[0] = 10;
//....
Example (I do it this way... if you know something better feel free to tell me):
// Usable ammunition
var ammo = [
  ['missiles', dmg, range, radius, red, green, blue, duration],
  // all apart from the first are numbers between 0 and 255
  ['projectile', dmg, range, radius, red, green, blue, duration]
];
// Flying ammunition
// this is created inside the canvas animation every time you shoot.
var tempAmmo = [
  [id, timestarted, x, y, timepassed] // these are all integers;
  // the longest is probably the timestamp
];
// I could add the ammo numbers to the tempAmmo (which is faster):
[id, timestarted, x, y, timepassed, dmg, range, radius, red, green, blue, duration]
// then I do the same double array for enemies and flyingEnemies.
Wouldn't it be better to store everything in ArrayBuffers?
What I think (correct me if I'm wrong):
ArrayBuffers are binary data, so they should be faster for the rendering and smaller in memory.
Now, if those two assumptions are correct, how do I properly create an array structure like the one described, maybe choosing the proper typed array type?
Note: in my case I'm using a two-dimensional array, and obviously I don't want to use objects.
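To make the question concrete, the flying-ammunition rows above could be packed into a typed array roughly like this (the field order is taken from the lists above; the sizes and names are only illustrative):
// Hedged sketch: flying ammunition interleaved in one Float64Array
// (Float64 because the timestamps won't fit in 8 or 16 bits; the 0-255 fields could live in a Uint8Array)
var FIELDS = 5;                 // id, timestarted, x, y, timepassed
var MAX_SHOTS = 1024;
var flying = new Float64Array(MAX_SHOTS * FIELDS);
var shotCount = 0;
function addShot(id, timestarted, x, y) {
  var base = shotCount * FIELDS;
  flying[base]     = id;
  flying[base + 1] = timestarted;
  flying[base + 2] = x;
  flying[base + 3] = y;
  flying[base + 4] = 0;         // timepassed, updated each frame
  shotCount++;
}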

How to determine SVG path neighbours?

I've been working with the great JVectorMap library. When the user selects a particular country on the map, I'd like to be able to determine which countries are neighbouring or sharing a border.
My searches for calculating distances between SVG paths haven't gotten me anywhere. Can anyone suggest a good solution to determine which countries are neighbours and which aren't?
Thanks!
I think the best approach would be an array that lists each country and its bordering country names. Then use the array to filter your results. Creating the array would not be difficult, since the following site lists bordering countries in a table:
Land borders
This could take a few hours, but I'm sure others would appreciate your work:)
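For illustration, such a lookup table could be as simple as the following; the ISO codes and borders shown are just examples:
// Hedged sketch: hand-built adjacency table keyed by country code
var neighbours = {
  "PT": ["ES"],
  "ES": ["PT", "FR", "AD"],
  "FR": ["ES", "AD", "BE", "LU", "DE", "CH", "IT", "MC"]
  // ... continue for the rest of the map
};
function isNeighbour(a, b) {
  return (neighbours[a] || []).indexOf(b) !== -1;
}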
Supposing that the country shapes are SVG path elements, you could check whether those countries (paths) have coinciding vertices, or vertices within a given distance threshold.
There isn't functionality to do this directly in jVectorMap. But what you can do is apply an incrementing value to each horizontal or vertical strip of regions (call it something like data-adj), and then do something like:
for (var i in regions) {
  if (Math.abs(this['data-adj'] - regions[i]['data-adj']) <= 1) {
    // is adjacent, so do something
  }
}
This would really only work if you're looking for adjacency, and not for something like how many regions away another region is.
I needed to find out the same thing, so I made this: https://github.com/FnTm/country-neighbors
It mainly contains a single JSON file that describes land-based neighbors for most of the world.
For most cases, it should do what you need.
