Upsert performance decreases with a growing collection (number of documents) - javascript

Use Case:
I'm consuming a REST API which provides battle results of a video game. It is a team vs. team online game, and each team consists of 3 players, each of whom picks one of 100 different characters. I want to count the number of wins, losses, and draws for each team combination. I receive roughly 1000 battle results per second. I concatenate the character IDs (ascending) of each team and then save the wins/losses/draws for each combination.
My current implementation:
const combinationStatsSchema: Schema = new Schema({
  combination: { type: String, required: true, index: true },
  gameType: { type: String, required: true, index: true },
  wins: { type: Number, default: 0 },
  draws: { type: Number, default: 0 },
  losses: { type: Number, default: 0 },
  totalGames: { type: Number, default: 0, index: true },
  battleDate: { type: Date, index: true, required: true }
});
For each returned log I perform an upsert and send these queries in bulk (5-30 rows) to MongoDB:
const filter: any = { combination: log.teamDeck, gameType, battleDate };
if (battleType === BattleType.PvP) {
  filter.arenaId = log.arena.id;
}
const update: {} = { $inc: { draws, losses, wins, totalGames: 1 } };
combiStatsBulk.find(filter).upsert().updateOne(update);
My problem:
As long as I have just a few thousand entries in my combinationStats collection, MongoDB uses only 0-2% CPU. Once the collection has a couple million documents (which happens pretty quickly due to the number of possible combinations), MongoDB constantly uses 50-100% CPU. Apparently my approach is not scalable at all.
My question:
Either of these options could be a solution to the problem defined above:
Can I optimize the performance of my MongoDB solution described above so that it doesn't take that much CPU? (I already index the fields I filter on, and I perform the upserts in bulk.) Would it help to create a hash (based on all my filter fields) to filter on instead, in order to improve performance?
Is there a better database / technology suited to aggregating such data? I could imagine a couple more use cases where I want/need to increment a counter for a given identifier.
Is there a better database / technology suited to aggregate such data? I could imagine a couple more use cases where I want/need to increment a counter for a given identifier.
Edit: After khang commented that it might be related to upsert performance, I replaced my $inc with a $set, and the performance was indeed equally "poor". Hence I tried the suggested find() and then manually update() approach, but the results didn't get any better.

Create a hash on your filter conditions:
I was able to reduce the CPU usage from 80-90% down to 1-5% and saw a higher throughput.
Apparently the filter was the problem. Instead of filtering on these three conditions: { combination: log.teamDeck, gameType, battleDate }, I created a 128-bit hash in my Node application. I use this hash for upserting and set combination, gameType, and battleDate as additional fields in my update document.
For creating the hash I used the metrohash library, which can be found here: https://github.com/jandrewrogers/MetroHash . Unfortunately I cannot explain why the performance is so much better, especially since I had indexed all my previous conditions.
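For illustration, here is a minimal sketch of the idea using Node's built-in crypto module as a stand-in for MetroHash (the field names and the combiStatsBulk bulk operation mirror the question; the hash field and its index are assumptions):

const crypto = require('crypto');

// Derive one indexed key from all filter fields (md5 here only as a readily
// available 128-bit digest; MetroHash128 would serve the same purpose).
function filterHash(combination, gameType, battleDate, arenaId) {
  return crypto
    .createHash('md5')
    .update(`${combination}|${gameType}|${battleDate.toISOString()}|${arenaId || ''}`)
    .digest('hex');
}

const hash = filterHash(log.teamDeck, gameType, battleDate, log.arena && log.arena.id);

combiStatsBulk
  .find({ hash })                    // single indexed equality match instead of a compound filter
  .upsert()
  .updateOne({
    $inc: { draws, losses, wins, totalGames: 1 },
    $setOnInsert: { combination: log.teamDeck, gameType, battleDate }  // keep the original fields for reporting
  });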

In (1.) you assert that you perform upserts in bulk, but based on how this seems to scale, you're probably sending too few rows per batch. Consider doubling the batch size each time the number of stored rows doubles. Please do post Mongo's explain() query plan for your setup.
In (2.) you consider switching to, say, MySQL or PostgreSQL. Yes, that would absolutely be a valid experiment. Again, be sure to post EXPLAIN output alongside your timing data.
There's only a million possible team compositions, and there's a distribution over those, with some being much more popular than others. You only need to maintain a million counters, which is not such a large number. However, doing 1e6 disk I/O's can take a while, especially if they are random reads. Consider moving away from a disk resident data structure, to which you might be doing frequent COMMITs, and switching to a memory resident hash or b-tree. It doesn't sound like ACID type of persistence guarantees are important to your application.
Also, once you have assembled "large" input batches, certainly more than a thousand and perhaps on the order of a million, do take care to sort the batch before processing. Then your counter maintenance problem just looks like merge-sort, either on internal memory or on external storage.
One principled approach to scaling your batches is to accumulate observations in some conveniently sized sorted memory buffer, and only release aggregated (counted) observations from that pipeline stage when the number of distinct team compositions in the buffer is above some threshold K; Mongo or whatever would be the next stage in your pipeline. If K is much more than 1% of 1e6, then even a sequential scan of counters stored on disk would have a decent chance of finding useful update work to do on each disk block that is read.
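As a rough sketch of that buffering stage (the threshold K and the flushToMongo helper are illustrative, not from the question):

const K = 50000;                 // flush once this many distinct compositions are buffered
const buffer = new Map();        // composition key -> { wins, losses, draws, totalGames }

function record(key, result) {   // result is 'wins' | 'losses' | 'draws'
  const entry = buffer.get(key) || { wins: 0, losses: 0, draws: 0, totalGames: 0 };
  entry[result] += 1;
  entry.totalGames += 1;
  buffer.set(key, entry);
  if (buffer.size >= K) flush();
}

function flush() {
  // Sort keys so the downstream counter store is touched in key order (merge-sort-like access pattern).
  const keys = [...buffer.keys()].sort();
  flushToMongo(keys.map(k => ({ key: k, ...buffer.get(k) })));   // e.g. one bulk $inc per key
  buffer.clear();
}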

Related

JavaScript array performance drop at 13k-16k elements

I am doing some performance testing on the creation and reading performance of arrays and noticed that there are some weird characteristics around arrays with about 13k-16k elements.
The below graphs show the time per element it takes to create an array and read from it (in this case summing the numbers in it). capacity and push relate to the way the array was created:
capacity: const arr = new Array(length) and then arr[i] = data
push: const arr = []; and then arr.push(data)
As you can see, in both cases (creating an array and reading from it), there is a performance reduction of about 2-3x compared to the per-element performance at 1k fewer elements.
When creating an array using the push method, this jump happens a bit earlier compared to creating it with the correct capacity beforehand. I assume that this is happening because, when pushing to an array that is already at max capacity, more than the actually needed extra capacity is added (to avoid having to add new capacity again soon) and the threshold for the slower performance path is hit earlier.
If you want to see the code or test it yourself: github
My question: why does the performance drop at around 13k-16k elements?
To me it seems that larger arrays are treated differently in V8 starting at around 13k-16k elements in order to improve performance for them, but the cut-off point (at least in my code) is slightly too early, so the performance drops before the optimizations bring any benefit.
You can see the performance improvements going down after about 500 elements and picking up again after the drop.
Sadly I can't find any information about that.
Also, if you happen to have any idea why there are those spikes at the end of creating with capacity and summing with push, feel free to let me know :)
Edit:
As suggested by @ggorlen, I ran the same test on a different machine (with a different, weaker CPU and less RAM) to rule out caching as the cause of the observed behavior. The results look very similar.
Edit:
I ran Node using the --allow-natives-syntax flag to debug-log the created arrays with %DebugPrint(array);, hoping to see a difference between the different array lengths, but besides the lengths and memory addresses they all look the same. Here is one as an example:
// For array created with capacity
DebugPrint: 000002CB8E1ACE19: [JSArray]
- map: 0x035206283321 <Map(HOLEY_SMI_ELEMENTS)> [FastProperties]
- prototype: 0x036b86245b19 <JSArray[0]>
- elements: 0x02cb8e1ace39 <FixedArray[1]> [HOLEY_SMI_ELEMENTS]
- length: 1
- properties: 0x0114c5d01309 <FixedArray[0]>
- All own properties (excluding elements): {
00000114C5D04D41: [String] in ReadOnlySpace: #length: 0x03f907ac1189 <AccessorInfo> (const accessor descriptor), location: descriptor
}
0000035206283321: [Map]
- type: JS_ARRAY_TYPE
- instance size: 32
- inobject properties: 0
- elements kind: HOLEY_SMI_ELEMENTS
- unused property fields: 0
- enum length: invalid
- back pointer: 0x035206283369 <Map(PACKED_SMI_ELEMENTS)>
- prototype_validity cell: 0x03f907ac15e9 <Cell value= 1>
- instance descriptors #1: 0x009994a6aa31 <DescriptorArray[1]>
- transitions #1: 0x009994a6a9d1 <TransitionArray[4]>Transition array #1:
0x0114c5d05949 <Symbol: (elements_transition_symbol)>: (transition to PACKED_DOUBLE_ELEMENTS) -> 0x0352062832d9 <Map(PACKED_DOUBLE_ELEMENTS)>
- prototype: 0x036b86245b19 <JSArray[0]>
- constructor: 0x031474c124e9 <JSFunction Array (sfi = 000003CECD93C3A9)>
- dependent code: 0x0114c5d01239 <Other heap object (WEAK_FIXED_ARRAY_TYPE)>
- construction counter: 0
// For array created with push
DebugPrint: 000003B09882CE19: [JSArray]
- map: 0x02ff94f83369 <Map(PACKED_SMI_ELEMENTS)> [FastProperties]
- prototype: 0x0329b3805b19 <JSArray[0]>
- elements: 0x03b09882ce39 <FixedArray[17]> [PACKED_SMI_ELEMENTS]
- length: 1
- properties: 0x03167aa81309 <FixedArray[0]>
- All own properties (excluding elements): {
000003167AA84D41: [String] in ReadOnlySpace: #length: 0x02094f941189 <AccessorInfo> (const accessor descriptor), location: descriptor
}
000002FF94F83369: [Map]
- type: JS_ARRAY_TYPE
- instance size: 32
- inobject properties: 0
- elements kind: PACKED_SMI_ELEMENTS
- unused property fields: 0
- enum length: invalid
- back pointer: 0x03167aa81599 <undefined>
- prototype_validity cell: 0x02094f9415e9 <Cell value= 1>
- instance descriptors #1: 0x00d25122aa31 <DescriptorArray[1]>
- transitions #1: 0x00d25122aa01 <TransitionArray[4]>Transition array #1:
0x03167aa85949 <Symbol: (elements_transition_symbol)>: (transition to HOLEY_SMI_ELEMENTS) -> 0x02ff94f83321 <Map(HOLEY_SMI_ELEMENTS)>
- prototype: 0x0329b3805b19 <JSArray[0]>
- constructor: 0x009ff8a524e9 <JSFunction Array (sfi = 0000025A84ABC3A9)>
- dependent code: 0x03167aa81239 <Other heap object (WEAK_FIXED_ARRAY_TYPE)>
- construction counter: 0
Edit
The performance drop for the summing happens when going from an array of size 13_994 to 13_995.
(V8 developer here.)
TL;DR: Nothing to see here, just microbenchmarks producing confusing artifacts.
There are two separate effects here:
What happens at 16384 elements is that the backing store is allocated in "large object space", a special region of the heap that's optimized for large objects. In Chrome, where pointer compression is enabled, this happens at exactly twice as many elements as in Node, where pointer compression is off. It has the consequence that the allocation itself can no longer happen as an inlined sequence of instructions directly in optimized code; instead it's a call to a C++ function, which, aside from having some call overhead, is also a more generic implementation and as such a bit slower (there might be some optimization potential, not sure). So it's not an optimization that kicks in too early; it's just a cost that's paid by huge objects. And it'll only show up prominently in (tiny microbenchmark?) cases that allocate many large arrays and then don't do much with them.
What happens at 13995 elements is that for the specific function sum, that's when the optimizing compiler kicks in to "OSR" (on-stack replace) the function, i.e. the function is replaced with optimized code while it is running. That's a certain one-off cost no matter when it happens, and it will pay for itself shortly after. So perceiving this as a specific hit at a specific time is a typical microbenchmarking artifact that's irrelevant in real-world usage. (For instance, if you ran the test multiple times in the same process, you wouldn't see a step at 13995 any more. If you ran it with multiple sizes, then chances are OSR wouldn't be needed (as the function could switch to optimized code the next time it's called) and this one-off cost wouldn't happen at all.)
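As a small illustration of the OSR point (a sketch, assuming Node and a sum function similar to the one being benchmarked): warming the function up before timing lets it be optimized on an earlier call, so the one-off cost no longer shows up as a step in the measurements.

function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

const arr = Array.from({ length: 20000 }, (_, i) => i);

for (let i = 0; i < 10; i++) sum(arr);      // warm-up runs give the optimizer a chance to kick in

const start = process.hrtime.bigint();
sum(arr);                                   // the measured run now uses optimized code
console.log(`${process.hrtime.bigint() - start} ns`);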

Random IDs in JavaScript

I'm generating random IDs in javascript which serve as unique message identifiers for an analytics suite.
When checking the data (more than 10MM records), there are some minor collisions for some IDs for various reasons (network retries, robots faking data etc), but there is one in particular which has an intriguing number of collisions: akizow-dsrmr3-wicjw1-3jseuy.
The collision rate for the above ID is around 0.0037%, while the rate for the other ID collisions is under 0.00035% (10 times less), out of a sample of 111MM records from the same day. While the other IDs vary from day to day, this one remains the same, so over a longer period the difference is likely larger than 10x.
This is what the distribution of the top ID collisions looks like:
This is the algorithm used to generate the random IDs:
function generateUUID() {
  return [
    generateUUID4(), generateUUID4(), generateUUID4(), generateUUID4()
  ].join("-");
}

function generateUUID4() {
  return Math.abs(Math.random() * 0xFFFFFFFF | 0).toString(36);
}
I reversed the algorithm, and it seems that for akizow-dsrmr3-wicjw1-3jseuy the browser's Math.random() returned the following four numbers in this order: 0.1488114111471948, 0.19426893796638328, 0.45768366415465334, 0.0499740378116197, but I don't see anything special about them. Also, from the other data I collected, it seems to appear especially after a redirect/preload (e.g. Google results, ad clicks, etc.).
So I have 3 hypotheses:
There's a statistical problem with the algorithm that causes this specific collision
Redirects/preloads are somehow messing with the seed of the pseudo-random generator
A robot is smart enough that it fakes all the other data but for some reason is keeping the random id the same. The data comes from different user agents, IPs, countries etc.
Any idea what could cause this collision?

Leaderboard ranking with Firebase

I have a project in which I need to display a leaderboard of the top 20, and if the user is not in the leaderboard, they should appear in 21st place with their current ranking.
Is there an efficient way to do this?
I am using Cloud Firestore as the database. I believe it was a mistake to choose it instead of MongoDB, but I am in the middle of the project, so I must do it with Cloud Firestore.
The app will be used by 30K users. Is there any way to do this without fetching all 30K users?
this.authProvider.afs.collection('profiles', ref => ref.where('status', '==', 1)
  .where('point', '>', 0)
  .orderBy('point', 'desc').limit(20))
This is the code I use to get the top 20, but what would be the best practice for getting the currently logged-in user's rank if they are not in the top 20?
Finding an arbitrary player's rank in a leaderboard, in a manner that scales, is a common hard problem with databases.
There are a few factors that will drive the solution you'll need to pick, such as:
Total Number players
Rate that individual players add scores
Rate that new scores are added (concurrent players * above)
Score range: Bounded or Unbounded
Score distribution (uniform, or are there 'hot scores')
Simplistic approach
The typical simplistic approach is to count all players with a higher score, eg SELECT count(id) FROM players WHERE score > {playerScore}.
This method works at low scale, but as your player base grows, it quickly becomes both slow and resource expensive (both in MongoDB and Cloud Firestore).
Cloud Firestore doesn't natively support count as it's a non-scalable operation. You'll need to implement it on the client-side by simply counting the returned documents. Alternatively, you could use Cloud Functions for Firebase to do the aggregation on the server-side to avoid the extra bandwidth of returning documents.
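A minimal sketch of that client-side count with the modular Firestore SDK (the profiles collection and point field follow the question; reading every higher-scoring document is exactly why this doesn't scale):

import { collection, query, where, getDocs } from "firebase/firestore";

// Naive rank: read every player with a higher score and count them.
async function naiveRank(db, playerScore) {
  const q = query(collection(db, "profiles"), where("point", ">", playerScore));
  const snapshot = await getDocs(q);
  return snapshot.size + 1;      // players above you, plus yourself
}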
Periodic Update
Rather than giving them a live ranking, change it to only updating every so often, such as every hour. For example, if you look at Stack Overflow's rankings, they are only updated daily.
For this approach, you could schedule a function, or schedule App Engine if it takes longer than 540 seconds to run. The function would write out the player list to a ladder collection with a new rank field populated with the player's rank. When a player views the ladder, you can easily get the top X plus the player's own rank in O(X) time.
Better yet, you could optimize further and explicitly write out the top X as a single document as well, so that to retrieve the ladder you only need to read 2 documents, top-X and player, saving money and making it faster.
This approach would work for any number of players and any write rate, since it's done out of band. You might need to adjust the frequency as you grow, though, depending on your willingness to pay. 30K players each hour would be $0.072 per hour ($1.73 per day) unless you did optimizations (e.g., ignore all 0-score players, since you know they are tied for last).
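A hedged sketch of such a periodic job as a scheduled Cloud Function (the ladder collection name and the 60-minute schedule are illustrative):

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.rebuildLadder = functions.pubsub
  .schedule("every 60 minutes")
  .onRun(async () => {
    const db = admin.firestore();
    const players = await db.collection("profiles")
      .where("point", ">", 0)
      .orderBy("point", "desc")
      .get();

    let batch = db.batch();
    let writes = 0;
    for (let i = 0; i < players.docs.length; i++) {
      const doc = players.docs[i];
      // The rank is simply the position in the descending query.
      batch.set(db.collection("ladder").doc(doc.id), { point: doc.get("point"), rank: i + 1 });
      if (++writes === 500) {    // batched writes max out at 500 operations
        await batch.commit();
        batch = db.batch();
        writes = 0;
      }
    }
    if (writes > 0) await batch.commit();
  });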
Inverted Index
In this method, we'll create somewhat of an inverted index. This method works if there is a bounded score range that is significantly smaller than the number of players (e.g., 0-999 scores vs 30K players). It could also work for an unbounded score range where the number of unique scores is still significantly smaller than the number of players.
Using a separate collection called 'scores', you have a document for each individual score (non-existent if no-one has that score) with a field called player_count.
When a player gets a new total score, you'll do 1-2 writes in the scores collection: one write to +1 the player_count for their new score, and, if it isn't their first score, one to -1 their old score. This approach works for both "your latest score is your current score" and "your highest score is your current score" style ladders.
Finding out a player's exact rank is as easy as something like SELECT sum(player_count)+1 FROM scores WHERE score > {playerScore}.
Since Cloud Firestore doesn't support sum(), you'd do the above but sum on the client side. The +1 is because the sum is the number of players above you, so adding 1 gives you that player's rank.
Using this approach, you'll need to read a maximum of 999 documents, averaging around 500, to get a player's rank, although in practice this will be less if you delete scores that have zero players.
Write rate of new scores is important to understand as you'll only be able to update an individual score once every 2 seconds* on average, which for a perfectly distributed score range from 0-999 would mean 500 new scores/second**. You can increase this by using distributed counters for each score.
* Only 1 new score per 2 seconds since each score generates 2 writes
** Assuming an average game time of 2 minutes, 500 new scores/second could support 60,000 concurrent players without distributed counters. If you're using "highest score is your current score", this will be much higher in practice.
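A sketch of that bookkeeping with the modular SDK (collection and field names follow the answer; transactions and distributed counters are omitted, and duplicating the score into each bucket document is an assumption so the range query works):

import { doc, setDoc, increment, collection, query, where, getDocs } from "firebase/firestore";

// Called when a player's total score moves from oldScore to newScore.
async function moveScore(db, oldScore, newScore) {
  await setDoc(doc(db, "scores", String(newScore)),
    { score: newScore, player_count: increment(1) }, { merge: true });
  if (oldScore != null) {
    await setDoc(doc(db, "scores", String(oldScore)),
      { player_count: increment(-1) }, { merge: true });
  }
}

// Rank = 1 + number of players with a strictly higher score, summed client-side.
async function rankFor(db, playerScore) {
  const snap = await getDocs(query(collection(db, "scores"), where("score", ">", playerScore)));
  return 1 + snap.docs.reduce((sum, d) => sum + d.get("player_count"), 0);
}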
Sharded N-ary Tree
This is by far the hardest approach, but it could give you both faster and real-time ranking positions for all players. It can be thought of as a read-optimized version of the Inverted Index approach above, whereas the Inverted Index approach is a write-optimized version of this.
You can follow this related article on 'Fast and Reliable Ranking in Datastore' for a general approach that is applicable. For this approach, you'll want a bounded score (it's possible with an unbounded score, but it will require changes to the below).
I wouldn't recommend this approach as you'll need to do distributed counters for the top level nodes for any ladder with semi-frequent updates, which would likely negate the read-time benefits.
Final thoughts
Depending on how often you display the leaderboard for players, you could combine approaches to optimize this a lot more.
Combining 'Inverted Index' with 'Periodic Update' at a shorter time frame can give you O(1) ranking access for all players.
As long as over all players the leaderboard is viewed > 4 times over the duration of the 'Periodic Update' you'll save money and have a faster leaderboard.
Essentially, each period (say 5-15 minutes), you read all documents from scores in descending order. As you go, keep a running total of player_count. Re-write each score into a new collection called scores_ranking with a new field players_above. This new field contains the running total excluding the current score's player_count.
To get a player's rank, all you need to do now is read the document for the player's score from scores_ranking; their rank is players_above + 1.
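A sketch of that combined rebuild with the Admin SDK (names follow the answer; error handling and chunking of the writes are left out):

async function rebuildScoresRanking(db) {
  const snap = await db.collection("scores").orderBy("score", "desc").get();
  let playersAbove = 0;
  const writes = [];
  for (const d of snap.docs) {
    writes.push(db.collection("scores_ranking").doc(d.id).set({
      score: d.get("score"),
      players_above: playersAbove        // running total excluding this score's own players
    }));
    playersAbove += d.get("player_count");
  }
  await Promise.all(writes);
}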
One solution not mentioned here, which I'm about to implement in my online game and which may be usable in your use case, is to estimate the user's rank if they're not in any visible leaderboard, because frankly the user isn't going to know (or care?) whether they're ranked 22,882nd or 22,838th.
If 20th place has a score of 250 points and there are 32,000 players total, then each point below 250 is worth on average 127 places, though you may want to use some sort of curve so that as they move up a point toward the bottom of the visible leaderboard they don't jump exactly 127 places each time; most of the jumps in rank should happen closer to zero points.
It's up to you whether you want to identify this estimated ranking as an estimate or not, and you could add a random salt to the number so it looks authentic:
// Real rank: 22,838
// Display to user:
player rank: ~22.8k // rounded
player rank: 22,882nd // rounded with random salt of 44
I'll be doing the latter.
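For what it's worth, a toy version of that estimate using the numbers above (the salt range and the rounding are arbitrary choices):

// 20th place has 250 points, 32,000 players total, so each point below 250
// is worth roughly (32000 - 20) / 250 ≈ 127 places.
function estimateRank(playerScore, cutoffScore = 250, cutoffRank = 20, totalPlayers = 32000) {
  const placesPerPoint = (totalPlayers - cutoffRank) / cutoffScore;
  const estimate = cutoffRank + Math.round((cutoffScore - playerScore) * placesPerPoint);
  const salt = Math.floor(Math.random() * 89) - 44;   // so the number looks authentic
  return Math.max(cutoffRank + 1, estimate + salt);
}

console.log(`player rank: ${estimateRank(72).toLocaleString()}`);   // e.g. "player rank: 22,838"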
Alternative perspective: NoSQL and document stores make this type of task overly complex. If you used Postgres, this would be pretty simple using a count function. Firebase is tempting because it's easy to get going with, but use cases like this are where relational databases shine. Supabase is worth a look (https://supabase.io/): similar to Firebase, so you can get going quickly with a backend, but it's open source and built on Postgres, so you get a relational database.
A solution that hasn't been mentioned by Dan is the use of security rules combined with Google Cloud Functions.
Create the highscore's map. Example:
highScores (top20)
Then:
Give the users write/read access to highScores.
Give the document/map highScores the smallest score in a property.
Let users write to highScores only if their score > the smallest score.
Create a write trigger in Google Cloud Functions that will activate when a new highScore is written. In that function, delete the smallest score.
This looks to me like the easiest option. It is realtime as well.
You could do something with Cloud Storage: manually keep a file that has all the users' scores (in order), and then you just read that file and find the position of the score in it.
Then, to write to the file, you could set up a CRON job that periodically takes all documents with an isWrittenToFile flag of false and adds them all to the file (marking them as true). That way you won't eat up your writes. And reading a file every time the user wants to view their position is probably not that intensive. It could be done from a Cloud Function.
2022 Updated and Working Answer
To solve the problem of having a leaderboard with users and points, and of knowing your position in this leaderboard in a less problematic way, I have this solution:
1) You should create your Firestore document like this
In my case, I have a document perMission that has one field per user, with the userId as the property name and the respective leaderboard points as the value.
It will be easier to update the values inside my JavaScript code.
For example, whenever a user completes a mission (to update their points):
import { doc, setDoc, increment } from "firebase/firestore";
const docRef = doc(db, 'leaderboards', 'perMission');
setDoc(docRef, { [userId]: increment(1) }, { merge: true });
The increment value can be whatever you want. In my case I run this code every time the user completes a mission, increasing the value.
2) To get the position inside the leaderboards
So here, on the client side, to get your position you have to sort the values and then loop through them to find your position in the leaderboard.
Here you could also use the object to get all the users and their respective points, in order. But I am not doing that here; I am only interested in my position.
The code is commented explaining each block.
// Values coming from the database.
const leaderboards = {
  userId1: 1,
  userId2: 20,
  userId3: 30,
  userId4: 12,
  userId5: 40,
  userId6: 2
};

// Values coming from your user.
const myUser = "userId4";
const myPoints = leaderboards[myUser];

// Sort the values in descending order.
const sortedLeaderboardsPoints = Object.values(leaderboards).sort(
  (a, b) => b - a
);

// To get your specific position
const myPosition = sortedLeaderboardsPoints.reduce(
  (previous, current, index) => {
    if (myPoints === current) {
      return index + 1 + previous;
    }
    return previous;
  },
  0
);

// Will print: [40, 30, 20, 12, 2, 1]
console.log(sortedLeaderboardsPoints);

// Will print: 4
console.log(myPosition);
You can now use your position; even if the array is super big, the logic runs on the client side, so be careful with that. You can also improve the client-side code, e.g. to reduce or limit the array.
But be aware that you should do the rest of this logic on the client side, not on the Firebase side.
This answer is mainly to show you how to store and use the database in a "good way".

Can reducing index length in Javascript associative array save memory

I am trying to build a large array (22,000 elements) of associative array elements in JavaScript. Do I need to worry about the length of the keys with regard to memory usage?
In other words, which of the following options saves memory, or are they the same in memory consumption?
Option 1:
var student = new Array();
for (i = 0; i < 22000; i++)
  student[i] = {
    "studentName": token[0],
    "studentMarks": token[1],
    "studentDOB": token[2]
  };
Option 2:
var student = new Array();
for (i = 0; i < 22000; i++)
  student[i] = {
    "n": token[0],
    "m": token[1],
    "d": token[2]
  };
I tried to test this in Google Chrome DevTools, but the numbers are too inconsistent to make a decision. My best guess is that because the keys repeat across array elements, the browser can optimize memory usage by not repeating them for each student[i], but that is just a guess.
Edit:
To clarify, the problem is the following: a large array containing many small associative arrays. Does it matter whether I use long or short keys when it comes to memory requirements?
Edit 2:
The 3N array approach that was suggested in the comments, and that @Joseph Myers is referring to, is creating one array, var student = [], with a size of 3*22000, and then using student[0] for the name, student[1] for the marks, student[2] for the DOB, and so on.
Thanks.
The difference is insignificant, so the answer is no. This sort of thing would barely even fall under micro-optimization. You should always opt for the most readable solution when facing such dilemmas; the cost of maintaining code from your second option outweighs any (if any) performance gain you could get from it.
What you should do, though, is use the literal for creating an array:
[] instead of new Array(). (Just a side note.)
A better approach to solve your problem would probably be to find a way to load the data in parts, implementing some kind of pagination (I assume you're not doing heavy computations on the client).
The main analysis of associative arrays' computational cost has to do with performance degradation as the number of elements stored increases, but there are some results available about performance loss as the key length increases.
In Algorithms in C by Sedgewick, it is noted that for some key-based storage systems the search cost does not grow with the key length, and for others it does. All of the comparison-based search methods depend on key length--if two keys differ only in their rightmost bit, then comparing them requires time proportional to their length. Hash-based methods always require time proportional to the key length (in order to compute the hash function).
Of course, the key takes up storage space within the original code and/or at least temporarily in the execution of the script.
The kind of storage used for JavaScript may vary between browsers, but in a resource-constrained environment, using smaller keys would have an advantage, though likely still too small an advantage to notice; still, there are surely some cases where the advantage would be worthwhile.
P.S. My library just got in two new books that I ordered in December about the latest computational algorithms, and I can check them tomorrow to see if there are any new results about key length impacting the performance of associative arrays / JS objects.
Update: Keys like studentName take 2% longer on a Nexus 7 and 4% longer on an iPhone 5. This is negligible to me. I averaged 500 runs of creating a 30,000-element array with each element containing an object { a: i, b: 6, c: 'seven' } vs. 500 runs using an object { studentName: i, studentMarks: 6, studentDOB: 'seven' }. On a desktop computer, the program still runs so fast that the processor's frequency / number of interrupts, etc., produce varying results and the entire program finishes almost instantly. Once every few runs, the big key size actually goes faster (because other variations in the testing environment affect the result more than 2-4%, since the JavaScript timer is based on clock time rather than CPU time.) You can try it yourself here: http://dropoff.us/private/1372219707-1-test-small-objects-key-size.html
Your 3N array approach (using array[0], array[1], and array[2] for the contents of the first object; and array[3], array[4], and array[5] for the second object, etc.) works much faster than any object method. It's five times faster than the small object method and five times faster plus 2-4% than the big object method on a desktop, and it is 11 times faster on a Nexus 7.
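A minimal sketch of that 3N layout, in the same style as the question's code (the accessor helpers are made up for illustration):

var students = new Array(3 * 22000);   // one flat array, three consecutive slots per student

function setStudent(i, name, marks, dob) {
  students[3 * i]     = name;
  students[3 * i + 1] = marks;
  students[3 * i + 2] = dob;
}

function getStudentName(i)  { return students[3 * i]; }
function getStudentMarks(i) { return students[3 * i + 1]; }
function getStudentDOB(i)   { return students[3 * i + 2]; }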

Find in Multidimensional Array

I have a multidimensional array such as:
[
  {"EventDate":"20110421221932","LONGITUDE":"-75.61481666666670","LATITUDE":"38.35916666666670","BothConnectionsDown":false},
  {"EventDate":"20110421222228","LONGITUDE":"-75.61456666666670","LATITUDE":"38.35946666666670","BothConnectionsDown":false}
]
Is there any plugin available to search for a combination of LONGITUDE and LATITUDE?
Thanks in advance
for (var i in VehCommLost) {
  var item = VehCommLost[i];
  if (item.LONGITUDE == 1 && item.LATITUDE == 2) {
    //gotcha
    break;
  }
}
This is a JSON string. Which programming language are you using along with JS?
By the way, try parseJSON.
Are the latitudes and longitudes completely random? or are they points along a path, so there is some notion of sequence?
If there is some ordering of the points in the array, perhaps a search algorithm could be faster.
For example:
if the inner array has up to 10,000 elements, test item 5000;
if that value is too high, focus on 1-4999;
if too low, focus on 5001-10000; otherwise 5000 is the right answer;
repeat until the range shrinks to the vicinity, making a straight loop through the remaining values quick enough.
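A sketch of that bisection, assuming the array is kept sorted on the field being searched (here LONGITUDE, numerically ascending):

function binarySearchByLongitude(points, target) {
  var lo = 0, hi = points.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    var value = parseFloat(points[mid].LONGITUDE);
    if (value === target) return mid;     // or switch to a short linear scan around mid
    if (value < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;                              // not found
}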
After sleeping on it, it seems to me most likely that the solution to your problem lies in recasting the problem.
Since it is a requirement of the system that you are able to find a point quickly, I'd suggest that a large array is the wrong data structure to support that requirement. It may be necessary to have an array, but perhaps there could also be another mechanism to make the search rapid.
As I understand it you are trying to locate points near a known lat-long.
What if, in addition to the array, you had a hash keyed on lat-long, with the value being an array of indexes into the huge array?
Latitude and Longitude can be expressed at different degrees of precision, such as 141.438754 or 141.4
The precision relates to the size of the grid square.
With some knowledge of the business domain, it should be possible to select a reasonably-sized grid such that several points fit inside but not too many to search.
So the hash is keyed on lat-long coords such as '141.4#23.2', with the value being a smaller array of indexes [3456,3478,4579,6344]; using these indexes we can easily access the items in the large array.
Suppose we need to find 141.438754#23.2i7643 : we can reduce the precision to '141.4#23.2' and see if there is an array for that grid square.
If not, widen the search to the (3*3-1 =) 8 adjacent grid squares (plus or minus one unit).
If still not, widen to the (5*5-9 =) 16 grid squares one unit further out. And so on...
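A rough sketch of that grid hash in plain JavaScript (the 0.1-degree grid size and the helper names are illustrative):

function gridKey(lat, lon) {
  return lat.toFixed(1) + '#' + lon.toFixed(1);   // one decimal place = the grid square
}

function buildGridIndex(points) {
  var index = {};
  for (var i = 0; i < points.length; i++) {
    var key = gridKey(parseFloat(points[i].LATITUDE), parseFloat(points[i].LONGITUDE));
    (index[key] = index[key] || []).push(i);      // store indexes into the large array
  }
  return index;
}

// Lookup: check the point's own grid square plus the 8 neighbours; widen further if needed.
function candidatesNear(index, lat, lon) {
  var results = [];
  for (var dLat = -1; dLat <= 1; dLat++) {
    for (var dLon = -1; dLon <= 1; dLon++) {
      var key = gridKey(lat + dLat * 0.1, lon + dLon * 0.1);
      if (index[key]) results = results.concat(index[key]);
    }
  }
  return results;
}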
Depending on how the original data is stored and processed, it may be possible to generate the hash server-side, which would be preferable. If you needed to generate the hash client-side, it might be worth doing if you reused it for many searches, but would be kind of pointless if you used the data only once.
Could you comment on the possibility of recasting the problem in a different way, perhaps along these lines?
