I'm using Meteor to return a list of Venues that are closest to the user's geolocation. I have the sort happening correctly from the server but the data is a little jumbled by the time it hits the client. It's my understanding that another sort needs to happen on the client once the data is received.
I have the following code in publish.js on the server:
Meteor.publish('nearestVenues', function(params){
  var limit = !!params ? params.limit : 50;
  if (!!params && !!params.coordinates){
    return Venues.find(
      { 'location.coordinates':
        { $near:
          { $geometry:
            { type: "Point",
              coordinates: params.coordinates
            },
            $maxDistance: 6000,
            spherical: true
          }
        }
      }, {limit: limit});
  } else {
    return Venues.find({}, {limit: limit});
  }
});
And the following in a template helper for my view which returns nothing:
Template.VenueList.helpers({
  venues: function(){
    return Venues.find(
      { 'location.coordinates':
        { $near:
          { $geometry:
            { type: "Point",
              coordinates: Session.get('currentUserCoords')
            },
            $maxDistance: 6000,
            spherical: true
          }
        }
      }, {limit: 10});
    // return Venues.find({}, {limit: 5, sort: {_id: -1, createdAt: -1}});
  }
});
EDIT: Removed extraneous params ? !!params : 50; code from beginning of publish statement.
Note: the commented out code at the bottom of the helper does in fact work so I know this is the correct place to do a client-side sort. So, how do I do a client side sort when the information is sorted by a Mongo geospatial method? There has to be a way to sort geolocation data from closest to farthest from a location— what am I missing here?
This might be a red herring but I notice that the third line in your first code snippet doesn't seem to do anything:
params ? !!params : 50;
What is that line supposed to do? Perhaps if that is fixed that will solve the problem?
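Separately, if the $near selector still returns nothing in the helper, a robust fallback is to skip the geospatial operator on the client altogether and sort the published venues by a distance you compute yourself. A minimal sketch of that idea (the haversineMeters helper is mine, not part of the original code, and it assumes location.coordinates is stored in GeoJSON [longitude, latitude] order):
// Rough great-circle distance in meters between two [lng, lat] pairs.
function haversineMeters(a, b) {
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var R = 6371000; // mean Earth radius in meters
  var dLat = toRad(b[1] - a[1]);
  var dLng = toRad(b[0] - a[0]);
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(a[1])) * Math.cos(toRad(b[1])) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

Template.VenueList.helpers({
  venues: function () {
    var here = Session.get('currentUserCoords');
    if (!here) return [];
    // Sort the already-published documents client-side, nearest first.
    return Venues.find().fetch().sort(function (v1, v2) {
      return haversineMeters(here, v1.location.coordinates) -
             haversineMeters(here, v2.location.coordinates);
    }).slice(0, 10);
  }
});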
Does anybody know how to filter MongoDB db.adminCommand output? If I run the command db.adminCommand({ "currentOp": true, "op" : "query", "planSummary": "COLLSCAN" }) I get a huge JSON output, but I'm only interested in a few fields (like secs_running, op, command, $db).
Many thanks!
You can add the filters straight to the command object like the following:
var commandObj = {
  "currentOp": 1,
  "waitingForLock": true,
  "$or": [
    {
      "op": {
        "$in": ["insert", "update", "remove"]
      }
    },
    {
      "command.findandmodify": {
        "$exists": true
      }
    }
  ]
};
db.adminCommand(commandObj);
You can see some filter examples on the MongoDB docs: https://docs.mongodb.com/manual/reference/method/db.currentOp/#examples
Just re-read your question, and I think you might have meant projecting back only the fields you care about. If that's the case, you can execute a map on top of the current results:
db.adminCommand(commandObj).inprog.map(x => x.opid);
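For instance, to keep just the fields mentioned in the question, the map can build a smaller object per operation (a sketch run in the mongo shell; field names such as secs_running and command.$db reflect typical currentOp output and may vary by server version):
db.adminCommand(commandObj).inprog.map(function (op) {
  return {
    secs_running: op.secs_running,
    op: op.op,
    command: op.command,
    db: op.command && op.command["$db"]
  };
});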
I have a problem with the update method, which returns this object when I hit my endpoint:
{ n: 1, nModified: 1, ok: 1 }
This is the code I tried. I also tried { new: true }, but that doesn't help; I want to get the updated data back.
router.put('/:username/experience/edit/:id', function(req, res) {
  const { title, company, location, from, to, workingNow, description } = req.body;
  User.update(
    { 'experience._id': req.params.id },
    { '$set': {
        'experience.$.title': title,
        'experience.$.company': company,
        'experience.$.location': location,
        'experience.$.from': from,
        'experience.$.to': to,
        'experience.$.workingNow': workingNow,
        'experience.$.description': description,
    }},
    function(err, model) {
      console.log(model);
      if (err) {
        return res.send(err);
      }
      return res.json(model);
    });
});
If you are on MongoDB 3.0 or newer, you need to use .findOneAndUpdate() and the projection option to specify the subset of fields to return, and you need to set returnNewDocument to true. You also need the $elemMatch projection operator here, because you cannot use a positional projection and return the new document at the same time.
As someone pointed out:
You should be using .findOneAndUpdate() because .findAndModify() is highlighted as deprecated in every official language driver. The other thing is that the syntax and options are pretty consistent across drivers for .findOneAndUpdate(). With .findAndModify(), most drivers don't use the same single object with "query/update/fields" keys. So it's a bit less confusing when someone applies to another language to be consistent. Standardized API changes for .findOneAndUpdate() actually correspond to server release 3.x rather than 3.2.x. The full distinction being that the shell methods actually lagged behind the other drivers ( for once ! ) in implementing the method. So most drivers actually had a major release bump corresponding with the 3.x release with such changes.
db.collection.findOneAndUpdate(
  {
    "_id": ObjectId("56d6a7292c06e85687f44541"),
    "rankings._id": ObjectId("46d6a7292c06e85687f55543")
  },
  { $inc: { "rankings.$.score": 1 } },
  {
    "projection": {
      "rankings": {
        "$elemMatch": { "_id": ObjectId("46d6a7292c06e85687f55543") }
      }
    },
    "returnNewDocument": true
  }
)
Alternatively, in the mongo shell (where .findOneAndUpdate() arrived later than in the drivers), you can use findAndModify with the fields option; you also need to set new to true in order to return the new value.
db.collection.findAndModify({
  query: {
    "_id": ObjectId("56d6a7292c06e85687f44541"),
    "rankings._id": ObjectId("46d6a7292c06e85687f55543")
  },
  update: { $inc: { "rankings.$.score": 1 } },
  new: true,
  fields: {
    "rankings": {
      "$elemMatch": { "_id": ObjectId("46d6a7292c06e85687f55543") }
    }
  }
})
Both queries yield:
{
  "_id": ObjectId("56d6a7292c06e85687f44541"),
  "rankings": [
    {
      "_id": ObjectId("46d6a7292c06e85687f55543"),
      "name": "Ranking 2",
      "score": 11
    }
  ]
}
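Applied to the Mongoose code in the question, that might look roughly like the sketch below. This is an assumption on my part: it reuses the question's User model and field names, relies on Mongoose's new: true option to return the whole updated document, and is not tested against your schema:
router.put('/:username/experience/edit/:id', function(req, res) {
  const { title, company, location, from, to, workingNow, description } = req.body;
  User.findOneAndUpdate(
    { 'experience._id': req.params.id },
    { $set: {
        'experience.$.title': title,
        'experience.$.company': company,
        'experience.$.location': location,
        'experience.$.from': from,
        'experience.$.to': to,
        'experience.$.workingNow': workingNow,
        'experience.$.description': description
    }},
    { new: true },                    // return the document as it is after the update
    function(err, updatedUser) {
      if (err) {
        return res.send(err);
      }
      return res.json(updatedUser);   // the updated document, not { n, nModified, ok }
    }
  );
});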
I have been troubleshooting why a MongoDB view I created is so slow. The view targets the transactions collection, and returns records that have an openBalance that is greater than 0. I also run some additional aggregation stages to shape the data the way I want it.
In order to speed up the execution of the view it makes use of an index on the targeted collection by matching on the indexed field in stage one of the view's aggregation pipeline, like so:
// View Stage 1
{ "transactions.details.openBalance" : { "$exists" : true, "$gt" : 0.0 } }
After much investigation I have determined that the aggregation from the view returns data very quickly. What's slow is the count that's run as part of the endpoint:
let count = await db.collection('view_transactions_report').find().count();
So what I'm trying to figure out now is why the count is so much slower on the view than on the underlying collection, and what I can do to speed it up. Or, perhaps there's an alternative way to generate the count?
The underlying collection has something like 800,000 records, yet the count on it returns quickly. The count on the view, which only returns a filtered set of 10,000 of those 800,000 records, returns much more slowly. In terms of specifics, I'm talking about 3/4 of a second for the count on the collection versus six seconds for the count on the Mongo view.
So, first off, why is the count so much slower on the view (with its much smaller data set) than on the underlying collection, and secondly, what can I do to address the speed of the count for the view?
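(For reference, the same count can also be expressed by appending a $count stage to an aggregation on the view, mirroring the find().count() call above. A sketch; since it still executes the view's pipeline under the hood, it is not guaranteed to be any faster:)
let countDoc = await db.collection('view_transactions_report')
  .aggregate([{ $count: 'total' }])
  .next();
let count = countDoc ? countDoc.total : 0;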
I have a couple other aggregation queries I'm running, to determine totalCustomers and totalOpenBalance, that also seem to run slow (see code below).
The relevant part of my endpoint function code looks like this:
// previous code
let count = await db.collection('view_transaction_report').find(search).count();
let totalCustomers = await db.collection('view_transaction_report').find(search).count({
  $sum: "customer._id"
});
let result = {};
if (totalCustomers > 0) {
  result = await db.collection('view_transaction_report').aggregate([
    {
      $match: search,
    },
    {
      $group: {
        _id: null,
        totalOpenBalance: {
          $sum: '$lastTransaction.details.openBalance'
        }
      }
    }
  ]).next();
}
db.collection('view_transaction_report').find(search).skip(skip).limit(pagesize).forEach(function (doc) {
  docs.push(doc);
}, function (err) {
  if (err) {
    if (!ioOnly) {
      return next(err);
    } else {
      return res(err);
    }
  }
  if (ioOnly) {
    res({
      sessionId: sessID,
      count: count,
      data: docs,
      totalCustomers: totalCustomers,
      totalOpenBalance: result.totalOpenBalance
    });
  } else {
    res.send({
      count: count,
      data: docs,
      totalCustomers: totalCustomers,
      totalOpenBalance: result.totalOpenBalance
    });
  }
});
In terms of executionStats, this is what shows for the queryPlanner section of the generated view:
"queryPlanner" : {
"plannerVersion" : 1.0,
"namespace" : "vio.transactions",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"transactions.details.openBalance" : {
"$gt" : 0.0
}
},
{
"transactions.destails.openBalance" : {
"$exists" : true
}
}
]
},
"winningPlan" : {
"stage" : "CACHED_PLAN",
"inputStage" : {
"stage" : "FETCH",
"filter" : {
"transactions.details.openBalance" : {
"$exists" : true
}
},
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"transactions.details.openBalance" : 1.0
},
"indexName" : "openBalance",
"isMultiKey" : true,
"multiKeyPaths" : {
"transactions.details.openBalance" : [
"transactions",
"transactions.details"
]
},
"isUnique" : false,
"isSparse" : true,
"isPartial" : false,
"indexVersion" : 2.0,
"direction" : "forward",
"indexBounds" : {
"transactions.details.openBalance" : [
"(0.0, inf.0]"
]
}
}
}
},
"rejectedPlans" : [
]
}
In the comments, @Wan Bachtiar mentioned that "openBalance" looks to be a multikey index. To clarify: yes, in the targeted collection the "openBalance" field is embedded within an array, even though in the view the data is shaped so that "openBalance" is an embedded field that is not within an array.
That multikey index on the targeted collection is where the issue lies. Instead of a one-to-one document lookup, Mongo has to examine every array element carrying an "openBalance" value, which dramatically increases the scan time, since some documents have many such array elements.
After some further checking, I realized I can address this by changing how the ETL populates "openBalance" in our Mongo collection. With that change I can make "openBalance" a standard index rather than a multikey index, which in turn lets Mongo scan a much smaller set of keys to return my counts.
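To make the idea concrete, here is a sketch of what that might look like after the ETL change. The top-level scalar openBalance field name is an assumption about the reshaped documents, not something from the original collection:
// After the ETL writes a single scalar openBalance per document,
// a standard (non-multikey) index can back the view's first stage:
db.transactions.createIndex({ openBalance: 1 }, { sparse: true });

// View stage 1 would then match on the scalar field:
{ "openBalance": { "$exists": true, "$gt": 0.0 } }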
My firebase data looks like this:
{
  "lambeosaurus": {
    "vacationDates": "2016-12-20 - 2016-12-25",
    "length": 12.5,
    "weight": 5000
  },
  "stegosaurus": {
    "vacationDates": "2016-12-10 - 2016-12-20",
    "length": 9,
    "weight": 2500
  }
}
How do I query for all dinosaurs that will be away on vacation on 2016-12-20 (i.e. it should return both lambeosaurus and stegosaurus)? Or should I actually store the data differently? If so, how should I store the data for optimum performance? Thanks.
Combining the two dates doesn't do much in terms of making the database easier to query.
If you are searching for dinosaurs on holiday on a particular date, they could have gone on holiday anytime before that date. (Unless there is a policy that mandates a maximum number of days in a holiday.) So the end date is probably what you want to query:
{
  "lambeosaurus": {
    "vacationStart": "2016-12-20",
    "vacationEnd": "2016-12-25",
    "length": 12.5,
    "weight": 5000
  },
  "stegosaurus": {
    "vacationStart": "2016-12-10",
    "vacationEnd": "2016-12-20",
    "length": 9,
    "weight": 2500
  }
}
Any dinosaurs with a vacationEnd on or after 2016-12-20 will be on holiday on that date if vacationStart is on or before 2016-12-20:
function getHolidayingDinosaurs(day) {
  return firebase.database()
    .ref("dinousaurs")
    .orderByChild("vacationEnd")
    .startAt(day)
    .once("value")
    .then((snapshot) => {
      let holidaying = [];
      snapshot.forEach((child) => {
        let val = child.val();
        if (val.vacationStart <= day) {
          holidaying.push(val);
        }
      });
      return holidaying;
    });
}
getHolidayingDinosaurs("2016-12-20").then((holidaying) => console.log(holidaying));
There isn't a straightforward alternative to performing further filtering on the client, as you can only query Firebase using a single property, and the combined dates aren't particularly useful since the start and end could be any dates on or before/after the query date.
Anyway, querying using vacationEnd is likely to perform most of the filtering on the server, unless you have a lot of dinosaurs that plan their holidays well in advance.
If the above approach results in too much information being retrieved and filtered on the client, you could put in some extra effort and could maintain your own mapping of holidaying dinosaurs by storing some additional data structured like this:
"holidays": {
...
"2016-12-19": {
"stegosaurus": true
},
"2016-12-20": {
"lambeosaurus": true,
"stegosaurus": true
},
"2016-12-21": {
"lambeosaurus": true
},
...
}
Firebase's multi-location updates can be used to make maintaining the mapping a little easier (there are more multi-location examples in this answer):
firebase.database().ref().update({
  "dinosaurs/lambeosaurus": {
    "vacationStart": "2016-12-20",
    "vacationEnd": "2016-12-25",
    "length": 12.5,
    "weight": 5000
  },
  "holidays/2016-12-20/lambeosaurus": true,
  "holidays/2016-12-21/lambeosaurus": true,
  "holidays/2016-12-22/lambeosaurus": true,
  "holidays/2016-12-23/lambeosaurus": true,
  "holidays/2016-12-24/lambeosaurus": true,
  "holidays/2016-12-25/lambeosaurus": true
});
A query on holidays/2016-12-20 would then return an object with keys for each holidaying dinosaur.
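For example, reading that mapping back might look like this (a sketch; holidays is the node shown above):
firebase.database()
  .ref("holidays/2016-12-20")
  .once("value")
  .then((snapshot) => {
    // The keys of the returned object are the holidaying dinosaurs for that day.
    let dinosaurNames = Object.keys(snapshot.val() || {});
    console.log(dinosaurNames); // e.g. ["lambeosaurus", "stegosaurus"]
  });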
I recommend storing the data in a different way: use one attribute for "vacationStart" and one for "vacationEnd". Also, put all the dinosaurs inside a "dinousaurs" node rather than in the root node.
So in order to query for all dinosaurs that will be away on vacation on 2016-12-20, you would use the query:
var query = firebase.database().ref('dinousaurs').orderByChild('vacationStart').equalTo('2016-12-20');
I have multiple subscriptions on the search results page of a Meteor app. So in the middle is a template for the search results of items, on the left is a template for trending items, and on the right is a template for related items.
On the server I publish related items (on the right) by querying MongoDB with a text search, which is only possible on the server since minimongo doesn't have that functionality.
But I am also subscribing to trending items (on the left), which grabs a different set of those same items. In isolation I receive the correct results; that is, when I comment out the code for the trending items I get the right results for the related items, and vice versa. But when both are left in, they appear to be drawing from the same collection on the client and the results are distorted.
Is there any way to handle multiple subscriptions on the same page?
trendingItems.js
Meteor.subscribe('trendingItems');
Template.trendingItems.helpers({
  trendingItems: function() {
    results = Items.find({}, {
      fields: { follows: 1, title: 1, itemId: 1 },
      sort: { follows: -1 },
      limit: 5
    }).fetch();
    return results;
  }
});
relatedItems.js
Template.relatedItems.helpers({
  relatedItems: function() {
    return Items.find();
  }
});
publications.js
Meteor.publish('relatedItems', function(searchString) {
  return Items.find(
    { $text: { $search: searchString } }
  );
});

Meteor.publish('trendingItems', function(options) {
  results = Items.find({}, {
    fields: { follows: 1, title: 1, itemId: 1 },
    sort: { follows: -1 },
    limit: 5
  }).fetch();
  return results;
});
A general solution to the problem of handling multiple subscriptions, rather than a specific solution that solves just this problem, is desirable.
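One general pattern worth noting: all publications of the same server collection are merged into a single client-side collection, so each template helper has to re-apply its own selector, sort, and limit (as the trendingItems helper already does). Where the client cannot re-run the query at all, as with $text search in minimongo, the publication can push its results into a separately named client-only collection using the low-level publish API. A sketch of that approach (the RelatedItems/'relatedItems' collection name is my own, not from the original code):
// server: publish the text-search results under a virtual collection name
Meteor.publish('relatedItems', function(searchString) {
  var self = this;
  var handle = Items.find({ $text: { $search: searchString } }).observeChanges({
    added: function(id, fields) { self.added('relatedItems', id, fields); },
    changed: function(id, fields) { self.changed('relatedItems', id, fields); },
    removed: function(id) { self.removed('relatedItems', id); }
  });
  self.ready();
  self.onStop(function() { handle.stop(); });
});

// client: a client-only collection that receives those documents,
// so they never mix with the documents published into Items
RelatedItems = new Mongo.Collection('relatedItems');

Template.relatedItems.helpers({
  relatedItems: function() {
    return RelatedItems.find();
  }
});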