I have a data collection in MongoDB like below:
{ "_id" : ObjectId("1"), "name" : "ABC", "group" : [ObjectId("11"), ObjectId("12"), ObjectId("13")]}
{ "_id" : ObjectId("2"), "name" : "DEF", "group" : [ObjectId("21"), ObjectId("22"), ObjectId("23")]}
I want to delete ObjectId("11") from the group field of the document with _id ObjectId("1").
I tried the code below:
aId = "1"
bId = "11"
db.collection.updateOne({ _id: ObjectId(aId) }, { $pull: { group: { _id: ObjectId(bId) } } })
But it failed.
I have also tried:
aId = "1"
bId = "11"
db.collection.updateOne({ _id: aId }, { $pull: { group: { _id: bId } } })
But it still failed to delete it.
Anything wrong with my code?
Use $unset like this:
db.products.update(
{ sku: "unknown" },
{ $unset: { quantity: "", instock: "" } })
This code will remove the fields quantity and instock from the first document in the products collection where the field sku has a value of unknown.
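As a side note on the original question: the group field holds plain ObjectIds rather than embedded documents, so matching on _id inside $pull will never match anything. A minimal sketch of pulling the value directly (assuming aId and bId hold real 24-character hex ObjectId strings) would be:
db.collection.updateOne(
  { _id: ObjectId(aId) },
  { $pull: { group: ObjectId(bId) } } // pull the value itself, not { _id: ... }
)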
I have a set of documents (messages) in a MongoDB collection, as below. I want to preserve only the latest 500 records for each individual user pair. Users are identified by sentBy and sentTo.
/* 1 */
{
"_id" : ObjectId("5f1c1b00c62e9b9aafbe1d6c"),
"sentAt" : ISODate("2020-07-25T11:44:00.004Z"),
"readAt" : ISODate("1970-01-01T00:00:00.000Z"),
"msgBody" : "dummy text",
"msgType" : "text",
"sentBy" : ObjectId("54d6732319f899c704b21ef7"),
"sentTo" : ObjectId("54d6732319f899c704b21ef5"),
}
/* 2 */
{
"_id" : ObjectId("5f1c1b3cc62e9b9aafbe1d6d"),
"sentAt" : ISODate("2020-07-25T11:45:00.003Z"),
"readAt" : ISODate("1970-01-01T00:00:00.000Z"),
"msgBody" : "dummy text",
"msgType" : "text",
"sentBy" : ObjectId("54d6732319f899c704b21ef9"),
"sentTo" : ObjectId("54d6732319f899c704b21ef8"),
}
/* 3 */
{
"_id" : ObjectId("5f1c1b78c62e9b9aafbe1d6e"),
"sentAt" : ISODate("2020-07-25T11:46:00.003Z"),
"readAt" : ISODate("1970-01-01T00:00:00.000Z"),
"msgBody" : "dummy text",
"msgType" : "text",
"sentBy" : ObjectId("54d6732319f899c704b21ef6"),
"sentTo" : ObjectId("54d6732319f899c704b21ef8"),
}
/* 4 */
{
"_id" : ObjectId("5f1c1c2e1449dd9bbef28575"),
"sentAt" : ISODate("2020-07-25T11:49:02.012Z"),
"readAt" : ISODate("1970-01-01T00:00:00.000Z"),
"msgBody" : "dummy text",
"msgType" : "text",
"sentBy" : ObjectId("54cfcf93e2b8994c25077924"),
"sentTo" : ObjectId("54d6732319f899c704b21ef5"),
}
/* and so on... assume it to be 10k+ */
The algorithm that came to my mind is:
Group first based on the OR operator (to match the user pair)
Sort the records in descending order by time
Limit to 500
Get the array of _ids that should be preserved
Pass the IDs to a new mongo .deleteMany() query with a $nin condition
Please help; I have struggled a lot with this and have not had any success. Many thanks :)
Depending on scale I would do one of the following two things:
Assuming the scale is somewhat low and you can actually group the entire collection in a reasonable time, I would do something similar to what you suggested:
db.collection.aggregate([
{
$sort: {
sentAt: 1
}
},
{
$group: {
_id: {
$cond: [
{$gt: ["$sentBy", "$sentTo"]},
["$sendBy", "$sentTo"],
["$sentTo", "$sendBy"],
]
},
roots: {$push: "$$ROOT"}
}
},
{
$project: {
roots: {$slice: ["$roots", -500]}
}
},
{
$unwind: "$roots"
},
{
$replaceRoot: {
newRoot: "$roots"
}
},
{
$out: "this_collection"
}
])
The sort stage has to come first, as you can't sort an inner array after the group. The $cond in the group stage simulates the $or operator logic, which can't be used there. Finally, instead of retrieving the result and then using deleteMany with $nin, you can just use $out to rewrite the current collection.
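As a plain-JavaScript illustration of that pair-key trick in the $group stage above (a hypothetical helper, not part of the pipeline), both directions of a conversation collapse to the same key:
// Hypothetical helper mirroring the $cond above: the larger id always comes first,
// so messages X -> Y and Y -> X end up in the same group.
function pairKey(sentBy, sentTo) {
  return sentBy > sentTo ? [sentBy, sentTo] : [sentTo, sentBy];
}

pairKey("A", "B"); // [ "B", "A" ]
pairKey("B", "A"); // [ "B", "A" ] -> same key, same group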
If the scale is way too big to support this, then you should just iterate user by user and do what you suggested at first. Here is a quick example:
let userIds = await db.collection.distinct("sentBy");
let done = []; // user ids that have already been processed
for (let i = 0; i < userIds.length; i++) {
let matches = await db.collection.aggregate([
{
$match: {
$and: [
{
$or: [
{
"sentTo": userIds[i]
},
{
"sendBy": userIds[i]
}
]
},
{ // this is not necessary, it's just to avoid running on both ZxY and YxZ
$or: [
{
sentTo: {$nin: done}
},
{
sentBy: {$nin: done}
}
]
}
]
}
},
{
$sort: {
sentAt: 1
}
},
{
$group: {
_id: {
$cond: [
{$eq: ["$sentBy", userIds[i]]},
"$sendTo",
"$sentBy"
]
},
roots: {$push: "$$ROOT"}
}
},
{
$project: {
roots: {$slice: ["$roots", -500]}
}
},
{
$unwind: "$roots"
},
{
$group: {
_id: null,
keepers: {$push: "$roots._id"}
}
}
]).toArray();
if (matches.length) {
await db.collection.deleteMany(
{
$and: [
{
$or: [
{
"sentTo": userIds[i]
},
{
"sendBy": userIds[i]
}
]
},
{ // this is only necessary if you used it above.
$or: [
{
sentTo: {$nin: done}
},
{
sentBy: {$nin: done}
}
]
},
{
_id: {$nin: matches[0].keepers}
}
]
}
)
}
done.push(userIds[i])
}
I have documents of the following structure:
{
name: "John Doe",
City : "OK",
Prepaid: "Y"
},
{
name: "Jane Doe",
City : "CA",
Prepaid: "N"
},
{
name: "Jule Doe",
City : "OK",
Prepaid: "N"
},
{
name: "Jake Doe",
City : "OK",
Prepaid: "Y"
}
I would like to group this first by City and then by Prepaid, and get individual counts of each prepaid type. Something that looks similar to this:
{
City : OK
Count : {
"filter": prepaid,
"count": {
Y : 2
N: 1
}
}
}
{
City : CA
Count : {
"filter": prepaid,
"count": {
Y : 0
N: 1
}
}
}
I tried doing an aggregation based on multiple fields, but it gives me the total count of the documents and not the breakdown.
Here's what I tried for my aggregation pipeline:
db.collection.aggregate([
{ $match: matchquery },
{ $group: { _id: { city: '$city', prepaid: '$prepaid' }, count: { $sum: 1 } } }
])
You can run $group twice to count by prepaid first and then you can apply $arrayToObject to get Y/N as object keys:
db.collection.aggregate([
{
$group: {
_id: { city: "$City", prepaid: "$Prepaid" },
total: { $sum: 1 }
}
},
{
$group: {
_id: "$_id.city",
Count: {
$push: {
k: "$_id.prepaid", v: "$total"
}
}
}
},
{
$project: {
_id: 0,
city: "$_id",
Count: { $mergeObjects: [ { filter: "prepaid" }, { count: { $arrayToObject: "$Count" } } ] }
}
}
])
Mongo Playground
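For the four sample documents above, this pipeline should produce results along these lines (a sketch of the expected shape; note that CA gets no Y key at all rather than Y: 0, because $arrayToObject only emits keys that actually occur):
{ "city" : "OK", "Count" : { "filter" : "prepaid", "count" : { "Y" : 2, "N" : 1 } } }
{ "city" : "CA", "Count" : { "filter" : "prepaid", "count" : { "N" : 1 } } }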
My MongoDB data is like this; I want to filter the memoryLine.
{
"_id" : ObjectId("5e36950f65fae21293937594"),
"userId" : "5e33ee0b4a3895a6d246f3ee",
"notes" : [
{
"noteId" : ObjectId("5e36953665fae212939375a0"),
"time" : ISODate("2020-02-02T17:24:06.460Z"),
"memoryLine" : [
{
"_id" : ObjectId("5e36953665fae212939375ab"),
"memoryTime" : ISODate("2020-02-03T17:54:06.460Z")
},
{
"_id" : ObjectId("5e36953665fae212939375aa"),
"memoryTime" : ISODate("2020-02-03T05:24:06.460Z")
}
]
}
]}
I want to get the items whose memoryTime is greater than now, expected like this:
"userId" : "5e33ee0b4a3895a6d246f3ee",
"notes" : [
{
"noteId" : ObjectId("5e36953665fae212939375a0"),
"time" : ISODate("2020-02-02T17:24:06.460Z"),
"memoryLine" : [
{
"_id" : ObjectId("5e36953665fae212939375ab"),
"memoryTime" : ISODate("2020-02-03T17:54:06.460Z")
},
{
"_id" : ObjectId("5e36953665fae212939375aa"),
"memoryTime" : ISODate("2020-02-03T05:24:06.460Z")
}
]
}]
So I use the code below. I use a $filter on memoryLine to get the right items.
aggregate([{
$match: {
"$and": [
{ userId: "5e33ee0b4a3895a6d246f3ee"},
]
}
}, {
$project: {
userId: 1,
notes: {
noteId: 1,
time: 1,
memoryLine: {
$filter: {
input: "$memoryLine",
as: "mLine",
cond: { $gt: ["$$mLine.memoryTime", new Date(new Date().getTime() + 8 * 1000 * 3600)] }
}
}
}
}
}]).then(doc => {
res.json({
code: 200,
message: 'success',
result: doc
})
});
But I got this; memoryLine is null. Why? I tried changing $gt to $lt, but also got null.
"userId" : "5e33ee0b4a3895a6d246f3ee",
"notes" : [
{
"noteId" : ObjectId("5e36953665fae212939375a0"),
"time" : ISODate("2020-02-02T17:24:06.460Z"),
"memoryLine" : null <<<------------- here is not right
}]
You can use $addFields to replace the existing field, $map for the outer array and $filter for the inner one:
db.collection.aggregate([
{
$addFields: {
notes: {
$map: {
input: "$notes",
in: {
$mergeObjects: [
"$$this",
{
memoryLine: {
$filter: {
input: "$$this.memoryLine",
as: "ml",
cond: {
$gt: [ "$$ml.memoryTime", new Date() ]
}
}
}
}
]
}
}
}
}
}
])
$mergeObjects is used to avoid having to repeat the other fields of each source note object alongside the new memoryLine.
Mongo Playground
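If you need the +8 hour offset from the original question rather than a plain new Date(), you can compute the cutoff once up front and use it in the $filter condition (a sketch; cutoff is just an illustrative name):
// Hypothetical cutoff mirroring the question's +8h offset.
const cutoff = new Date(Date.now() + 8 * 3600 * 1000);

// ...then inside the $filter above:
// cond: { $gt: [ "$$ml.memoryTime", cutoff ] }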
This is what my db document looks like:
{
"_id" : "aN2jGuR2rSzDx87LB",
"content" : {
"en" : [
{
"content" : "Item 1",
"timestamp" : 1518811796
}
]
}
}
Now I need to add another field to the first object of the content.en array.
The document itself gets selected by an ID.
The result should be:
{
"_id" : "aN2jGuR2rSzDx87LB",
"content" : {
"en" : [
{
"content" : "Item 1",
"timestamp" : 1518811796,
"user" : {
"id" : 'userId'
}
}
]
}
}
I tried to do it like this, but nothing is happening. I don't even get an error message.
Content.update(
{ _id: id },
{
$addToSet: {
'content.en.0.user': {
id: 'userId',
}
}
}
)
Also, I would like to use a variable for the language. How do I do that? Something like 'content.' + language + '.0.user'...
$addToSet is useful when you want to add something to an array. In your case you want to modify the first element of your array (at index 0), so you should simply use $set (which is a field update operator):
Content.update(
{ _id: "aN2jGuR2rSzDx87LB" },
{
$set: {
"content.en.0.user": {
id: "userId",
}
}
}
)
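For the second part of the question (using a variable for the language), a computed property name can build the path dynamically. A minimal sketch, assuming an environment that supports computed object keys:
const language = "en"; // e.g. taken from user settings
const path = "content." + language + ".0.user";

Content.update(
  { _id: "aN2jGuR2rSzDx87LB" },
  { $set: { [path]: { id: "userId" } } }
)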
I have a collection with the following structure:
collection name : "positions"
Structure
{
"_id" : "vtQ3tFXg8THF3TNBc",
"candidatesActions" : {
"sourced" : [ ],
},
"appFormObject" : {
"name" : "✶ mandatory",
"questions" : [
{
"qusId" : "qs-585494",
"type" : "simple",
"qus" : "Which was your previous company"
},
{
"qusId" : "qs-867766",
"type" : "yesNo",
"qus" : "Are you willing to relocate?",
"disqualify" : "true"
}
]
}
}
I want to update "qus" field of the above collection whose _id is "vtQ3tFXg8THF3TNBc" and "qusId" is "qs-585494".
Try the following:
db.positions.update(
{_id: "vtQ3tFXg8THF3TNBc", "appFormObject.questions.qusId":"qs-585494"},
{$set:{"appFormObject.questions.$.qus": "this is updated value"}}
)
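An alternative sketch using arrayFilters (available since MongoDB 3.6), if you prefer an explicit filter over the positional $ operator:
db.positions.updateOne(
  { _id: "vtQ3tFXg8THF3TNBc" },
  { $set: { "appFormObject.questions.$[q].qus": "this is updated value" } },
  { arrayFilters: [ { "q.qusId": "qs-585494" } ] }
)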
Use the following query:
db.positions.findAndModify({
query: { _id: "vtQ3tFXg8THF3TNBc", "appFormObject.questions.qusId":"qs-585494"} ,
update: { $set: { 'appFormObject.questions.$.qus': 'Brilliant Green' } },
});
Thanks