How to use an array with prisma upsert in transaction - javascript

I'm trying to pass an array in my request. The array can have up to 6 entries, but how can I use all of them with upsert in Prisma?
The models used:
model Board {
  id      String   @id @unique @default(uuid())
  title   String
  User    User     @relation(fields: [userId], references: [id], onDelete: Cascade, onUpdate: Cascade)
  userId  Int
  columns Column[]
  tasks   Task[]
}
model Column {
  id      Int      @id @default(autoincrement())
  column  String
  Board   Board    @relation(fields: [boardId], references: [id], onDelete: Cascade, onUpdate: Cascade)
  boardId String
  tasks   Task[]
}
The function:
const columns = req.body.columns;

const updateBoardData = prisma.board.update({
  where: { id: req.body.id },
  data: { title: req.body.title },
});

const upsertColumns = prisma.column.upsert({
  where: { id: 65 }, // If the id exists, it's updated
  update: columns[0],
  create: columns[0],
});

await prisma
  .$transaction([updateBoardData, upsertColumns])
  .then((board) => {
    return res.status(201).json({ board });
  })
  .catch((err) => {
    console.log(err);
    return res.status(500).json(`${err}`);
  })
  .finally(() => {
    return prisma.$disconnect();
  });
The problem with this function is that if the id exists, the update is applied to the right column (the table I want to update or create in), but if the id doesn't exist, I get the following error:
PrismaClientKnownRequestError:
Invalid `prisma.board.update()` invocation in
C:\Users\Sébastien\Desktop\Kanban\Server\src\routes\boards\boards.controllers.js:170:42
167 const columns = req.body.columns;
168 const boardId = req.body.id;
169
→ 170 const updateBoardData = prisma.board.update(
Unique constraint failed on the constraint: `PRIMARY`
at RequestHandler.handleRequestError (C:\Users\Sébastien\Desktop\Kanban\Server\node_modules\@prisma\client\runtime\index.js:34310:13)
at RequestHandler.request (C:\Users\Sébastien\Desktop\Kanban\Server\node_modules\@prisma\client\runtime\index.js:34293:12)
at async PrismaClient._request (C:\Users\Sébastien\Desktop\Kanban\Server\node_modules\@prisma\client\runtime\index.js:35273:16)
at async Promise.all (index 0)
at async updateBoard (C:\Users\Sébastien\Desktop\Kanban\Server\src\routes\boards\boards.controllers.js:189:5) {
code: 'P2002',
clientVersion: '4.6.0',
meta: { target: 'PRIMARY' }
}
The request sent via Postman is the following:
{
  "id": "fe6e843a-bcab-4404-85d0-8324794f04cc",
  "title": "TITLE UPDATED",
  "columns": [
    { "id": 65, "column": "TODO", "boardId": "fe6e843a-bcab-4404-85d0-8324794f04cc" },
    { "id": 66, "column": "DONE", "boardId": "fe6e843a-bcab-4404-85d0-8324794f04cc" },
    { "column": "NEW COLUMN", "boardId": "fe6e843a-bcab-4404-85d0-8324794f04cc" }
  ]
}
UPDATE: console.log(req.body)
{
  id: 'fe6e843a-bcab-4404-85d0-8324794f04cc',
  title: 'TITLE UPDATED',
  columns: [
    { id: 65, column: 'TODO', boardId: 'fe6e843a-bcab-4404-85d0-8324794f04cc' },
    { id: 66, column: 'DONE', boardId: 'fe6e843a-bcab-4404-85d0-8324794f04cc' },
    { column: 'NEW COLUMN', boardId: 'fe6e843a-bcab-4404-85d0-8324794f04cc' }
  ]
}
So my two issues are: how do I make update and create work together, and how can I do this dynamically for every entry of the columns array?
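One possible approach (not from the original post, just a sketch under assumptions): map every entry of columns to its own upsert promise and spread them all into the same $transaction. Prisma's upsert always needs a unique value in where, so for entries that don't have an id yet a placeholder value that can never match (here -1, an assumption) forces the create branch; keeping the primary key out of update and create also sidesteps unique-constraint conflicts.
const columns = req.body.columns;

const updateBoardData = prisma.board.update({
  where: { id: req.body.id },
  data: { title: req.body.title },
});

// One upsert per column entry; entries without an id fall through to `create`.
const upsertColumns = columns.map((col) =>
  prisma.column.upsert({
    where: { id: col.id ?? -1 }, // -1 is a placeholder id that never matches, so `create` runs
    update: { column: col.column },
    create: { column: col.column, boardId: col.boardId },
  })
);

try {
  const board = await prisma.$transaction([updateBoardData, ...upsertColumns]);
  return res.status(201).json({ board });
} catch (err) {
  console.log(err);
  return res.status(500).json(`${err}`);
} finally {
  await prisma.$disconnect();
}
This keeps everything in a single transaction, so either the board title and all column upserts succeed together or nothing is written.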

Related

Prisma - create Post with N amount of Categories (explicit Many-to-Many)

I have the most basic explicit Many-to-Many relation:
model Category {
  id    Int            @id @default(autoincrement())
  title String         @db.VarChar(24)
  posts PostCategory[]
}
model Post {
  id         Int            @id @default(autoincrement())
  title      String         @db.VarChar(24)
  categories PostCategory[]
}
model PostCategory {
  category   Category @relation(fields: [categoryId], references: [id])
  categoryId Int
  post       Post     @relation(fields: [postId], references: [id])
  postId     Int

  @@id([categoryId, postId])
  @@unique([categoryId, postId])
}
What I'm now trying to accomplish is to create a new Post with n categories.
So let's say we have a String array with n category titles:
const myStringArray = ["Category1", "Category2", "Category3", ...];
How can I create one query that adds all of them to my newly created post?
If I write it out statically it is not a problem, but how can I handle a list whose size I don't know?
const assignCategories = await prisma.post.create({
  data: {
    title: 'First Post Title',
    categories: {
      create: [
        {
          category: {
            create: {
              title: myStringArray[0],
            },
          },
        },
        {
          category: {
            create: {
              title: myStringArray[1],
            },
          },
        },
      ],
    },
  },
})
You can use Array.map() to map each string element into an object of the required shape:
const myStringArray = ['Category1', 'Category2', 'Category3'];

const assignCategories = await prisma.post.create({
  data: {
    title: 'First Post Title',
    categories: {
      create: myStringArray.map((title) => ({
        category: { create: { title } },
      })),
    },
  },
});
Array.map() docs:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
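As a follow-up (not part of the original answer, just a sketch assuming the same Prisma client and models), you can read the post back with include to confirm the nested writes went through:
// Assumption: same schema as above; `prisma` is an instantiated PrismaClient.
const postWithCategories = await prisma.post.findUnique({
  where: { id: assignCategories.id },
  include: {
    categories: {                    // the PostCategory join rows
      include: { category: true },   // pull in the Category rows themselves
    },
  },
});

console.log(postWithCategories.categories.map((pc) => pc.category.title));
// e.g. [ 'Category1', 'Category2', 'Category3' ]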

Update value inside mongodb array object

I'm trying to update a value inside my array of objects.
Looking at my MongoDB schema, what I want is:
Find the expense whose ID matches the _id and update its fields with the new values from req.body.
I just need to update: expenseType, description, price and status.
The following code is what I tried.
First I need to match the right expense, and that works fine, but when I call house.save() it shows me the message 'house.save is not a function'. So I think maybe I need to use a MongoDB function to get the result.
router.put("/editExpense/:id", ensureAuthenticated, (req, res) => {
var id = mongoose.Types.ObjectId(req.params.id);
House.find(
{ "expensesHouse._id": id },
{
members: 1,
name: 1,
description: 1,
address: 1,
type: 1,
user: 1,
userID: 1,
userType: 1,
expensesHouse: { $elemMatch: { _id: id } },
date: 1
}
).then(house => {
console.log(house);
expenseType = req.body.expenseType;
description = req.body.description;
price = req.body.price;
status = req.body.status;
house.save().then(() => {
req.flash("success_msg", "Expenses Updated");
res.redirect("/houses/dashboard");
});
});
});
****** UPDATED ******
After some searching I found updateOne and, after some adjustments, this is my final result, but this way I delete every record:
router.put("/editExpense/:id", ensureAuthenticated, (req, res) => {
var id = mongoose.Types.ObjectId(req.params.id);
House.updateOne(
{ "expensesHouse._id": id },
{
members: 1,
name: 1,
description: 1,
address: 1,
type: 1,
user: 1,
userID: 1,
userType: 1,
expensesHouse: { $elemMatch: { _id: id } },
date: 1
},
{ $set: { "expensesHouse.expenseType": req.body.expenseType } }
).then(house => {
req.flash("success_msg", "Expenses Updated");
res.redirect("/houses/dashboard");
});
});
*********** RESOLUTION ***********
I just fixed the problem the way shown below.
House.updateOne(
  { "expensesHouse._id": id },
  {
    $set: {
      expensesHouse: {
        expenseType: req.body.expenseType,
        description: req.body.description,
        price: req.body.price,
        status: req.body.status
      }
    }
  }
)
You are really close to the answer; the problem you're having right now is the syntax difference between find and updateOne.
This is what find expects (check the MongoDB docs):
db.collection.find(query, projection)
This is what updateOne expects (check the MongoDB docs):
db.collection.updateOne(
  <filter>,
  <update>,
  {
    upsert: <boolean>,
    writeConcern: <document>,
    collation: <document>,
    arrayFilters: [ <filterdocument1>, ... ],
    hint: <document|string>  // Available starting in MongoDB 4.2.1
  }
)
See the difference? The second parameter should be the update, not a projection. updateOne returns:
- matchedCount containing the number of matched documents
- modifiedCount containing the number of modified documents
- upsertedId containing the _id for the upserted document
- a boolean acknowledged, true if the operation ran with write concern or false if write concern was disabled
So your code should be:
House.updateOne(
  { "expensesHouse._id": id },
  { $set: { "expensesHouse.expenseType": req.body.expenseType } }
).then(house => {
  req.flash("success_msg", "Expenses Updated");
  res.redirect("/houses/dashboard");
});
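If you want to act on the return values listed above (not in the original answer; a small sketch assuming the same route), you can inspect the result of updateOne before redirecting:
House.updateOne(
  { "expensesHouse._id": id },
  { $set: { "expensesHouse.expenseType": req.body.expenseType } }
).then(result => {
  // Driver-style result in recent Mongoose versions; older Mongoose exposes n / nModified instead
  if (result.matchedCount === 0) {
    req.flash("error_msg", "Expense not found");
  } else {
    req.flash("success_msg", "Expenses Updated");
  }
  res.redirect("/houses/dashboard");
});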
House.findOneAndUpdate(
  { userId: req.params.userId },
  { $set: { "expensesHouse.$[element].status": req.body.status } },
  { multi: true, arrayFilters: [{ "element.userID": req.params.subUserId }], new: true }
)
Your API request consists of both IDs (outer as well as inner), like /api/update/:userId/:subUserId.
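For the original goal of updating all four fields of the matched array element without overwriting the rest of the document, a minimal sketch (not taken from the answers above; it assumes the same House model and route) using the positional $ operator would be:
router.put("/editExpense/:id", ensureAuthenticated, (req, res) => {
  const id = mongoose.Types.ObjectId(req.params.id);
  House.updateOne(
    { "expensesHouse._id": id },
    {
      // `$` targets the array element matched by the filter above
      $set: {
        "expensesHouse.$.expenseType": req.body.expenseType,
        "expensesHouse.$.description": req.body.description,
        "expensesHouse.$.price": req.body.price,
        "expensesHouse.$.status": req.body.status
      }
    }
  ).then(() => {
    req.flash("success_msg", "Expenses Updated");
    res.redirect("/houses/dashboard");
  });
});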

Mongoose get collection, where ref [duplicate]

I'm pretty new to Mongoose and MongoDB in general so I'm having a difficult time figuring out if something like this is possible:
Item = new Schema({
  id: Schema.ObjectId,
  dateCreated: { type: Date, default: Date.now },
  title: { type: String, default: 'No Title' },
  description: { type: String, default: 'No Description' },
  tags: [{ type: Schema.ObjectId, ref: 'ItemTag' }]
});
ItemTag = new Schema({
  id: Schema.ObjectId,
  tagId: { type: Schema.ObjectId, ref: 'Tag' },
  tagName: { type: String }
});
var query = Models.Item.find({});
query
  .desc('dateCreated')
  .populate('tags')
  .where('tags.tagName').in(['funny', 'politics'])
  .run(function(err, docs){
    // docs is always empty
  });
Is there a better way to do this?
Edit
Apologies for any confusion. What I'm trying to do is get all Items that contain either the funny tag or politics tag.
Edit
Document without where clause:
[{
_id: 4fe90264e5caa33f04000012,
dislikes: 0,
likes: 0,
source: '/uploads/loldog.jpg',
comments: [],
tags: [{
itemId: 4fe90264e5caa33f04000012,
tagName: 'movies',
tagId: 4fe64219007e20e644000007,
_id: 4fe90270e5caa33f04000015,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
},
{
itemId: 4fe90264e5caa33f04000012,
tagName: 'funny',
tagId: 4fe64219007e20e644000002,
_id: 4fe90270e5caa33f04000017,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
}],
viewCount: 0,
rating: 0,
type: 'image',
description: null,
title: 'dogggg',
dateCreated: Tue, 26 Jun 2012 00:29:24 GMT
}, ... ]
With the where clause, I get an empty array.
With a modern MongoDB greater than 3.2 you can use $lookup as an alternative to .populate() in most cases. This also has the advantage of actually doing the join "on the server" as opposed to what .populate() does, which is actually "multiple queries" to "emulate" a join.
So .populate() is not really a "join" in the sense of how a relational database does it. The $lookup operator on the other hand, actually does the work on the server, and is more or less analogous to a "LEFT JOIN":
Item.aggregate(
  [
    { "$lookup": {
      "from": ItemTags.collection.name,
      "localField": "tags",
      "foreignField": "_id",
      "as": "tags"
    }},
    { "$unwind": "$tags" },
    { "$match": { "tags.tagName": { "$in": [ "funny", "politics" ] } } },
    { "$group": {
      "_id": "$_id",
      "dateCreated": { "$first": "$dateCreated" },
      "title": { "$first": "$title" },
      "description": { "$first": "$description" },
      "tags": { "$push": "$tags" }
    }}
  ],
  function(err, result) {
    // "tags" is now filtered by condition and "joined"
  }
)
N.B. The .collection.name here actually evaluates to the "string" that is the actual name of the MongoDB collection as assigned to the model. Since mongoose "pluralizes" collection names by default and $lookup needs the actual MongoDB collection name as an argument ( since it's a server operation ), then this is a handy trick to use in mongoose code, as opposed to "hard coding" the collection name directly.
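For instance (a tiny illustration, assuming the ItemTag model defined later in this answer), the property simply yields the pluralized name that the server-side $lookup needs:
// Assumes: const ItemTag = mongoose.model('ItemTag', itemTagSchema);
console.log(ItemTag.collection.name); // "itemtags" -- the actual MongoDB collection name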
Whilst we could also use $filter on arrays to remove the unwanted items, this is actually the most efficient form due to Aggregation Pipeline Optimization for the special condition of a $lookup followed by both an $unwind and a $match condition.
This actually results in the three pipeline stages being rolled into one:
{ "$lookup" : {
"from" : "itemtags",
"as" : "tags",
"localField" : "tags",
"foreignField" : "_id",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"tagName" : {
"$in" : [
"funny",
"politics"
]
}
}
}}
This is highly optimal as the actual operation "filters the collection to join first", then it returns the results and "unwinds" the array. Both methods are employed so the results do not break the BSON limit of 16MB, which is a constraint that the client does not have.
The only problem is that it seems "counter-intuitive" in some ways, particularly when you want the results in an array, but that is what the $group is for here, as it reconstructs to the original document form.
It's also unfortunate that we simply cannot at this time actually write $lookup in the same eventual syntax the server uses. IMHO, this is an oversight to be corrected. But for now, simply using the sequence will work and is the most viable option with the best performance and scalability.
Addendum - MongoDB 3.6 and upwards
Though the pattern shown here is fairly optimized due to how the other stages get rolled into the $lookup, it does have one failing in that the "LEFT JOIN" which is normally inherent to both $lookup and the actions of populate() is negated by the "optimal" usage of $unwind here which does not preserve empty arrays. You can add the preserveNullAndEmptyArrays option, but this negates the "optimized" sequence described above and essentially leaves all three stages intact which would normally be combined in the optimization.
MongoDB 3.6 expands with a "more expressive" form of $lookup allowing a "sub-pipeline" expression. Which not only meets the goal of retaining the "LEFT JOIN" but still allows an optimal query to reduce results returned and with a much simplified syntax:
Item.aggregate([
  { "$lookup": {
    "from": ItemTags.collection.name,
    "let": { "tags": "$tags" },
    "pipeline": [
      { "$match": {
        "tags": { "$in": [ "politics", "funny" ] },
        "$expr": { "$in": [ "$_id", "$$tags" ] }
      }}
    ]
  }}
])
The $expr used in order to match the declared "local" value with the "foreign" value is actually what MongoDB does "internally" now with the original $lookup syntax. By expressing in this form we can tailor the initial $match expression within the "sub-pipeline" ourselves.
In fact, as a true "aggregation pipeline" you can do just about anything you can do with an aggregation pipeline within this "sub-pipeline" expression, including "nesting" the levels of $lookup to other related collections.
Further usage is a bit beyond the scope of what the question here asks, but in relation to even "nested population" the new usage pattern of $lookup allows this to be much the same, and a "lot" more powerful in its full usage.
Working Example
The following gives an example using a static method on the model. Once that static method is implemented the call simply becomes:
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
Or enhancing to be a bit more modern even becomes:
let results = await Item.lookup({
path: 'tags',
query: { 'tagName' : { '$in': [ 'funny', 'politics' ] } }
})
Making it very similar to .populate() in structure, but it's actually doing the join on the server instead. For completeness, the usage here casts the returned data back to mongoose document instances according to both the parent and child cases.
It's fairly trivial and easy to adapt or just use as is for most common cases.
N.B. The use of async here is just for brevity of running the enclosed example. The actual implementation is free of this dependency.
const async = require('async'),
mongoose = require('mongoose'),
Schema = mongoose.Schema;
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
mongoose.connect('mongodb://localhost/looktest');
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt,callback) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
this.aggregate(pipeline,(err,result) => {
if (err) callback(err);
result = result.map(m => {
m[opt.path] = m[opt.path].map(r => rel(r));
return this(m);
});
callback(err,result);
});
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
function log(body) {
console.log(JSON.stringify(body, undefined, 2))
}
async.series(
[
// Clean data
(callback) => async.each(mongoose.models,(model,callback) =>
model.remove({},callback),callback),
// Create tags and items
(callback) =>
async.waterfall(
[
(callback) =>
ItemTag.create([{ "tagName": "movies" }, { "tagName": "funny" }],
callback),
(tags, callback) =>
Item.create({ "title": "Something","description": "An item",
"tags": tags },callback)
],
callback
),
// Query with our static
(callback) =>
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
],
(err,results) => {
if (err) throw err;
let result = results.pop();
log(result);
mongoose.disconnect();
}
)
Or a little more modern for Node 8.x and above with async/await and no additional dependencies:
const { Schema } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [opt.path]: m[opt.path].map(r => rel(r)) })
));
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.create(
["movies", "funny"].map(tagName =>({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
const result = (await Item.lookup({
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
mongoose.disconnect();
} catch (e) {
console.error(e);
} finally {
process.exit()
}
})()
And from MongoDB 3.6 and upward, even without the $unwind and $group building:
const { Schema, Types: { ObjectId } } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
},{ timestamps: true });
itemSchema.statics.lookup = function({ path, query }) {
let rel =
mongoose.model(this.schema.path(path).caster.options.ref);
// MongoDB 3.6 and up $lookup with sub-pipeline
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": path,
"let": { [path]: `$${path}` },
"pipeline": [
{ "$match": {
...query,
"$expr": { "$in": [ "$_id", `$$${path}` ] }
}}
]
}}
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [path]: m[path].map(r => rel(r)) })
));
};
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.insertMany(
["movies", "funny"].map(tagName => ({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
let result = (await Item.lookup({
path: 'tags',
query: { 'tagName': { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
await mongoose.disconnect();
} catch(e) {
console.error(e)
} finally {
process.exit()
}
})()
What you are asking for isn't directly supported, but it can be achieved by adding another filter step after the query returns.
First, .populate( 'tags', null, { tagName: { $in: ['funny', 'politics'] } } ) is definitely what you need to do to filter the tags documents. Then, after the query returns, you'll need to manually filter out documents that don't have any tags docs that matched the populate criteria. Something like:
query....
  .exec(function(err, docs){
    docs = docs.filter(function(doc){
      return doc.tags.length;
    })
    // do stuff with docs
  });
Try replacing
.populate('tags').where('tags.tagName').in(['funny', 'politics'])
by
.populate( 'tags', null, { tagName: { $in: ['funny', 'politics'] } } )
Update: Please take a look at the comments - this answer does not correctly match the question, but maybe it answers other questions from users who come across it (I think so because of the upvotes), so I will not delete this "answer":
First: I know this question is really outdated, but I searched for exactly this problem and this SO post was Google result #1. So I implemented the docs.filter version (accepted answer), but as I read in the mongoose v4.6.0 docs we can now simply use:
Item.find({}).populate({
  path: 'tags',
  match: { tagName: { $in: ['funny', 'politics'] }}
}).exec((err, items) => {
  console.log(items.tags)
  // contains only tags where tagName is 'funny' or 'politics'
})
Hope this helps future search machine users.
After having the same problem myself recently, I've come up with the following solution:
First, find all ItemTags where tagName is either 'funny' or 'politics' and return an array of ItemTag _ids.
Then, find Items which contain all ItemTag _ids in the tags array
ItemTag
  .find({ tagName: { $in: ['funny', 'politics'] } })
  .lean()
  .distinct('_id')
  .exec((err, itemTagIds) => {
    if (err) { console.error(err); }
    Item.find({ tags: { $all: itemTagIds } }, (err, items) => {
      console.log(items); // Items filtered by tagName
    });
  });
@aaronheckmann's answer worked for me, but I had to replace return doc.tags.length; with return doc.tags != null; because that field contains null if it doesn't match the conditions written inside populate.
So the final code:
query....
  .exec(function(err, docs){
    docs = docs.filter(function(doc){
      return doc.tags != null;
    })
    // do stuff with docs
  });

append to list if exist or add list in dynamoDB

I have a product table in DynamoDB which has some items. Now I need to add a list of buyers to the product, which can grow, i.e. append to the list. It works if I already have an empty list or a list with some items in the table item, but for the first addition it throws an error. Is there any way to check if the list exists, then append to it, else add the list? Here is my code:
let params = {
  TableName: "product",
  ExpressionAttributeNames: {
    "#Y": "buyer"
  },
  ExpressionAttributeValues: {
    ":y": ["PersonXYZ"]
  },
  Key: {
    id: 'Hy2H4Z-lf'
  },
  UpdateExpression: "SET #Y = list_append(#Y,:y)"
};
updateItemInDDB(params).then((data) => {
  res.status(200).send(data);
}, err => {
  console.log(err);
  res.sendStatus(500);
});
updateItemInDDB is just a function which takes params and runs the DynamoDB call. I am using the JavaScript SDK for DynamoDB with the Document Client.
var params = {
  TableName: "user",
  Key: {
    "user_id": {
      S: user_id
    }
  },
  UpdateExpression: "SET #ri = list_append(if_not_exists(#ri, :empty_list), :vals)",
  ExpressionAttributeNames: {
    "#ri": "images"
  },
  ExpressionAttributeValues: {
    ":vals": { "L": [{
      "S": "dummy data"
    }]},
    ":empty_list": { "L": [] }
  },
  ReturnValues: 'ALL_NEW'
};
":empty_list": {"L": []}
this is important if you want to set empty list, otherwise it will give below exception.
"errorMessage": "ExpressionAttributeValues contains invalid value: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes for key :empty_list",
"errorType": "ValidationException"
EDIT: Nest the conditional expressions
You could run SET with list_append and a ConditionExpression that the attribute does exist; then, if that fails, run SET with a ConditionExpression that the attribute does not exist.
let params1 = {
  TableName: "product",
  ExpressionAttributeNames: {
    "#Y": "buyer"
  },
  ExpressionAttributeValues: {
    ":y": ["PersonXYZ"]
  },
  Key: {
    id: 'Hy2H4Z-lf'
  },
  ConditionExpression: "attribute_exists(buyer)",
  UpdateExpression: "SET #Y = list_append(#Y,:y)"
};
updateItemInDDB(params1).then((data) => {
  res.status(200).send(data);
}, err => {
  console.log(err);
  let params2 = {
    TableName: "product",
    ExpressionAttributeNames: {
      "#Y": "buyer"
    },
    ExpressionAttributeValues: {
      ":y": ["PersonXYZ"]
    },
    Key: {
      id: 'Hy2H4Z-lf'
    },
    ConditionExpression: "attribute_not_exists(buyer)",
    UpdateExpression: "SET #Y = :y"
  };
  updateItemInDDB(params2).then((data) => {
    res.status(200).send(data);
  }, err => {
    console.log(err);
    res.sendStatus(500);
  });
});
