Mongoose embedded document updating - javascript

I have a problem updating an embedded document.
My schemas are defined as follows:
var Talk = new Schema({
  title: {
    type: String,
    required: true
  },
  content: {
    type: String,
    required: true
  },
  date: {
    type: Date,
    required: true
  },
  comments: {
    type: [Comments],
    required: false
  },
  vote: {
    type: [VoteOptions],
    required: false
  }
});

var VoteOptions = new Schema({
  option: {
    type: String,
    required: true
  },
  count: {
    type: Number,
    required: false
  }
});
Now I would like to increment vote.count, given a Talk id and a VoteOption id. I have the following function to do the job:
function makeVote(req, res) {
  Talk.findOne(req.params.id, function(err, talk) {
    for (var i = 0; i < talk.vote.length; i++) {
      if (talk.vote[i]._id == req.body.vote) {
        talk.vote[i].count++;
      }
    }
    talk.save(function(err) {
      if (err) {
        req.flash('error', 'Error: ' + err);
        res.send('false');
      } else {
        res.send('true');
      }
    });
  });
}
Everything executes and I get back the res.send('true'), but the value of count does not change.
When I added some console.log calls I saw that the value did change in memory, but talk.save just doesn't persist it to the db.
Also I'm quite unhappy about looping just to find the embedded doc by _id. In the Mongoose documentation I read about talk.vote.id(my_id), but that gives me an error saying there is no id function.

When updating a Mixed type (which seems to be anything other than a basic type, so it also includes embedded documents), one has to call .markModified on the document. In this case, it would be:
talk.markModified("vote"); // tell Mongoose that `talk.vote` has been modified
talk.save(function(err) {
  // ...
});
Hope this helps someone in the future since I couldn't find the answer very quickly.
Reference:
... Mongoose loses the ability to auto detect/save those changes. To "tell" Mongoose that the value of a Mixed type has changed, call the .markModified(path) method of the document passing the path to the Mixed type you just changed.
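A side note on the .id() error from the question: the .id() helper only exists when Mongoose sees a real subdocument array. In the schema above, VoteOptions is declared after it is used in Talk, so type: [VoteOptions] receives undefined at declaration time, which would also explain the Mixed behaviour. A minimal sketch assuming the declaration order is swapped:

// Hedged sketch: declare the child schema first so `[VoteOptions]` is a real
// subdocument array rather than an array of Mixed.
var VoteOptions = new Schema({
  option: { type: String, required: true },
  count: { type: Number, required: false }
});

var Talk = new Schema({
  // ... title, content, date, comments ...
  vote: { type: [VoteOptions], required: false }
});

// With a proper DocumentArray the lookup loop collapses to:
var option = talk.vote.id(req.body.vote);
if (option) {
  option.count++;
}
talk.save(function(err) { /* ... */ });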

It's because you are trying to save your talk object before the callback which increments count has been fired. Also, did you make sure to instantiate your Talk schema? eg:
var talk = new Talk();
However, if all you want to do is increment your count variable, mongo supports atomic, in-place updates which you may find useful:
Talk.update({ _id: req.params.id, "vote._id": req.body.vote }, { $inc: { "vote.$.count": 1 } }, callback);
have a look at:
http://www.mongodb.org/display/DOCS/Updating#Updating-%24inc

Related

Mongoose: only one unique boolean key should be true

I have two schema collections:
Campaigns: {
  _id: "someGeneratedID",
  // ...a lot of key value pairs.
  // then I have teams, which is an array of teams from the Team schema.
  teams: [{ type: mongoose.Schema.Types.ObjectId, ref: "Team" }],
}

Teams: {
  campaignId: { type: mongoose.Schema.Types.ObjectId, ref: "Campaign" },
  isDefault: { type: Boolean, default: false },
}
Now I would like that, when I add teams to the collection, it throws an error if more than one team has isDefault: true for the same campaignId.
So the following shouldn't be allowed:
teams: [
  {
    campaignId: 1,
    teamName: "John Doe",
    isDefault: true,
  },
  {
    campaignId: 1,
    teamName: "Jane Doe",
    isDefault: true
  }
]
I found this answer on SO:
teamSchema.index(
  { isDefault: 1 },
  {
    unique: true,
    partialFilterExpression: { isDefault: true },
  }
);
But I couldn't manage to also check for the campaignId.
Thanks in advance.
ps: can you also provide an explanation for what's happening in the index method?
I think that the simplest way to approach this is via Mongoose's pre('save') middleware. It gives you a chance, before each team is saved, to check the collection for another team in the same campaign that is already set as default.
teamSchema.pre("save", async function (next) {
try {
if ((this.isNew || this.isModified("isDefault") && this.isDefault) {
const previousDefault = await mongoose.models["Team"].findOne({ isDefault: true, campaignId: this.campaignId });
if (previousDefault) {
throw new Error('There is already default team for this campaign');
}
}
next();
} catch (error) {
throw error;
}
});
This way, if any team, either new or already existing, is set as default for a given campaign, the collection is searched before the record is saved for another team in that campaign with isDefault already set to true. If at least one is found, we throw an error. If not, next() lets the save() go ahead.
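As for the index from the question: teamSchema.index() declares a MongoDB index on the collection, unique: true makes the indexed key unique, and partialFilterExpression restricts that uniqueness check to documents matching the filter. To also take the campaign into account, campaignId can simply be added to the index key. A sketch of that alternative (partial indexes need MongoDB 3.2+):

// Uniqueness is only enforced on documents where isDefault is true, and because
// campaignId is part of the key, each campaign can have at most one default team.
// Inserting a second default team for the same campaign fails with a duplicate key error.
teamSchema.index(
  { campaignId: 1, isDefault: 1 },
  {
    unique: true,
    partialFilterExpression: { isDefault: true }
  }
);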

Conditionally Update/Insert and Add To Array

I have a .tsv file with some order information. After reworking it in my script I got this:
[{"order":"5974842dfb458819244adbf7","name":"Сергей Климов","email":"wordkontent#gmail.com"},
{"order":"5974842dfb458819244adbf8","name":"Сушков А.В.","email":"mail#wwwcenter.ru"},
{"order":"5974842dfb458819244adbf9","name":"Виталий","email":"wawe2012#mail.ru"},
...
and so on
I have a schema in Mongoose:
var ClientSchema = mongoose.Schema({
  name: {
    type: String
  },
  email: {
    type: String,
    unique: true,
    required: true,
    index: true
  },
  forums: {
    type: String
  },
  other: {
    type: String
  },
  status: {
    type: Number,
    default: 3
  },
  subscribed: {
    type: Boolean,
    default: true
  },
  clienturl: {
    type: String
  },
  orders: {
    type: [String]
  }
});
clienturl is an 8-character password generated by a function.
module.exports.arrayClientSave = function(clientsArray, callback){
  let newClientsArray = clientsArray
    .map(function(x) {
      var randomstring = Math.random().toString(36).slice(-8);
      x.clienturl = randomstring;
      return x;
    });
  console.log(newClientsArray);
  Client.update( ??? , callback );
}
But I don't understand how to write the update. If the email already exists, push to the orders array but don't rewrite the other fields; if the email doesn't exist, save a new user with clienturl and so on. Thanks!
Probably the best way to handle this is via .bulkWrite(), which is a MongoDB method for sending "multiple operations" in a "single" request with a "single" response. This removes the need to control the async request and response for each "looped" item yourself.
module.exports.arrayClientSave = function(clientsArray, callback){
  let newClientsArray = clientsArray
    .map(x => {
      var randomstring = Math.random().toString(36).slice(-8);
      x.clienturl = randomstring;
      return x;
    });
  console.log(newClientsArray);

  let ops = newClientsArray.map( x => (
    { "updateOne": {
      "filter": { "email": x.email },
      "update": {
        "$addToSet": { "orders": x.order },
        "$setOnInsert": {
          "name": x.name,
          "clientUrl": x.clienturl
        }
      },
      "upsert": true
    }}
  ));

  Client.bulkWrite(ops, callback);
};
The main idea there is to use the "upsert" functionality of MongoDB to drive the "create or update" behaviour. The $addToSet only appends the "orders" entry to the array when it is not already present, and $setOnInsert only takes effect when the operation actually results in an "upsert"; it is not applied when the filter matches an existing document.
Also, by applying this within .bulkWrite() this becomes a "single async call" when talking to a MongoDB server that supports it, which is any version of MongoDB from 2.6 onwards.
However, the main point of the specific .bulkWrite() API is that the API itself will "detect" whether the server it is connected to actually supports "Bulk" operations. When it does not, this "downgrades" to individual "async" calls instead of one batch. But this is controlled by the "driver", and it will still interact with your code as if it were actually one request and response.
This means all the difficulty of dealing with the "async loop" is actually handled in the driver software itself, being either negated by the supported method or "emulated" in a way that makes it simple for your code to just use.
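For completeness, a hypothetical caller might look like this (the module name and the rows variable are placeholders; rows stands for the parsed .tsv array from the question, and the exact shape of the result object depends on the driver version, but it reports matched/modified/upserted counts):

clientController.arrayClientSave(rows, function(err, result) {
  if (err) return console.error(err);
  console.log(result); // e.g. how many clients were upserted vs. modified
});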

Mongoose/MongoDB, how to validate an array of Ids against another model

I have 2 Mongoose schemas:
var Schema2 = new Schema({
  creator: { type: String, ref: 'User' },
  schema_name: [{ type: String }],
});

var Schema1 = new Schema({
  creator: { type: String, ref: 'User' },
  schema_ref: [{ type: String, ref: 'Schema2' }],
});
I would like to know the best practice for checking, when I create a new Schema1 document, that every element of the schema_ref array has the same creator.
Schema1 elements are added via a client form, so I have to check that the schema_ref elements are owned by the same User that sent the form.
You can try either a validator function or a simple 'save' middleware:
Schema1.pre('save', function(next) {
  let owner;
  for (let entry of this.schema_ref) {
    if (!owner) {
      owner = entry;
    } else {
      if (entry !== owner) {
        return next(new Error("owner mismatch"));
      }
    }
  }
  next();
});
Also, your schema might not work as you expect it to; it looks like you actually need:
schema_ref: [{
  type: { type: String },
  ref: "User"
}]
Additionally, take a look at the id-validator plugin, or something similar to it: in addition to your validation, it will also check that all ref-type properties like this actually exist in the other (Users) collection.
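If you want the check to compare against the creator stored on the referenced Schema2 documents (rather than comparing the ids to each other), a path validator that queries the other model is another option. A rough sketch, assuming the model is registered as 'Schema2', that creator is a plain string on both schemas, and a Mongoose version in which validators may return a promise:

Schema1.path('schema_ref').validate(function(ids) {
  var creator = this.creator;
  // Load the referenced docs and check that they all belong to this document's creator.
  return mongoose.model('Schema2')
    .find({ _id: { $in: ids } })
    .then(function(docs) {
      return docs.length === ids.length &&
        docs.every(function(doc) { return doc.creator === creator; });
    });
}, 'schema_ref contains documents owned by a different creator');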

Update embedded document mongoose

I'm looking for an easy way of updating an embedded document using mongoose without having to set each specific field manually. Looking at the accepted answer to this question, once you find the embedded document that you want to update you have to actually set each respective property and then save the parent. What I would prefer to do is pass in an update object and let MongoDB set the updates.
e.g. if I was updating a regular (non embedded) document I would do this:
models.User.findOneAndUpdate({ _id: req.params.userId }, req.body.user, function(err, user) {
  err ? resp.status(500).send(err) : user ? resp.send(user) : resp.status(404).send();
});
Here I don't actually have to go through each property in req.body.user and set the changes. I can't find a way of doing this kind of thing with subdocuments as well?
My Schema is as follows:
var UserSchema = BaseUserSchema.extend({
  isActivated: { type: Boolean, required: true },
  files: [FileSchema]
});

var FileSchema = new mongoose.Schema({
  name: { type: String, required: true },
  size: { type: Number, required: true },
  type: { type: String, required: true },
});
And I'm trying to update a file based on user and file id.
Do I need to create a helper function to set the values, or is there a MongoDB way of doing this ?
Many thanks.
Well, presuming that you have your "filedata" in a variable, and of course the user _id that you are updating, then you want the $set operator:
var user = { /* The user information, at least the _id */ };
var filedata = { /* From somewhere with _id, name, size, type */ };

models.User.findOneAndUpdate(
  { "_id": user._id, "files._id": filedata._id },
  {
    "$set": {
      "files.$.name": filedata.name,
      "files.$.size": filedata.size,
      "files.$.type": filedata.type
    }
  },
  function(err, user) {
    // Whatever in here, such as a message, but the update is already done.
  }
);
Or really, just $set only the fields that you actually mean to "update", as long as you know which ones you mean. So if you only need to change the "size", then just set that, for example.
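To get closer to the "just pass in an update object" behaviour asked about in the question, the $set paths can also be built dynamically so only the supplied fields are touched. A small sketch; req.body.file and req.params.fileId are assumed names for the incoming partial file data and the file's id:

var updates = {};
Object.keys(req.body.file).forEach(function(key) {
  // Prefix each incoming field with the positional path into the matched files element.
  updates["files.$." + key] = req.body.file[key];
});

models.User.findOneAndUpdate(
  { "_id": req.params.userId, "files._id": req.params.fileId },
  { "$set": updates },
  function(err, user) {
    // handle err / send the updated user back
  }
);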

MongoDB query on populated fields

I have models called "Activities" that I am querying for (using Mongoose). Their schema looks like this:
var activitySchema = new mongoose.Schema({
  actor: {
    type: mongoose.Schema.ObjectId,
    ref: 'User',
    required: true
  },
  recipient: {
    type: mongoose.Schema.ObjectId,
    ref: 'User'
  },
  timestamp: {
    type: Date,
    default: Date.now
  },
  activity: {
    type: String,
    required: true
  },
  event: {
    type: mongoose.Schema.ObjectId,
    ref: 'Event'
  },
  comment: {
    type: mongoose.Schema.ObjectId,
    ref: 'Comment'
  }
});
When I query for them, I am populating the actor, recipient, event, and comment fields (all the references). After that, I also deep-populate the event field to get event.creator. Here is my code for the query:
var activityPopulateObj = [
      { path: 'event' },
      { path: 'event.creator' },
      { path: 'comment' },
      { path: 'actor' },
      { path: 'recipient' },
      { path: 'event.creator' }
    ],
    eventPopulateObj = {
      path: 'event.creator',
      model: User
    };
Activity.find({ $or: [{ recipient: user._id }, { actor: { $in: user.subscriptions } }, { event: { $in: user.attending } }], actor: { $ne: user._id } })
  .sort({ _id: -1 })
  .populate(activityPopulateObj)
  .exec(function(err, retrievedActivities) {
    if (err || !retrievedActivities) {
      deferred.reject(new Error("No events found."));
    }
    else {
      User.populate(retrievedActivities, eventPopulateObj, function(err, data) {
        if (err) {
          deferred.reject(err.message);
        }
        else {
          deferred.resolve(retrievedActivities);
        }
      });
    }
  });
This is already a relatively complex query, but I need to do even more. If it hits the part of the $or statement that says {actor: {$in: user.subscriptions}}, I also need to make sure that the event's privacy field is equal to the string public. I tried using $elemMatch, but since the event has to be populated first, I couldn't query any of its fields. I need to achieve this same goal in multiple other queries, as well.
Is there any way for me to achieve this further filtering like I have described?
The answer is to change your schema.
You've fallen into the trap that many devs have before you when coming into document database development from a history of using relational databases: MongoDB is not a relational database and should not be treated like one.
You need to stop thinking about foreign keys and perfectly normalized data and instead, keep each document as self-contained as possible, thinking about how to best embed relevant associated data within your documents.
This doesn't mean you can't maintain associations as well. It might mean a structure like this, where you embed only necessary details, and query for the full record when needed:
var activitySchema = new mongoose.Schema({
  event: {
    _id: { type: ObjectId, ref: "Event" },
    name: String,
    private: String
  },
  // ... other fields
});
Rethinking your embed strategy will greatly simplify your queries and keep the query count to a minimum. populate will blow your count up quickly, and as your dataset grows this will very likely become a problem.
You can try the aggregation approach from this answer: https://stackoverflow.com/a/49329687/12729769
You can then filter on the fields created with $addFields in your query, e.g.:
{score: {$gte: 5}}
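For reference, a rough, hypothetical sketch of what such a pipeline could look like (this is not the pipeline from the linked answer): on MongoDB 3.2+ the "filter on a populated field" problem can be expressed with $lookup in an aggregation. The 'events' collection name and the field names are assumptions taken from the schemas above, and aggregate() does not cast, so the ids must already be ObjectIds:

Activity.aggregate([
  { $match: { actor: { $ne: user._id } } },
  // Join the referenced event document instead of populating it afterwards.
  { $lookup: { from: 'events', localField: 'event', foreignField: '_id', as: 'event' } },
  { $unwind: { path: '$event', preserveNullAndEmptyArrays: true } },
  { $match: { $or: [
    { recipient: user._id },
    { actor: { $in: user.subscriptions }, 'event.privacy': 'public' },
    { 'event._id': { $in: user.attending } }
  ] } },
  { $sort: { _id: -1 } }
]).exec(function(err, activities) {
  // activities now carry the joined event; other refs can still be populated afterwards
});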
but since the event has to be populated first, I couldn't query any of its fields.
No can do. Mongodb cannot do joins. When you make a query, you can work with exactly one collection at a time. And FYI all those mongoose populates are additional, distinct database queries to load those records.
I don't have time to dive into the details of your schema and application, but most likely you will need to denormalize your data and store a copy of whatever event fields you need to join on in the primary collection.
