Following is my user schema in user.js model -
var userSchema = new mongoose.Schema({
    local: {
        name: { type: String },
        email: { type: String, required: true, unique: true },
        password: { type: String, required: true },
    },
    facebook: {
        id: { type: String },
        token: { type: String },
        email: { type: String },
        name: { type: String }
    }
});

var User = mongoose.model('User', userSchema);
module.exports = User;
This is how I am using it in my controller -
var user = require('./../models/user.js');
This is how I am saving it in the db -
user({ 'local.email': req.body.email, 'local.password': req.body.password }).save(function(err, result) {
    if (err)
        res.send(err);
    else {
        console.log(result);
        req.session.user = result;
        res.send({ "code": 200, "message": "Record inserted successfully" });
    }
});
Error -
{"name":"MongoError","code":11000,"err":"insertDocument :: caused by :: 11000 E11000 duplicate key error index: mydb.users.$email_1 dup key: { : null }"}
I checked the db collection and no such duplicate entry exists. Can someone tell me what I am doing wrong?
FYI - req.body.email and req.body.password are fetching values.
I also checked this post, but it was no help: STACK LINK
If I remove it completely then it inserts the document; otherwise it throws the "duplicate key" error even though I do have a value in local.email.
The error message is saying that there's already a record with null as the email. In other words, you already have a user without an email address.
The relevant documentation for this:
If a document does not have a value for the indexed field in a unique index, the index will store a null value for this document. Because of the unique constraint, MongoDB will only permit one document that lacks the indexed field. If there is more than one document without a value for the indexed field or is missing the indexed field, the index build will fail with a duplicate key error.
You can combine the unique constraint with the sparse index to filter these null values from the unique index and avoid the error.
unique indexes
Sparse indexes only contain entries for documents that have the indexed field, even if the index field contains a null value.
In other words, a sparse index is ok with multiple documents all having null values.
sparse indexes
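In Mongoose terms, that combination can be declared directly on the field. A minimal sketch using the field from the question (illustrative only; remember that an already-built index has to be dropped before the new definition takes effect):

// Sketch: unique + sparse on the email field, so multiple documents may omit the email
email: { type: String, unique: true, sparse: true }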
From comments:
Your error says that the key is named mydb.users.$email_1, which makes me suspect that you have an index on both users.email and users.local.email (the former being old and unused at the moment). Removing a field from a Mongoose model doesn't affect the database. Check with db.users.getIndexes() whether this is the case, and manually remove the unwanted index with db.users.dropIndex(<name>).
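A rough shell session for that check (database, collection, and index names follow the error message in the question; the names in your output may differ):

use mydb
db.users.getIndexes()           // look for a stale index such as { "email": 1 }, typically named "email_1"
db.users.dropIndex("email_1")   // drop only the unwanted index, leaving the rest of the data alone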
If you are still in your development environment, I would drop the entire db and start over with your new schema.
From the command line
➜ mongo
use dbName;
db.dropDatabase();
exit
I want to explain the answer/solution to this like I am explaining to a 5-year-old, so everyone can understand.
I have an app. I want people to register with their email, password and phone number.
In my MongoDB database, I want to identify people uniquely based on both their phone numbers and email - so this means that both the phone number and the email must be unique for every person.
However, there is a problem: I have realized that everyone has a phone number, but not everyone has an email address.
Those that don't have an email address have promised me that they will have one by next week. But I want them registered anyway - so I tell them to proceed registering their phone numbers and leave the email input field empty.
They do so.
My database NEEDS a unique email address field - but I have a lot of people with 'null' as their email address. So I go to my code and tell my database schema to allow empty/null email address fields, which I will later fill in with unique email addresses when the people who promised to add their emails to their profiles do so next week.
So it's now a win-win for everyone (but you ;-] ): the people register, I am happy to have their data... and my database is happy because it is being used nicely... but what about you? I am yet to give you the code that made the schema.
Here is the code :
NOTE: The sparse property on email is what tells my database to allow null values, which will later be filled with unique values.
var userSchema = new mongoose.Schema({
    local: {
        name: { type: String },
        email: { type: String, index: true, unique: true, sparse: true },
        password: { type: String, required: true },
    },
    facebook: {
        id: { type: String },
        token: { type: String },
        email: { type: String },
        name: { type: String }
    }
});

var User = mongoose.model('User', userSchema);
module.exports = User;
I hope I have explained it nicely.
Happy NodeJS coding / hacking!
In this situation, log in to Mongo, find the index that you are not using anymore (in the OP's case 'email'), and then select Drop Index.
Check your collection indexes.
I had this issue due to outdated indexes in the collection for fields that are now stored under a different path.
Mongoose adds an index when you specify a field as unique.
Basically, this error is saying that you had a unique index on a particular field, for example "email_address", so MongoDB expects a unique email address value for each document in the collection.
So let's say that earlier in your schema the unique index was not defined, and then you signed up two users with the same email address or with no email address (null value).
Later, you saw that there was a mistake, so you tried to correct it by adding a unique index to the schema. But your collection already has duplicates, so the error message says that you can't insert a duplicate value again.
You essentially have three options:
Drop the collection
db.users.drop();
Find the document which has that value and delete it. Let's say the value was null; you can delete it using:
db.users.remove({ email_address: null });
Drop the Unique index:
db.users.dropIndex(indexName)
I hope this helped :)
Edit: This solution still works in 2023, and you don't need to drop your collection or lose any data.
Here's how I solved the same issue in September 2020. There is a super-fast and easy way to do it from MongoDB Atlas (cloud and desktop). Probably it was not that easy before? That is why I feel I should write this answer in 2020.
First of all, I read above some suggestions about changing the "unique" option on the Mongoose schema. If you came across this error, I assume you already changed your schema, but despite that you got a 500 as your response, and notice this: it's complaining about a duplicate KEY! If the problem were caused by the schema configuration (and assuming you have configured decent middleware to log Mongo errors), the response would be a 400.
Why this happens (at least the main reason)
Why is that? In my case it was simple: that field in the schema used to accept only unique values, but I had just changed it to accept repeated values. MongoDB creates indexes for fields with unique values in order to retrieve the data faster, so in the past Mongo created that index for the field, and even after setting the "unique" property to "false" in the schema, MongoDB was still using that index and treating the field as if it had to be unique.
How to solve it
Drop that index. You can do it in two seconds from Mongo Atlas or by running a command in the mongo shell. For the sake of simplicity I will show the first one, for users who are not using the mongo shell.
Go to your collection. By default you are on the "Find" tab. Just select the next one to the right: "Indexes". You will see that there is still an index on the field that is causing you trouble. Just click the button "Drop Index". Done.
So don't drop your database every time this happens
I believe this is a better option than just dropping your entire database or even the collection. Basically, this is also why it works after dropping the entire collection: Mongo is not going to set an index for that field if your first entry uses your new schema with "unique: false".
I faced a similar issue.
I just cleared the indexes of the particular fields, and then it worked for me.
https://docs.mongodb.com/v3.2/reference/method/db.collection.dropIndexes/
This is my relevant experience:
In the 'User' schema, I set 'name' as a unique key and then ran some code, which I think set up the database structure.
Then I changed the unique key to 'username', and no longer passed a 'name' value when I saved data to the database. So MongoDB may automatically set the 'name' value of a new record to null, which is a duplicate key. I tried setting the 'name' key as a non-unique key, {name: {unique: false, type: String}}, in the 'User' schema in order to override the original setting. However, it did not work.
At last, I made my own solution:
Just set a random value that is unlikely to be duplicated for the 'name' key when you save your data record. Simply '' + Math.random() + Math.random() makes a random string.
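A hedged sketch of that workaround; the User model and the other fields here are hypothetical placeholders, the random 'name' value is the only point:

// Hypothetical sketch: fill the unused, still-indexed 'name' field with a
// throwaway random string so it never collides on null under the old unique index.
new User({
    username: req.body.username,
    name: '' + Math.random() + Math.random()
}).save(function(err, result) {
    if (err) return res.send(err);
    res.send(result);
});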
I had the same issue. I tried debugging in different ways but couldn't figure it out. I tried dropping the collection and it worked fine after that. Although this is not a good solution if your collection has many documents, if you are in the early stage of development, try dropping the collection.
db.users.drop();
I solved my problem this way:
Just go to your MongoDB account -> Atlas collections and drop that collection, or open MongoDB Compass and drop it there.
This sometimes happens when you have saved something null in the database.
This is because there is already a collection with the same name and its old configuration. Just drop the collection from your MongoDB through the mongo shell and try again.
db.collectionName.drop()
Now run your application; it should work.
I had a similar problem and I realized that by default Mongo only supports one schema per collection. Either store your new schema in a different collection, or delete the existing documents with the incompatible schema from your current collection. Or find a way to have more than one schema per collection.
I got this same issue when I had the following configuration in my config/models.js
module.exports.models = {
    connection: 'mongodb',
    migrate: 'alter'
}
Changing migrate from 'alter' to 'safe' fixed it for me.
module.exports.models = {
    connection: 'mongodb',
    migrate: 'safe'
}
I had the same issue after removing properties from a schema on which some indexes had already been built on save. Removing a property from the schema leads to a null value for the now non-existent property, which still had an index. Dropping the index, or starting with a new collection from scratch, helps here.
Note: the error message will point you there in that case; it contains a path that does not exist anymore. In my case the old path was ...$uuid_1 (this is an index!), but the new one is ....*priv.uuid_1
I also faced this issue and I solved it.
This error shows that the email is already present there. So you just need to remove this line from your model for the email attribute:
unique: true
It is possible that even then it won't work. In that case, just delete the collection from your MongoDB and restart your server.
It's not a big issue, but beginner-level developers like me wonder what kind of error this is and end up wasting a lot of time solving it.
Actually, if you delete the db, create the db once again, and then try to create the collection, it will work properly.
➜ mongo
use dbName;
db.dropDatabase();
exit
Drop your database, then it will work.
You can perform the following steps to drop your database:
Step 1: Go to the MongoDB installation directory; the default dir is "C:\Program Files\MongoDB\Server\4.2\bin"
Step 2: Start mongod.exe directly or using the command prompt, and minimize it.
Step 3: Start mongo.exe directly or using the command prompt and run the following commands:
i) use yourDatabaseName (use show databases if you don't remember the database name)
ii) db.dropDatabase()
This will remove your database.
Now you can insert your data; it won't show the error, and it will automatically add the database and collection.
I had the same issue when I tried to modify a schema defined using Mongoose. I think the issue is due to some underlying work done when creating a collection, like building the indexes, which is hidden from the user (at least in my case). So the best solution I found was to drop the entire collection and start again.
If you are in the early stages of development: eliminate the collection. Otherwise: add this to each attribute that gives you the error:
index:true,
unique:true,
sparse:true
In my case, I just forgot to return res.status(400) after finding that a user with req.email already exists.
Go to your database, click on that particular collection, and delete all the indexes except _id.
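If you'd rather do this from the shell, dropIndexes() has roughly that effect: called with no arguments it removes every index on the collection except the default _id index. The collection name here is taken from the question:

db.users.dropIndexes()   // drops all indexes except _id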
I've read the Firebase docs on Structuring Data. Data storage is cheap, but the user's time is not. We should optimize for get operations, and write in multiple places.
So then I might store a list node and a list-index node, with some duplicated data between the two, at the very least the list name.
I'm using ES6 and promises in my JavaScript app to handle the async flow, mainly for fetching a ref key from Firebase after the first data push.
let addIndexPromise = new Promise((resolve, reject) => {
    let newRef = ref.child('list-index').push(newItem);
    resolve(newRef.key()); // ignore reject() for brevity
});

addIndexPromise.then(key => {
    ref.child('list').child(key).set(newItem);
});
How do I make sure the data stays in sync in all places, knowing my app runs only on the client?
For sanity check, I set a setTimeout in my promise and shut my browser before it resolved, and indeed my database was no longer consistent, with an extra index saved without a corresponding list.
Any advice?
Great question. I know of three approaches to this, which I'll list below.
I'll take a slightly different example for this, mostly because it allows me to use more concrete terms in the explanation.
Say we have a chat application, where we store two entities: messages and users. In the screen where we show the messages, we also show the name of the user. So to minimize the number of reads, we store the name of the user with each chat message too.
users
    so:209103
        name: "Frank van Puffelen"
        location: "San Francisco, CA"
        questionCount: 12
    so:3648524
        name: "legolandbridge"
        location: "London, Prague, Barcelona"
        questionCount: 4
messages
    -Jabhsay3487
        message: "How to write denormalized data in Firebase"
        user: so:3648524
        username: "legolandbridge"
    -Jabhsay3591
        message: "Great question."
        user: so:209103
        username: "Frank van Puffelen"
    -Jabhsay3595
        message: "I know of three approaches, which I'll list below."
        user: so:209103
        username: "Frank van Puffelen"
So we store the primary copy of the user's profile in the users node. In the message we store the uid (so:209103 and so:3648524) so that we can look up the user. But we also store the user's name in the messages, so that we don't have to look this up for each user when we want to display a list of messages.
So now what happens when I go to the Profile page on the chat service and change my name from "Frank van Puffelen" to just "puf".
Transactional update
Performing a transactional update is the one that probably pops into most developers' minds initially. We always want the username in messages to match the name in the corresponding profile.
Using multipath writes (added on 20150925)
Since Firebase 2.3 (for JavaScript) and 2.4 (for Android and iOS), you can achieve atomic updates quite easily by using a single multi-path update:
function renameUser(ref, uid, name) {
    var updates = {}; // all paths to be updated and their new values
    updates['users/'+uid+'/name'] = name;
    var query = ref.child('messages').orderByChild('user').equalTo(uid);
    query.once('value', function(snapshot) {
        snapshot.forEach(function(messageSnapshot) {
            updates['messages/'+messageSnapshot.key()+'/username'] = name;
        });
        ref.update(updates);
    });
}
This will send a single update command to Firebase that updates the user's name in their profile and in each message.
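For example, with the sample data above, the rename could be triggered like this (the ref URL and uid are taken from the code further down in this answer):

var ref = new Firebase('https://mychat.firebaseio.com/');
renameUser(ref, 'so:209103', 'puf');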
Previous atomic approach
So when the user changes the name in their profile:
var ref = new Firebase('https://mychat.firebaseio.com/');
var uid = "so:209103";
var nameInProfileRef = ref.child('users').child(uid).child('name');
nameInProfileRef.transaction(function(currentName) {
    return "puf";
}, function(error, committed, snapshot) {
    if (error) {
        console.log('Transaction failed abnormally!', error);
    } else if (!committed) {
        console.log('Transaction aborted by our code.');
    } else {
        console.log('Name updated in profile, now update it in the messages');
        var query = ref.child('messages').orderByChild('user').equalTo(uid);
        query.on('child_added', function(messageSnapshot) {
            messageSnapshot.ref().update({ username: "puf" });
        });
    }
    console.log("Wilma's data: ", snapshot.val());
}, false /* don't apply the change locally */);
Pretty involved, and the astute reader will notice that I cheat in the handling of the messages. The first cheat is that I never call off() for the listener, but I also don't use a transaction.
If we want to securely do this type of operation from the client, we'd need:
security rules that ensure the names in both places match. But the rules need to allow enough flexibility for them to temporarily be different while we're changing the name. So this turns into a pretty painful two-phase commit scheme.
change all username fields for messages by so:209103 to null (some magic value)
change the name of user so:209103 to 'puf'
change the username in every message by so:209103 that is null to puf.
that query requires an and of two conditions, which Firebase queries don't support. So we'll end up with an extra property uid_plus_name (with value so:209103_puf) that we can query on.
client-side code that handles all these transitions transactionally.
This type of approach makes my head hurt. And usually that means that I'm doing something wrong. But even if it's the right approach, with a head that hurts I'm way more likely to make coding mistakes. So I prefer to look for a simpler solution.
Eventual consistency
Update (20150925): Firebase released a feature to allow atomic writes to multiple paths. This works similar to approach below, but with a single command. See the updated section above to read how this works.
The second approach depends on splitting the user action ("I want to change my name to 'puf'") from the implications of that action ("We need to update the name in profile so:209103 and in every message that has user = so:209103").
I'd handle the rename in a script that we run on a server. The main method would be something like this:
function renameUser(ref, uid, name) {
    ref.child('users').child(uid).update({ name: name });
    var query = ref.child('messages').orderByChild('user').equalTo(uid);
    query.once('value', function(snapshot) {
        snapshot.forEach(function(messageSnapshot) {
            messageSnapshot.ref().update({ username: name });
        });
    });
}
Once again I take a few shortcuts here, such as using once('value' (which is in general a bad idea for optimal performance with Firebase). But overall the approach is simpler, at the cost of not having all data completely updated at the same time. But eventually the messages will all be updated to match the new value.
Not caring
The third approach is the simplest of all: in many cases you don't really have to update the duplicated data at all. In the example we've used here, you could say that each message recorded the name as I used it at that time. I didn't change my name until just now, so it makes sense that older messages show the name I used at that time. This applies in many cases where the secondary data is transactional in nature. It doesn't apply everywhere of course, but where it applies "not caring" is the simplest approach of all.
Summary
While the above are just broad descriptions of how you could solve this problem and they are definitely not complete, I find that each time I need to fan out duplicate data it comes back to one of these basic approaches.
To add to Frank's great reply: I implemented the eventual consistency approach with a set of Firebase Cloud Functions. The functions get triggered whenever a primary value (e.g. a user's name) gets changed, and then propagate the change to the denormalized fields.
It is not as fast as a transaction, but for many cases it does not need to be.
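As a rough sketch of what such a function might look like, using the users/messages layout from Frank's example; the function name and trigger path here are illustrative, not the exact code the poster used:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fan out a changed user name to the denormalized username field on each message.
exports.propagateUserName = functions.database
    .ref('/users/{uid}/name')
    .onUpdate((change, context) => {
        const newName = change.after.val();
        const uid = context.params.uid;
        return admin.database().ref('messages')
            .orderByChild('user').equalTo(uid)
            .once('value')
            .then(snapshot => {
                const updates = {};
                snapshot.forEach(child => {
                    updates[child.key + '/username'] = newName;
                });
                // single multi-path update, relative to /messages
                return admin.database().ref('messages').update(updates);
            });
    });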
Hi, I am building an app using Meteor and need to update my email address. I am using the Meteor accounts package.
My form passes an email value into an accountDetails object, which I will pass into a method to update my profile (including my email):
Meteor.users.update({ _id: this.userId }, {
    $set: {
        'emails.$.address': accountsDetail.email
    }
});
This gives me the error:
Exception while invoking method 'saveAccountInfo' MongoError: The positional operator did not find the match needed from the query. Unexpanded update: emails.$.address
Here is my user schema:
{
    "_id": "12345",
    "emails": [
        {
            "address": "abc123@gmail.com",
            "verified": false
        }
    ]
}
Can someone help? Thank you in advance!
If you're sure the user has one address, which should be the case, you can use emails.0.address instead of emails.$.address.
This should work for nearly all use cases. The exception is when there are many emails associated with a user. In that case:
If you are on the server, and only on the server, you can use the positional operator to update a specific email when there are multiple addresses. You then need to specify the current email in the query portion of the update, i.e.: {_id: this.userId, 'emails.address': <current address>}
The $ positional update operator is not currently available on the mongo client in Meteor.
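A minimal sketch of the first suggestion, reusing the names from the question (it assumes the user has exactly one address):

// Update the first (and only) address without the positional operator
Meteor.users.update({ _id: this.userId }, {
    $set: { 'emails.0.address': accountsDetail.email }
});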
As each user is able to have multiple addresses (it's an array - see http://docs.meteor.com/#/full/meteor_users for details), you need to specify which entry you want to update (in this case the key is the address itself):
Meteor.users.update({ _id: this.userId, "emails.address": "me@domain.com" }, {
    $set: { 'emails.$.address': accountsDetail.email }
});
If every user only has one email, you could also think about dropping this one and inserting the new one. See http://docs.mongodb.org/manual/reference/operator/update/pop/ for details.
Hope this helps.
Regards,
René
I'm not able to use the node server debugger so I'm posting here to see if I can get a nudge in the right direction.
I am trying to allow multiple users to edit documents created by any of the users within their specific company. My code is below. Any help would be appreciated.
(Server)
ComponentsCollection.allow({
    // Passing in the user object (has a profile object {company: "1234"})
    // Passing in the document (has a companyId field that is equal to "1234")
    update: function(userObject, components) {
        return ownsDocument(userObject, components);
    }
});
(Server)
// check to ensure user editing document created/owned by the company
ownsDocument = function(userObject, doc) {
    return userObject.profile.company === doc.companyId;
}
The error I'm getting is: Exception while invoking method '/components/update' TypeError: Cannot read property 'company' of undefined
I'm trying to be as secure as possible, though am doing some checks before presenting any data to the user, so I'm not sure if this additional check is necessary. Any advice on security for allowing multiple users to edit documents created by the company would be awesome. Thanks in advance. -Chris
Update (solution):
// check that the userId specified owns the documents
ownsDocument = function(userId, doc) {
    // Gets the user from the userId being passed in
    var userObject = Meteor.users.findOne(userId);
    // Checks whether the user is associated with the company that created the document being modified
    // Returns true/false respectively
    return doc.companyId === userObject.profile.companyId;
}
Looking at the docs, it looks like the first argument to the allow/deny functions is a user ID, not a user document. So you'll have to do Meteor.users.findOne(userId) to get to the document first.
Do keep in mind that users can write to their own profile subdocument, so if you don't disable that, users will be able to change their own company, allowing them to edit any post. You should move company outside of profile.
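For illustration, a hedged sketch of locking down profile writes until company has been moved out of it (the deny API is from the Meteor docs; adjust the rule to your own needs):

// Prevent clients from rewriting their own profile (and thus profile.company)
Meteor.users.deny({
    update: function (userId, user, fieldNames) {
        return fieldNames.indexOf('profile') !== -1;
    }
});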
(If you can't use a proper debugger, old-fashioned console.log still works. Adding console.log(userObject) to ownsDocument probably would have revealed the solution.)
I'm using autosubscribe to get a list of the 50 latest chat documents in minimongo. As more messages are posted, the older messages are removed from minimongo by autosubscribe. How can I get autosubscribe to not remove certain messages that I mark as active?
I know that I can just manually separately subscribe to a list of "active" messages but that seems unnecessarily laborious. Thanks.
Edit: the active marking is client side only; each user gets to choose the messages that he cares about, so it's something ephemeral. The user marks a message as the one he's replying to, so it shouldn't suddenly be removed.
You need to sort on the time (_id captures the order of insertion, hence time) as well as on the status, both in descending order.
Server code:
Meteor.publish("messages", function () {
return Messages.find({}, {sort: {active: -1, _id:-1}, limit: 50});
});
In the publish function, sort on status.
Meteor.publish("messages", function () {
return Messages.find({}, {sort: {status: 1}, limit: 50});
});
Unless your implementation is limited to a single user being able to mark a line active, the marking of the chat-line document needs to use the active user's id.
This sadly leads to the need for a separate subscription, even if that 'seems unnecessarily laborious'.
Another 'laborious' way would be to make a local client-only collection holding a copy of the selected active messages.
Per client, maintain a session variable containing an array of marked doc IDs: Session.set('markedMessages', matchedDocs)
Within your publish function, use an $in clause that matches the doc ids from that session variable (passed in as a subscription argument, since Session isn't available on the server), combined with an $or clause to leverage your existing query and limit/slice.
Meteor.publish("markedMessages", function () {
Messages.find({$or: [{ your_existing_query_goes_here },
{_id: { $in: Session.get('markedMessages')}} ] }).fetch()
})
;
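On the client, the session variable from above could then be wired to that publication roughly like this (a sketch, not tested code; on older Meteor versions Tracker.autorun was Deps.autorun):

// Re-subscribe reactively whenever the set of marked messages changes
Tracker.autorun(function () {
    Meteor.subscribe('markedMessages', Session.get('markedMessages') || []);
});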
Note: within your handlebars template, compare the message id against your markedMessages Session variable to identify whether the message was marked by the user.