When trying to .update() or .save() a row I'm getting this error:
Unhandled rejection Error: You attempted to save an instance with no primary key,
this is not allowed since it would result in a global update
I tried all four ways the docs use as examples (with and without defining the attributes I want to save), but nothing worked.
This is my actual code for updating:
Sydney.databases.guilds.findOrCreate({
  attributes: ['guildLocale'],
  where: {
    guildID: _guild.id,
  },
  defaults: {
    guildID: _guild.id,
    guildLocale: 'en_US',
    guildPrefix: '?',
  },
}).spread((guild, created) => {
  guild.update({guildLocale: args[1]})
    .then(() => console.log(7))
    .catch((e) => { throw e; });
});
And this is the guild model:
let model = sequelize.define('guild', {
  guildID: {
    field: 'guild_id',
    type: DataTypes.STRING,
    primaryKey: true,
  },
  guildLocale: {
    field: 'guild_locale',
    type: DataTypes.STRING,
  },
  guildPrefix: {
    field: 'guild_prefix',
    type: DataTypes.STRING,
  },
}, {tableName: 'guilds'});
What am I missing here?
I had the same problem. It occurs when you specify which attributes to fetch from the database without including the primary key among them. When you then attempt to save, it throws the following error:
Unhandled rejection Error: You attempted to save an instance with no primary key, this is not allowed since it would result in a global update
So the simple solution is to include the primary key in the attributes like this:
Sydney.databases.guilds.findOrCreate({
  attributes: ['guildLocale', 'guildID'], // include guildID here!!
  where: {
    guildID: _guild.id,
  },
  defaults: {
    guildID: _guild.id,
    guildLocale: 'en_US',
    guildPrefix: '?',
  },
}).spread((guild, created) => {
  guild.update({guildLocale: args[1]})
    .then(() => console.log(7))
    .catch((e) => { throw e; });
});
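Alternatively, if you don't actually need to restrict the selected columns, you can drop the attributes option entirely so Sequelize fetches every column, primary key included. A minimal sketch based on the same call:
Sydney.databases.guilds.findOrCreate({
  where: {
    guildID: _guild.id,
  },
  defaults: {
    guildID: _guild.id,
    guildLocale: 'en_US',
    guildPrefix: '?',
  },
}).spread((guild, created) => guild.update({guildLocale: args[1]}));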
OK, so the problem seems to have been the attributes: ['guildLocale'] option in the .findOrCreate() call. I can't tell why, to be honest; I'm going to read the docs again to be sure, but I'll leave this answer here in case another newbie runs into the same trouble. ;P
I have a Redis database where I store hash (hSet) entries with:
A MAC address called mid in the form XX:XX:XX:XX:XX:XX
A timestamp
A geographical position called position
A text payload called message
I would like to be able to index those objects by the mid, the timestamp, and the position (I will never query based on the message).
This is the code for the schema of the index:
await client.ft.create(
  'idx:cits',
  {
    mid: {
      type: SchemaFieldTypes.TEXT
    },
    timestamp: {
      type: SchemaFieldTypes.NUMERIC,
      sortable: true
    },
    position: {
      type: SchemaFieldTypes.GEO
    }
  },
  {
    ON: 'HASH',
    PREFIX: 'CITS'
  }
)
And I insert new entries using
await client.hSet('CITS:19123123:0:0:00.00:5e:00:53:af', {
  timestamp: 19123123,
  position: '0,0',
  mid: '00:00:5e:00:53:af',
  message: 'payload'
})
I can query by timestamp and position perfectly, both in the JavaScript code using
await client.ft.search('idx:cits', '#timestamp:[100 19123180] #position:[0 0 10 km]')
and in the redis-cli using
FT.SEARCH idx:cits "#timestamp:[100 19123180] #position:[0 0 10 km]"
But it does not work when querying by the mid field.
I have tried both
await client.ft.search('idx:cits', '#mid:"00:00:5e:00:53:af"')
and in redis-cli
FT.SEARCH idx:cits '#mid:"00:00:5e:00:53:af"'
I have also tried swapping the " and ' quotes, as well as removing them entirely, but it made no difference.
I have also tried storing the MAC addresses as XX.XX.XX.XX.XX.XX instead of XX:XX:XX:XX:XX:XX, but that did not work either.
I have also seen that it will not work for 00:00:5e:00:53:af, but it does work for ff.ff.ff.ff.ff.ff.
I'm not sure why I am not able to query by the mid; I would really appreciate it if someone could help me out.
Here is an example script
const { createClient, SchemaFieldTypes } = require('redis')

const client = createClient()

async function start(client) {
  await client.connect()

  try {
    // We only want to index these 3 fields
    await client.ft.create(
      'idx:cits',
      {
        mid: {
          type: SchemaFieldTypes.TEXT
        },
        timestamp: {
          type: SchemaFieldTypes.NUMERIC,
          sortable: true
        },
        position: {
          type: SchemaFieldTypes.GEO
        }
      },
      {
        ON: 'HASH',
        PREFIX: 'CITS'
      }
    )
  } catch (e) {
    if (e.message === 'Index already exists') {
      console.log('Skipping index creation as it already exists.')
    } else {
      console.error(e)
      process.exit(1)
    }
  }

  await client.hSet('CITS:19123123:0:0:00.00.5e.00.53.af', {
    timestamp: 19123123,
    position: '0,0',
    mid: '00.00.5e.00.53.af',
    message: 'payload'
  })

  await client.hSet('CITS:19123123:0.001:0.001:ff.ff.ff.ff.ff.ff', {
    timestamp: 19123123,
    position: '0.000001,0.000001',
    mid: 'ff.ff.ff.ff.ff.ff',
    message: 'payload'
  })

  const results = await client.ft.search('idx:cits', '#mid:00.00.5e.00.53.af')
  console.log(results)

  await client.quit()
}
start(client)
Thank you!
The TEXT type in RediSearch is intended for full-text search—i.e. text meant for humans to read and not meant for computers to parse. So, it removes and doesn't search for things like punctuation—i.e. periods, colons, commas, etc.—and common words like a, and, or the.
You want to use a TAG type instead. You can think of these like a tag cloud on a blog post. A TAG field should contain a comma-separated string of values—the tags. If there's just a single value, it's just a CSV of one.
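For example, a hypothetical hash (not one from the question) carrying two tags would simply store them in a single comma-separated field:
await client.hSet('CITS:example', {
  timestamp: 19123123,
  position: '0,0',
  // two tag values in one field, separated by the default comma separator
  mid: '00:00:5e:00:53:af,ff:ff:ff:ff:ff:ff',
  message: 'payload'
})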
To create an index that uses a TAG field, do this:
await client.ft.create('idx:cits', {
  mid: { type: SchemaFieldTypes.TAG },
  timestamp: { type: SchemaFieldTypes.NUMERIC, sortable: true },
  position: { type: SchemaFieldTypes.GEO }
})
To search a TAG field, use curly braces. Like this:
await client.ft.search('idx:cits', '#mid:{00:00:5e:00:53:af}')
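One caveat, assuming your existing idx:cits index was already created with the TEXT definition from the question: FT.CREATE won't change an existing index, so you'd need to drop it and recreate it with the TAG schema above (plus your original ON and PREFIX options). Roughly:
// Drop the old index definition; by default the underlying hashes are kept.
await client.ft.dropIndex('idx:cits')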
Hope this helps.
I've been using sequelize#3.4.1 for a while and now I need to update sequelize to the latest stable version (3.28.0)
TL;DR: How can I change the validation error's structure, other than by manipulating the 'msg' attribute in the model definition?
The thing is, I use custom validation messages in all of my models, for example:
var Entity = sequelize.define("Entity", {
id: {
type: DataTypes.INTEGER.UNSIGNED,
allowNull: false,
primaryKey: true,
autoIncrement: true,
validate: {
isInt: {
msg: errorMessages.isInt()
}
}
},
name: {
type: DataTypes.STRING(128),
allowNull: false,
unique: {
name: "uniqueNamePerAccount",
args: [["Name", "Account ID"]],
msg: errorMessages.unique()
},
validate: {
notEmpty: {
msg: errorMessages.notEmpty()
}
}
},
accountId: {
type: DataTypes.INTEGER.UNSIGNED,
allowNull: false,
unique: {
name: "uniqueNamePerAccount",
args: [["Name", "Account ID"]],
msg: errorMessages.unique()
},
validate: {
isInt: {
msg: errorMessages.isInt()
}
}
}
}
Until sequelize#3.4.1 I used to add my own attributes to the validation error, so in fact the messages were objects that contain the message and other attributes, like an inner error code.
Here's an example of the messages:
function notEmpty() {
  return {
    innercode: innerErrorCodes.notEmpty,
    data: {},
    message: "cannot be empty"
  };
}

function unique() {
  return {
    innercode: innerErrorCodes.unique,
    data: {},
    message: util.format("There is already an item that contains the exact same values of the following keys")
  };
}
In fact, this is how the Entity error used to look:
{"name":"SequelizeValidationError",
"message":"Validation error: cannot be empty",
"errors":[{"message":"cannot be empty",
"type":"Validation error",
"path":"name",
"value":{"innercode":704,
"data":{},
"message":"cannot be empty"},
"__raw":{"innercode":704,
"data":{},
"message":"cannot be empty"}}]}
So basically Sequelize found my 'msg' attribute and put it into the error's value.
But now, in the latest version, it looks like Sequelize throws a new Error() (instance-validator.js, line 268):
throw new Error(test.msg || 'Validation ' + validatorType + ' failed');
Instead of the error that was thrown in version 3.4.1:
throw test.msg || 'Validation ' + validatorType + ' failed';
Because of that, the error object I set as a custom message shows up as '[object Object]' (most probably after toString()).
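A quick sketch of the difference, using a made-up message object rather than your actual errorMessages helpers:
var customMsg = { innercode: 704, data: {}, message: 'cannot be empty' };

// 3.4.1 behaviour: the object itself was thrown, so its properties survived
// and ended up in the error's value/__raw fields.
try { throw customMsg; } catch (e) { console.log(e.innercode); } // 704

// 3.28.0 behaviour: the object is passed to new Error(), which coerces it
// to a string, so the resulting message is '[object Object]'.
try { throw new Error(customMsg); } catch (e) { console.log(e.message); } // '[object Object]'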
My question is: How can I influence the error's structure and add more attributes other than message?
P.S.:
I have problems with the unique constraint custom message too; it's different because it's not considered a validator.
How can I influence the unique constraint error structure?
Thank you very much!
I'm working on a small project and I have a solution to this problem, but it involves creating a new Schema and referencing it from the old Schema. I would like to avoid this if at all possible, because it would mean spending a couple of hours rewriting code and messing with tests.
The project is a forum site, and three main Schemas comprise it (in addition to cursory Schemas for the forums, notifications, settings, the user, and the user's activities). The Board Schema (which contains a list of all forum sections, if that wasn't apparent) references the Thread Schema so it can get the threads for each Board. My problem is in the Thread Schema.
var ThreadSchema = new mongoose.Schema({
  // ... other unrelated Schema stuff ...
  comments: [{
    created: {
      type: Date,
      default: Date.now
    },
    creator: {
      type: mongoose.Schema.ObjectId,
      required: true,
      ref: 'User'
    },
    content: {
      type: String,
      required: true,
      get: escapeProperty
    },
    likes: [{
      type: mongoose.Schema.ObjectId,
      required: false,
      ref: 'User'
    }],
    liked: {
      type: Boolean,
      default: false
    },
    saved: [{
      type: mongoose.Schema.ObjectId,
      required: false,
      ref: 'User'
    }]
  }]
});
blah blah blah.
I'm trying to pull, for the user's profile, only the comments that that user has posted. The threads were easy, but the comment data is not coming through. The request to the server succeeds, but I don't get any data back. This is what I am trying:
obj.profileComments = function (req, res) {
  var userId = req.params.userId;
  var criteria = {'comments.creator': userId};
  Thread.find(criteria)
    .populate('comments')
    .populate('comments.creator')
    .skip(parseInt(req.query.page) * System.config.settings.perPage)
    .limit(System.config.settings.perPage + 1)
    .exec(function (err, threads) {
      if (err) {
        return json.bad(err, res);
      }
      json.good({
        records: threads
      }, res);
    });
};
This is a controller; json.bad and json.good are helpers that I have created and exported. They are basically wrappers around res.send.
var good = function (obj, res) {
  res.send({
    success: 1,
    res: obj
  });
};
and bad is very similar; it just handles the errors in obj.res.errors and puts them into messages.
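For reference, a rough sketch of what such a bad helper might look like (a hypothetical reconstruction from the description above, not the actual code):
var bad = function (err, res) {
  var messages = [];
  // pull the individual validation messages out of the error object
  for (var key in err.errors) {
    messages.push(err.errors[key].message);
  }
  res.send({
    success: 0,
    messages: messages
  });
};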
So now that that is all out of the way, I'm a little lost on what to do.
Is this something I should try to handle with a method in my Schema? It seems like I might have a bit more luck that way. Thank you for any help.
I'm learning Mongoose. At the moment I have done a few nice things, but I really don't understand exactly how Mongoose manages relationships between Schemas.
So, an easy thing (I hope): I'm doing a classic exercise (by myself, because I cannot find a good tutorial that creates more than 2 Schemas) with 3 Schemas:
User, Post, Comment.
A User can create many Posts;
a User can create many Comments;
a Post belongs to a User;
a Comment belongs to a User and a Post.
I don't think it is anything very hard, right?
At the moment I can manage the relation between User and Post quite well. My unit test returns exactly what I need; at the moment I'm using mongo-relation and I don't know if it is a good idea...
it('User should create a Post', function(done) {
  User.findOne({ email: 'test@email.com' }, function(err, user) {
    var post = {
      title: 'Post title',
      message: 'Post message',
      comments: []
    };
    user.posts.create(post, function(err, user, post) {
      if (err) return done(err);
      user.posts[0].should.equal(post._id);
      post.author.should.equal(user._id);
      // etc...
      done();
    });
  });
});
The problem now is creating a comment.
I cannot create a comment that refers to the Post and to the User together.
I did something like the following and it works, but when I perform a remove, the comment is removed only from the Post and not from the User.
So I think there is something I am missing or still need to study to improve it.
it('User should add a Comment to a Post', function(done) {
  User.findOne({ email: 'test@email.com' }, function(err, user) {
    if (err) return done(err);
    var comment = new Comment({
      author: user._id,
      message: 'Post comment'
    });
    Post.findOne({ title: 'Post title'}, function(err, post) {
      if (err) return done(err);
      post.comments.append(comment, function(err, comment) {
        if (err) return done(err);
        post.save(function(err) {
          if (err) return done(err);
        });
        comment.author.should.equal(user._id);
        post.comments.should.have.length(1);
        // etc...
        done();
      });
    });
  });
});
As you can see, the code is not very nice to look at, but it works fine in terms of creation.
The problem is when I remove a Comment; it seems like something is wrong.
Here is the Model relationship:
// User Schema
var userSchema = new mongoose.Schema({
  // [...],
  posts: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Post' }],
  comments: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Comment' }],
});

// Post Schema
var postSchema = new mongoose.Schema({
  author: { type: mongoose.Schema.ObjectId, ref: 'User', refPath: 'posts' },
  title: String,
  message: String,
  comments: [{ type: mongoose.Schema.ObjectId, ref: 'Comment' }]
});

// Comment Schema
var commentSchema = new mongoose.Schema({
  author: { type: mongoose.Schema.ObjectId, ref: 'User', refPath: 'comments' },
  post: { type: mongoose.Schema.ObjectId, ref: 'Post', refPath: 'comments' },
  message: String
});
I really hope you can help me understand all this.
A simple, good tutorial about it would also be nice.
I think you are misunderstanding subdocuments. The way you have your schemas set up, you are creating references to documents in other collections.
For example if you create a post, in the database it will look like this:
{
  "author": ObjectId(123),
  "title": "post title",
  "message": "post message",
  "comments": [ObjectId(456), ObjectId(789)]
}
Notice the "author" field just contains the ID of the author who created it. It does not actually contain the document itself.
When you read the document from the DB you can use the mongoose 'populate' functionality to also fetch the referred to document.
Ex (with populate):
Post
  .findOne({ title: 'Post title' })
  .populate('author')
  .exec(function(err, post) {
    // this will print out the whole user object
    console.log(post.author)
  });
Ex (no populate):
Post
  .findOne({ title: 'Post title'}, function(err, post) {
    // this will print out the object ID
    console.log(post.author)
  });
Subdocuments:
You can actually nest data in the DB using subdocuments; the schema would look slightly different:
var postSchema = new mongoose.Schema({
  author: userSchema,
  title: String,
  message: String,
  comments: [commentSchema]
});
When saving a post the user document would be nested inside the post:
{
  "author": {
    "name": "user name",
    "email": "test@email.com"
    ...
  },
  "title": "post title",
  "message": "post message",
  "comments": [{
    "message": "test",
    ...
  }, {
    "message": "test",
    ...
  }]
}
Subdocuments can be useful in mongo, but probably not for this case because you would be duplicating all of the user data in every post.
Removing documents
When you issue a Comment.remove(id) the comment will be removed but it will not affect the other documents referring to it. So you will then have a Post and a User with a comment ID that does not exist. You need to manually clean up the comment ID from the other documents. You could use the mongoose pre remove event to do this. http://mongoosejs.com/docs/middleware.html
commentSchema.pre('remove', function (next) {
  // this refers to the document being removed
  var userId = this.author;
  var postId = this.post;
  User.findById(userId, function(err, user) {
    // remove comment id from user.comments here;
    Post.findById(postId, function(err, post) {
      // remove comment id from post.comments;
      next();
    });
  });
});
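To fill in those placeholders, the cleanup could be done with $pull updates, roughly like this (a sketch that assumes the User and Post models from the schemas above):
commentSchema.pre('remove', function (next) {
  var comment = this;
  // Pull the comment id out of the author's comments array...
  User.update(
    { _id: comment.author },
    { $pull: { comments: comment._id } },
    function (err) {
      if (err) { return next(err); }
      // ...and then out of the parent post's comments array.
      Post.update(
        { _id: comment.post },
        { $pull: { comments: comment._id } },
        next
      );
    }
  );
});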
I have the following models in my Sailsjs application with a many-to-many relationship:
event.js:
attributes: {
title : { type: 'string', required: true },
description : { type: 'string', required: true },
location : { type: 'string', required: true },
maxMembers : { type: 'integer', required: true },
currentMembers : { collection: 'user', via: 'eventsAttending', dominant: true },
creator : { model: 'user', required: true },
invitations : { collection: 'invitation', via: 'eventID' },
tags : { collection: 'tag', via: 'taggedEvents', dominant: true },
lat : { type: 'float' },
lon : { type: 'float' },
},
tags.js:
attributes: {
tagName : { type: 'string', unique: true, required: true },
taggedEvents : { collection: 'event', via: 'tags' },
},
Based on the documentation, this relationship looks correct. I have the following method in tag.js that accepts an array of tag strings, and an event id, and is supposed to add or remove the tags that were passed in:
modifyTags: function (tags, eventId) {
  var tagRecords = [];
  _.forEach(tags, function(tag) {
    Tag.findOrCreate({tagName: tag}, {tagName: tag}, function (error, result) {
      tagRecords.push({id: result.id})
    })
  })
  Event.findOneById(eventId).populate('tags').exec(function(error, event){
    console.log(event)
    var currentTags = event.tags;
    console.log(currentTags)
    delete currentTags.add;
    delete currentTags.remove;
    if (currentTags.length > 0) {
      currentTags = _.pluck(currentTags, 'id');
    }
    var modifiedTags = _.pluck(tagRecords, 'id');
    var tagsToAdd = _.difference(modifiedTags, currentTags);
    var tagsToRemove = _.difference(currentTags, modifiedTags);
    console.log('current', currentTags)
    console.log('remove', tagsToRemove)
    console.log('add', tagsToAdd)
    if (tagsToAdd.length > 0) {
      _.forEach(tagsToAdd, function (tag) {
        event.tags.add(tag);
      })
      event.save(console.log)
    }
    if (tagsToRemove.length > 0) {
      _.forEach(tagsToRemove, function (tagId) {
        event.tags.remove(tagId)
      })
      event.save()
    }
  })
}
This is how the method is called from the event model:
afterCreate: function(record, next) {
  Tag.modifyTags(tags, record.id)
  next();
}
When I post to event/create, I get this result: http://pastebin.com/PMiqBbfR.
It looks as if the method call itself is looped over, rather than just the tagsToAdd or tagsToRemove array. What's more confusing is that at the end, in the last log of the event, it looks like the event has the correct tags. When I then post to event/1, however, the tags array is empty. I've also tried saving immediately after each .add(), but I still get similar results.
Ideally, I'd like to loop over both the tagsToAdd and tagsToRemove arrays, modify their ids in the model's collection, and then call .save() once on the model.
I have spent a ton of time trying to debug this, so any help would be greatly appreciated!
There are a few problems with your implementation, but the main issue is that you're treating certain methods (namely .save() and .findOrCreate()) as synchronous, when they are (like all Waterline methods) asynchronous and require a callback. So you're effectively running a bunch of code in parallel and not waiting for it to finish before returning.
Also, since it seems like what you're trying to do is replace the current event tags with this new list, the method you came up with is a bit over-engineered--you don't need to use event.tags.add and event.tags.remove. You can just use plain old update.
So you could probably rewrite the modifyTags method as:
modifyTags: function (tags, eventId, mainCb) {
  // Asynchronously transform the `tags` array into an array of Tag records
  async.map(tags, function(tag, cb) {
    // For each tag, find or create a new record.
    // Since the async.map `cb` argument expects a function with
    // the standard (error, result) node signature, this will add
    // the new (or existing) Tag instance to the resulting array.
    // If an error occurs, async.map will exit early and call the
    // "done()" function below
    Tag.findOrCreate({tagName: tag}, {tagName: tag}, cb);
  }, function done (err, tagRecords) {
    if (err) {return mainCb(err);}
    // Update the event with the new tags
    Event.update({id: eventId}, {tags: tagRecords}).exec(mainCb);
  });
}
See the async documentation for the full details of async.map.
If you wanted to stick with your implementation using .add and .remove, you would still want to use async.map and do the rest of your logic in the done callback. You don't need two .save calls; just run all the .add and .remove code first, then do a single .save(mainCb) to finish it off, as sketched below.
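A rough, untested sketch of that variant, reusing the same lodash helpers and Waterline calls from your original method:
modifyTags: function (tags, eventId, mainCb) {
  async.map(tags, function (tag, cb) {
    Tag.findOrCreate({tagName: tag}, {tagName: tag}, cb);
  }, function done (err, tagRecords) {
    if (err) { return mainCb(err); }
    Event.findOneById(eventId).populate('tags').exec(function (err, event) {
      if (err) { return mainCb(err); }
      var currentTags = _.pluck(event.tags, 'id');
      var modifiedTags = _.pluck(tagRecords, 'id');
      // Queue up every addition and removal first...
      _.forEach(_.difference(modifiedTags, currentTags), function (id) {
        event.tags.add(id);
      });
      _.forEach(_.difference(currentTags, modifiedTags), function (id) {
        event.tags.remove(id);
      });
      // ...then persist everything with a single save.
      event.save(mainCb);
    });
  });
}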
And I don't know what you're trying to accomplish by deleting the .add and .remove methods from currentTags (which is a direct reference to event.tags), but it won't work and will just cause confusion later!