I'm trying to achieve the following result:
Get the tweets made by users whose emails start with 'a'.
var Tweet = mongoose.model('Tweets', new mongoose.Schema({
user_id : String,
text : String,
date : { type: Date, default: Date.now }
})
);
var User = mongoose.model('Users', new mongoose.Schema({
name : String,
email : String,
date : { type: Date, default: Date.now }
})
);
The queries look like:
app.get('/mongodb/tweets/users/get', function(req, res, next) {
var time = process.hrtime();
User.find({email : /^a/}, { _id:1 }, function(err, users) {
var dataArr = [];
for (var o in users) { dataArr.push(users[o]._id); }
Tweet.find({user_id : { $in : dataArr }}, function(err, tweets) {
var diff = process.hrtime(time);
res.send({ seconds : diff[0], nanoseconds : diff[1], result: tweets.length});
});
});
});
I'm aware that I'm doing something wrong, because the performance of these queries is rather poor compared to the equivalent MySQL query.
Tweets : {"seconds":0,"nanoseconds":904058152,"result":4396}
Tweets (MySQL) : {"seconds":0,"nanoseconds":455872373,"result":4368}
I've also tried the populate() method, but that resulted in an even bigger loss in performance.
Any suggestions on how to handle this so it runs faster? I'm looking for clean code that doesn't require workarounds (like the object-to-array conversion). What would be an elegant and correct approach to this kind of problem?
@kevin
Thanks for the index tip, it helped a lot with the performance and brought it to:
{"seconds":0,"nanoseconds":412133579,"result":4396}
Try adding an index to the user_id field. This will improve performance on collections with large numbers of records.
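For reference, a minimal sketch of how that index could be declared at the schema level in Mongoose (field and model names are taken from the question; index: true asks Mongoose to build a secondary index when the model is initialised):
// Sketch: declare a secondary index on user_id so the $in lookup doesn't scan the whole collection
var tweetSchema = new mongoose.Schema({
    user_id : { type: String, index: true },
    text    : String,
    date    : { type: Date, default: Date.now }
});
var Tweet = mongoose.model('Tweets', tweetSchema);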
Related
I'm programming with React, MongoDB, Node.js and Express.js. I have a problem that I cannot solve: I would like to use a dynamic field name with $not on the server. For example, the server gets a column name from the front end and is supposed to return the number of documents where that field is different from an empty string, i.e. ''. I've tried something like this (code below), but it doesn't help.
const query = {};
query[type] = { $not: '' };
User.countDocuments(query, (err, data) => {
if (err) return res.json({ success: false, error: err });
return res.json({ success: true, data: data });
});
You are close; you were probably looking for $ne instead of $not. So changing it to
const query = {};
query[type] = { $ne: '' };
should fix the issue. This would find all documents where the dynamic type field does not equal ''. If you want to do the inverse, i.e. find all documents where the dynamic field equals an empty string, change it to:
query[type] = { $eq: '' };
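Putting it together, a minimal sketch of the counting endpoint with the dynamic field name (the route path and the way type is read from the request are assumptions on my part):
// Sketch: count documents whose dynamically chosen field is not an empty string
app.get('/count/:type', (req, res) => {
    const type = req.params.type;      // assumed: field name supplied by the front end
    const query = {};
    query[type] = { $ne: '' };         // match documents where <type> differs from ''
    User.countDocuments(query, (err, data) => {
        if (err) return res.json({ success: false, error: err });
        return res.json({ success: true, data: data });
    });
});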
I'm new to MongoDB/Mongoose and I'm working with a very large database (more than 25,000 docs). I need to set up different queries: by fields, the first 10 docs, and a single doc by id. The problem is performance - the server response takes about 10-15 seconds.
Please tell me how to configure this so that the server responds quickly.
Does it depend only on the schema settings, or can it also depend on other things, such as the database connection parameters or the query parameters?
P.S. The queries should filter by 'district' and 'locality'.
Thanks for any help!
Here is the schema:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const houseSchema = new Schema({
code: {
type: String,
required: false
},
name: {
type: String,
required: true
},
district: {
type: String,
required: true
},
locality: {
type: String,
required: false
},
recountDate: {
type: Date,
default: Date.now
},
eventDate: {
type: Date,
default: Date.now
},
events: {
type: Array,
default: []
}
});
module.exports = mongoose.model('House', houseSchema);
Connection parameters:
mongoose.connect(
`mongodb+srv://${process.env.MONGO_USER}:${process.env.MONGO_PASSWORD}@cluster0-vuauc.mongodb.net/${process.env.MONGO_DB}?retryWrites=true&w=majority`,
{
useNewUrlParser: true,
useUnifiedTopology: true
}
).then(() => {
console.log('Connection to database established...')
app.listen(5555);
}).catch(err => {
console.log(err);
});
Queries are performed using Relay:
query {
viewer {
allPosts (first: 10) {
edges {
node {
id
code
district
locality
recountDate
eventDate
events
}
}
}
}
}
MongoDB is very fast at executing queries, but it also depends on how you write them. To get the first 10 documents from a collection, sorted in descending order by _id, you need to use limit & sort in your query:
db.collectionName.find({}).limit(10).sort({_id:-1})
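The Mongoose equivalent would look roughly like this (a sketch against the House model from the question; lean() is optional and simply skips hydrating full Mongoose documents, which helps on large result sets):
// Sketch: first 10 documents, newest first by _id
House.find({})
    .sort({ _id: -1 })
    .limit(10)
    .lean()                      // return plain JavaScript objects instead of Mongoose documents
    .exec((err, houses) => {
        if (err) return console.log(err);
        console.log(houses.length);
    });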
Make sure it's not a connection issue. Try running your query from the MongoDB shell:
mongo mongodb+srv://${process.env.MONGO_USER}:${process.env.MONGO_PASSWORD}@cluster0-vuauc.mongodb.net/${process.env.MONGO_DB}?retryWrites=true&w=majority
db.collection.find({condition}).limit(10)
If it responds faster in the MongoDB shell than through Mongoose:
There is a known issue with the Node.js driver: the pure JavaScript BSON serializer it uses is very slow at deserializing BSON to JSON.
Try installing bson-ext.
The bson-ext module is an alternative BSON parser that is written in C++. It delivers better deserialization performance and similar or somewhat better serialization performance to the pure javascript parser.
https://mongodb.github.io/node-mongodb-native/3.5/installation-guide/installation-guide/#bson-ext-module
Use Projections to Return Only Necessary Data
When you need only a subset of fields from documents, you can achieve better performance by returning only the fields you need:
For example, if in your query to the posts collection, you need only the timestamp, title, author, and abstract fields, you would issue the following command:
db.posts.find( {}, { timestamp : 1 , title : 1 , author : 1 , abstract : 1} ).sort( { timestamp : -1 } ).limit(10)
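With Mongoose the same projection can be expressed through select() (a sketch using field names from the House schema above rather than the posts example; 'SomeDistrict' is a placeholder value):
// Sketch: fetch only the fields the client actually needs
House.find({ district: 'SomeDistrict' })
    .select('name district locality recountDate')  // projection: include only these paths
    .sort({ recountDate: -1 })
    .limit(10)
    .lean()
    .exec((err, docs) => {
        if (err) return console.log(err);
        console.log(docs);
    });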
You can read more about query optimization here.
I am working on a MEAN stack application in which I defined a model using the following schema:
var mappingSchema = new mongoose.Schema({
MainName: String,
Addr: String,
Mapping1: [Schema1],
Mappings2: [Schema2]
},
{collection : 'Mappings'}
);
I am displaying all of this data in the UI, and Mapping1 & Mapping2 are shown in two tables where I can edit the values. Once I update the values in a table, I want to save them back to the database. I wrote a put() API where I receive these two updated mappings as an object, but I'm not able to update them in the database. I tried findAndModify() & findOneAndUpdate(), but both failed.
Here are the Schema1 & Schema2:
const Schema1 = new mongoose.Schema({
Name: String,
Variable: String
});
const Schema2 = new mongoose.Schema({
SName: String,
Provider: String
});
and my put api:
.put(function(req, res){
var query = {MainName: req.params.mainname};
var mapp = {Mapping1: req.params.mapping1, Mapping2: req.params.mapping2};
Mappings.findOneAndUpdate(
query,
{$set:mapp},
{},
function(err, object) {
if (err){
console.warn(err.message); // returns error if no matching object found
}else{
console.log(object);
}
});
});
Please suggest the best way to update those two arrays.
UPDATE :
I tried this
var mapp = {'Mapping2': req.params.mapping2};
Mappings.update( query ,
mapp ,
{ },
function (err, object) {
if (err || !object) {
console.log(err);
res.json({
status: 400,
message: "Unable to update" + err
});
} else {
return res.json(object);
}
});
What I got was:
My array of size 3 is saved as a String inside the Mapping2 array.
Please help, I'm badly stuck. :(
From Mongoose's documentation I believe there's no need to use $set. Just pass an object with the properties to update :
Mappings.findOneAndUpdate(
query,
mapp, // Object containing the keys to update
function(err, object) {...}
);
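A fuller sketch of how the route could look with that change (the { new: true } option, the router/path, and reading the mappings from the parsed JSON body instead of URL params are assumptions on my side; arrays tend to arrive intact in a body, while URL params are plain strings):
// Sketch: replace both mapping arrays on the matched document and return the updated doc
router.route('/mappings/:mainname')
    .put(function(req, res) {
        var query = { MainName: req.params.mainname };
        // note: the schema path is 'Mappings2' (with an s), so the update key must match it
        var mapp  = { Mapping1: req.body.mapping1, Mappings2: req.body.mapping2 };
        Mappings.findOneAndUpdate(query, mapp, { new: true }, function(err, doc) {
            if (err) return res.json({ status: 400, message: 'Unable to update: ' + err.message });
            return res.json(doc);
        });
    });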
I want to set a primary key over two fields of a collection in MongoDB through Mongoose. I know how to set a composite unique index in MongoDB itself:
db.yourcollection.ensureIndex( { fieldname1: 1, fieldname2: 1 }, { unique: true } )
but since I'm using Mongoose to handle MongoDB, I don't know how to set a composite primary key from Mongoose.
Update:
I used mySchema.index({ ColorScaleID: 1, UserName: 1}, { unique: true });
See my code:
var mongoose = require('mongoose')
var uristring ='mongodb://localhost/fresh';
var mongoOptions = { db: { safe: true } };
// Connect to Database
mongoose.connect(uristring, mongoOptions, function (err, res) {
if (err) {
console.log ('ERROR connecting to: remote' + uristring + '. ' + err);
} else {
console.log ('Successfully connected to: remote' + uristring);
}
});
var mySchema = mongoose.Schema({
ColorScaleID:String,
UserName:String,
Range1:Number,
})
mySchema.index({ ColorScaleID: 1, UserName: 1}, { unique: true });
var freshtime= mongoose.model("FreshTimeColorScaleInfo",mySchema)
var myVar = new freshtime({
ColorScaleID:'red',
UserName:'tab',
Range1:10
})
myVar.save()
mongoose.connection.close();
When I execute this code for the first time, I see one document {"_id":...,ColorScaleID:'red',UserName:'tab',Range1:10 } in MongoDB's fresh database. When I execute the same code a second time, I see two identical documents:
{"_id":...,ColorScaleID:'red',UserName:'tab',Range1:10 }
{"_id":...,ColorScaleID:'red',UserName:'tab',Range1:10 }
If the composite primary key worked, it shouldn't allow me to insert the same data a second time. What could be the problem?
The way you have defined your schema is correct and will work. What you are probably experiencing is that the database has already been created and that the collection already exists, even though it might be empty. Mongoose won't retrofit the index.
As an experiment, set your database to a DB that does not exist. e.g.:
var uristring ='mongodb://localhost/randomname';
and then try running those two lines against this database and see if you can still insert those two documents.
Then compare the contents of the "system.indexes" collection in each of those databases. You should see that the randomname db has the composite index correctly set.
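One way to verify which indexes actually exist, and to force Mongoose to build the declared ones, is sketched below (the collection name is assumed to be Mongoose's default pluralisation of the model name):
// Sketch: inspect and rebuild indexes
// In the mongo shell: db.freshtimecolorscaleinfos.getIndexes()
freshtime.on('index', function (err) {       // emitted when Mongoose finishes building indexes
    if (err) console.log('Index build failed: ' + err);
});
freshtime.ensureIndexes(function (err) {     // explicitly trigger building the declared indexes
    if (err) console.log(err);
});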
As everybody mentioned, you have to use the Schema's index() method to set a composite unique key.
But this isn't always enough; try restarting MongoDB after that.
Maybe you can try this in your Mongoose schema model:
const AppSchema1 = new Schema({
_id :{appId:String, name:String},
name : String
});
When I update this data, the deslon and deslat part is not inserted into the document.
var locationData = { update_time: new Date() ,
location: [
{curlon: req.payload.loclon , curlat: req.payload.loclat},
{deslon: req.payload.deslon , deslat: req.payload.deslat}
]};
The update:
userLocationModel.update({uid: req.params.accesskey}, locationData, { upsert: true }, function (err, numberAffected, raw) {
//DO SOMETHING
});
I cannot understand why this is happening.
Here is the mongo document that gets inserted. The deslon and deslat fields are missing even when a new document is created:
{
_id: ObjectId("52f876d7dbe6f9ea80344fd4"),
location: [
{
curlon: 160,
curlat: 160,
_id: ObjectId("52f8788578aa340000e51673")
},
{
_id: ObjectId("52f8788578aa340000e51672")
}
],
uid: "testuser6",
update_time: ISODate("2014-02-10T06:58:13.790Z")
}
Also: should I be using a structure like this if the document is updated frequently?
This is the mongoose model:
var userLocationSchema = mongoose.Schema({
uid: String, //same as the user access key
update_time: Date, //time stamp to validate, insert when updating. created by server.
location:[
{
curlon: Number, //current location in latitude and longitude <INDEX>
curlat: Number
},
{
deslon: Number, //destination in latitude and longitude <INDEX>
deslat: Number
}
]
});
I want to update both of the elements; I don't want to insert a new one. But even when I update a non-existent document (i.e. one that results in the creation of a new document), deslon and deslat are missing.
I have a real problem with this structure but, oh well.
Your schema is wrong for doing this; hence also the superfluous _id entries. To do what you want, you need something like this:
var currentSchema = mongoose.Schema({
curlon: Number,
curlat: Number
});
var destSchema = mongoose.Schema({
destlon: Number,
destlat: Number
});
var userLocationSchema = mongoose.Schema({
uid: String,
update_time: Date,
location: [ ]
});
This is how mongoose expects you to do embedded documents. That will allow the update in your form you are using to work.
Also, your upsert logic is off, since you have not included the new uid (the one that was not found) in the update document itself. You should take a look at $setOnInsert in the MongoDB documentation, or just live with setting it on every update.
Actually, I'm just pointing you to how to separate the schemas. As your usage in code stands, location will accept anything under the above definition. See the Mongoose docs on Embedded Documents for more detailed usage.
This will work with your update statement as it stands. However, I would strongly urge you to rethink this schema structure, especially if you intend to do geospatial work with the data. That's out of the scope of this question. Happy googling.
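A sketch of the $setOnInsert variant mentioned above (created_at is a hypothetical field used purely for illustration; values under $setOnInsert are only written when the upsert actually inserts a new document):
// Sketch: $set runs on every update, $setOnInsert only when a new document is created
userLocationModel.update(
    { uid: req.params.accesskey },
    {
        $set: {
            update_time: new Date(),
            location: [
                { curlon: req.payload.loclon, curlat: req.payload.loclat },
                { deslon: req.payload.deslon, deslat: req.payload.deslat }
            ]
        },
        $setOnInsert: { created_at: new Date() }   // hypothetical field, written only on insert
    },
    { upsert: true },
    function (err) { if (err) console.log(err); }
);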
You have to tell MongoDB how to update your data, so add a simple $set to your update data:
var locationData = {
    $set: {
        update_time: new Date(),
        location: [
            {curlon: req.payload.loclon , curlat: req.payload.loclat},
            {deslon: req.payload.deslon , deslat: req.payload.deslat}
        ]
    }
};
EDIT:
If you do not want to replace the location property as a whole, but instead insert a new item into the array, use:
var locationData = {
    $set: {
        update_time: new Date()
    },
    $push: {
        // $push appends a single element; use $each if you need to push several at once
        location: {deslon: req.payload.deslon , deslat: req.payload.deslat}
    }
};
What you should consider is whether it is a good idea to put the current location and the destinations into one array just because they share the same shape (lon/lat). If, for example, there is always exactly one current location and zero to many destinations, you could put the current location into a separate property.
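A sketch of that alternative shape (the names current_location and destinations are mine, not from the question):
// Sketch: one current position as a subdocument, destinations as an embedded array
var userLocationSchema = mongoose.Schema({
    uid: String,
    update_time: Date,
    current_location: {            // exactly one current position
        curlon: Number,
        curlat: Number
    },
    destinations: [{               // zero or more destinations
        deslon: Number,
        deslat: Number
    }]
});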
To modify a specific location within an array, you can address it via its index:
var index = 2, // this is an example
    arrayElement = 'location.' + index,
    locationData = { $set: {} };
locationData.$set[arrayElement] = {deslon: req.payload.deslon , deslat: req.payload.deslat};
userLocationModel.update({uid: req.params.accesskey}, locationData );
Could it be that the initial collection was built with an older version of the schema, i.e. one that had only curlon and curlat? You may then have to update the existing documents to reflect the amended schema with the deslon and deslat properties.
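If that turns out to be the case, a hedged sketch of such a backfill (null is just a placeholder value; multi: true applies the update to every matching document):
// Sketch: add the missing destination fields to documents created with the old schema
userLocationModel.update(
    { 'location.1.deslon': { $exists: false } },                        // second element lacks deslon
    { $set: { 'location.1.deslon': null, 'location.1.deslat': null } },
    { multi: true },
    function (err, numberAffected) { if (err) console.log(err); }
);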