Here's the relevant code:
var Results = mongoose.model('Results', resultsSchema);
var results_array = [];
_.each(matches, function(match) {
  var results = new Results({
    id: match.match_id,
    ... // more attributes
  });
  results_array.push(results);
});
callback(results_array);
});
}
], function(results_array) {
  results_array.insert(function(err) {
    // error handling
Naturally, I get a "no method found" error for results_array. However, I'm not sure what else to call the method on.
In other functions I'm passing through the equivalent of the results variable here, which is a mongoose object and has the insert method available.
How can I insert an array of documents here?
Edit
function(results_array) {
  async.eachLimit(results_array, 20, function(result, callback) {
    result.save(function(err) {
      callback(err);
    });
  }, function(err) {
    if (err) {
      if (err.code == 11000) {
        return res.status(409);
      }
      return next(err);
    }
    res.status(200).end();
  });
});
So what's happening:
When I clear the collection, this works fine.
However when I resend this request I never get a response.
This is happening because my schema does not allow duplicates coming in from the JSON response. So when I resend the request, it gets the same data as the first request and thus responds with an error. I believe this is what status code 409 deals with.
Is there a typo somewhere in my implementation?
Edit 2
Error code coming out:
{ [MongoError: insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.results.$_id_ dup key: { : 1931559 }]
  name: 'MongoError',
  code: 11000,
  err: 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.results.$_id_ dup key: { : 1931559 }' }
So this is as expected.
Mongo is responding with an E11000 error, complaining that this is a duplicate key.
Edit 3
if (err.code == 11000) {
  return res.status(409).end();
}
This seems to have fixed the problem. Is this a band-aid fix though?
You seem to be trying to insert several documents at once here, so you actually have a few options.
Firstly, there is no .insert() method in mongoose, as it is replaced by other wrappers such as .save() and .create(). The most basic approach here is to just call "save" on each document you have just created, also employing the async library to implement some flow control so everything doesn't just queue up:
async.eachLimit(results_array, 20, function(result, callback) {
  result.save(function(err) {
    callback(err);
  });
}, function(err) {
  // process when complete or on error
});
Another thing here is that .create() can just take a list of objects as its arguments and simply inserts each one as the document is created:
Results.create(results_array,function(err) {
});
That would actually work with "raw" objects as well though, as they are essentially all cast as a mongoose document first. You can ask for the documents back as additional arguments in the callback signature, but constructing that is likely overkill.
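For illustration, a hedged sketch of that callback form (older Mongoose versions spread the created documents as extra callback arguments; newer versions pass a single array instead, so check your version):
Results.create(results_array, function(err /* , doc1, doc2, ... */) {
  if (err) {
    // a duplicate key would surface here as err.code === 11000
    return console.error(err);
  }
  // at this point every object has been cast, validated and inserted
});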
Either way those shake out, the "async" form will process them in parallel and the "create" form in sequence, but both effectively issue one "insert" to the database for each document that is created.
For true Bulk functionality you presently need to address the underlying driver methods, and the best place is with the Bulk Operations API:
mongoose.connection.on("open", function(err, conn) {

  var bulk = Results.collection.initializeUnorderedBulkOp();
  var count = 0;

  async.eachSeries(results_array, function(result, callback) {
    bulk.insert(result);
    count++;

    if (count % 1000 == 0) {
      bulk.execute(function(err, response) {
        // maybe check response
        bulk = Results.collection.initializeUnorderedBulkOp();
        callback(err);
      });
    } else {
      callback();
    }
  }, function(err) {
    // called when done
    // Check if there are still writes queued
    if (count % 1000 != 0) {
      bulk.execute(function(err, response) {
        // maybe check response
      });
    }
  });

});
Again the array here is raw objects rather than those cast as a mongoose document. There is no validation or other mongoose schema logic implemented here as this is just a basic driver method and does not know about such things.
While the array is processed in series, the above shows that a write operation will only actually be sent to the server once every 1000 entries processed or when the end is reached. So this truly does send everything to the server at once.
Unordered operations means that the err would normally not be set but rather the "response" document would contain any errors that might have occurred. If you want this to fail on the first error then it would be .initializeOrderedBulkOp() instead.
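As a hedged sketch of inspecting that response (the BulkWriteResult in the node driver exposes the failed operations; treat the exact accessors as version-dependent):
bulk.execute(function(err, response) {
  // with an unordered bulk op, failed writes are reported in the result
  // rather than aborting the batch
  var writeErrors = response.getWriteErrors();
  writeErrors.forEach(function(writeError) {
    console.log("op %d failed with code %d: %s",
      writeError.index, writeError.code, writeError.errmsg);
  });
});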
The care to take here is that you must be sure a connection is open before accessing these methods in this way. Mongoose looks after the connection with its own methods, so where a method such as .save() is reached in your code before the actual connection is made to the database, it is "queued" in a sense, awaiting this event.
So either make sure that some other "mongoose" operation has completed first, or otherwise ensure that your application logic works within such a case where the connection is sure to be made. This is simulated in the example above by placing the code within the "connection open" event.
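Equivalently, as a minimal sketch (assuming a local test database and a hypothetical runBulkInsert() wrapper around the bulk logic above), you can defer the work to the connect callback instead of the "open" event:
mongoose.connect("mongodb://localhost/test", function(err) {
  if (err) throw err;
  // the driver-level collection is only safe to touch from here on
  runBulkInsert(results_array); // hypothetical wrapper around the bulk logic above
});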
It depends on what you really want to do. Each case has its uses, with the last of course being the fastest possible way to do this, as there are limited "write" and "return result" conversations going back and forth with the server.
Related
Could you please help me with solving the following problem: I would like to get values from the array returned by the socket which have similar IDs and pass them to a function.
First Received Request
{
  "ID": "4567132",
  "GroupType": "2",
  "Name": "John Chris",
  "SocialID": "68799",
  "SecurityID": "18799-er7ree-781347a-71237n"
}
Second Received Request
{
  "ID": "4567438",
  "GroupType": "2",
  "SocialID": "68799",
  "SecurityID": "68789-4d37er-98c5347-e05d9b"
}
I would like to get the following expected result, combined from the first and second, while taking into consideration other requests also coming in from the API.
{
  "GroupType": "2",
  "SocialID": "68799",
  "PublicSecurityID": "18799-er7ree-781347a-71237n",
  "PrivateSecurityID": "68789-4d37er-98c5347-e05d9b"
}
What makes things complicated is that the first and second are received at the same time, and there may be other requests received by the socket. How can I group only those incoming messages that have the same SocialID? Also, how can I wait for all the similar requests from the socket and only execute once there are no more requests?
Here is the code for receiving the socket
ReceiveSocket.on("socketMessage", (returnedData) => {
// code here
});
I'm not sure how to approach the problem logically; could you explain and give a solution for handling it? Thanks!
What about something like this:
ReceiveSocket.on('socketMessage', returnedData => {
  // objectsBySocialID always starts empty to make sure that
  // sendRequest() only receives new data. Notice that all previous
  // data was already handled when it arrived, just like the
  // current batch of data is being handled now.
  const objectsBySocialID = {};
  returnedData.forEach(obj => {
    // If there's already one or more objects with the SocialID of the
    // current object, then just add it to the array that contains the
    // objects that have that SocialID.
    if (objectsBySocialID[obj.SocialID]) {
      objectsBySocialID[obj.SocialID].push(obj);
    }
    // Otherwise, initialize the array that will contain objects with
    // this object's specific SocialID and assign it to the
    // corresponding key.
    else {
      objectsBySocialID[obj.SocialID] = [obj];
    }
  });
  // objectsBySocialID only contains the latest changes; hand the
  // grouped batch off once the whole array has been processed.
  sendRequest(objectsBySocialID);
});
Using the data from your example, objectsBySocialID should look like this:
{
  '68799': [
    {
      'ID': '4567132',
      'GroupType': '2',
      'Name': 'John Chris',
      'SocialID': '68799',
      'SecurityID': '18799-er7ree-781347a-71237n',
    },
    {
      'ID': '4567438',
      'GroupType': '2',
      'SocialID': '68799',
      'SecurityID': '68789-4d37er-98c5347-e05d9b',
    },
    {
      'GroupType': '2',
      'SocialID': '68799',
      'PublicSecurityID': '18799-er7ree-781347a-71237n',
      'PrivateSecurityID': '68789-4d37er-98c5347-e05d9b',
    },
  ],
}
and when you're done collecting and grouping data, you should be able to pass objectsBySocialID as an argument to the function that deals with the grouped data (sendRequest() in this example).
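On the "wait until there are no more requests" part of the question, one common approach is a short quiet-period (debounce) timer per SocialID: restart the timer whenever a message for that ID arrives, and flush the group only when the timer fires. A minimal sketch, assuming a 500 ms window and the same sendRequest() consumer:
const groups = {};  // SocialID -> collected messages
const timers = {};  // SocialID -> pending flush timer

function collect(obj) {
  (groups[obj.SocialID] = groups[obj.SocialID] || []).push(obj);
  clearTimeout(timers[obj.SocialID]);  // a new message restarts the quiet period
  timers[obj.SocialID] = setTimeout(() => {
    sendRequest({ [obj.SocialID]: groups[obj.SocialID] });  // flush one group
    delete groups[obj.SocialID];
    delete timers[obj.SocialID];
  }, 500);  // flush after 500 ms with no new messages for this SocialID
}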
I'm building a web hook that will receive POST requests from Facebook. In each request, there's a facebookId field that will be used to insert a new record in database. facebookId should be unique in the database (meaning no two records should have the same facebookId).
The prototype code is something like this
postRequestHandler(req) {
  const facebookId = req.body.facebookId;
  if (!Meteor.users.findOne({ facebookId })) {
    Meteor.users.insert({
      facebookId,
      // some other fields
    });
  }
}
The problem is, sometimes there are many requests (yes, they have different meanings) containing the same new facebookId (one that does not yet exist in the database), and they arrive almost at the same time. This makes the !Meteor.users.findOne({ facebookId }) check fail, and multiple records with the same facebookId field get inserted into the database. How do I prevent this?
You should create a unique index on your facebookId field so MongoDB guarantees uniqueness across your data, and then add some exception handling code around the callback passed to your insert call (see the Meteor documentation). You will need to judge, based on your business requirements, what the right exception handling code should look like.
Meteor.users.insert({
  facebookId,
  // some other fields
}, function(error, id) {
  if (error) {
    /* add exception handling code here, e.g. return an error message to the client */
  } else {
    /* add code for the successful case here; 'id' will be your newly inserted document's '_id' */
  }
});
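For the index itself, a hedged sketch of server startup code (Meteor has historically exposed the underlying driver's ensureIndex via _ensureIndex; newer versions use rawCollection().createIndex() instead, so adjust for your version):
// server-only startup code: have MongoDB itself enforce uniqueness
Meteor.startup(function () {
  Meteor.users._ensureIndex({ facebookId: 1 }, { unique: true });
});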
I have two models defined like this:
var OrganizationSchema = new mongoose.Schema({
  users: [mongoose.Schema.Types.ObjectId]
});
and
var UserSchema = new mongoose.Schema({
  organizations: [mongoose.Schema.Types.ObjectId]
});
When a user wants to join an organization I need to update both the organization collection and the user collection. What is the best way to achieve this? Is it worth considering the case where one update request fails? I'm currently doing something like this (where organization is a document instance of the model Organization):
User.findByIdAndUpdate(req.userSession.userId, { $push: { organizations: organization.id } }, function (err) {
  if (err) {
    // server error
    console.log(err);
  } else {
    organization.users.push(req.userSession.userId);
    organization.save(function (err) {
      if (err) {
        // server error, we need to cancel
        User.findByIdAndUpdate(req.userSession.userId, { $pull: { organizations: organization.id } }, function (err) {
          if (err) {
            // we got a problem: one collection updated and not the other!!
            console.log(err);
          }
        });
      } else {
        // success
      }
    });
  }
});
The problem is: if my second update fails, I will end up with one collection updated and not the other. Is there a way to make sure they are both updated?
Well, firstly, I would steer clear of that design. I would either reference or embed users in organizations or the other way around, not both at the same time, so I wouldn't have problems like this (which happen every time you duplicate data).
MongoDB doesn't support updating multiple collections atomically, and it has no transactions. So you are left to manage this in your code.
So yes, if the second update fails, then as you wrote your code, you have to roll back, and if the rollback fails you have to retry until it succeeds (though probably with exponential backoff). Keep in mind that this might interfere with other requests (another user trying to save the same thing simultaneously). To handle that you have to give a unique identifier to each entry in the array.
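As a hedged sketch of making the two writes at least idempotent (so a retry never double-inserts), $addToSet can replace $push on both sides; the Organization model name is an assumption here:
// $addToSet never adds the same ObjectId twice, so retries are harmless
User.findByIdAndUpdate(req.userSession.userId,
  { $addToSet: { organizations: organization.id } },
  function (err) {
    if (err) return console.log(err);
    Organization.findByIdAndUpdate(organization.id,
      { $addToSet: { users: req.userSession.userId } },
      function (err) {
        if (err) {
          // roll back the first write here, retrying with backoff if needed
        }
      });
  });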
I am very new to MongoDB and have a basic question that I am having trouble with. How do I get the ID field of a document that has already been created? I need the ID so I can update/add a new field to the document.
// newProfile is an object; one string it holds is called school
if (Schools.find({name: newProfile.school}).fetch().length != 1) {
  var school = {
    name: newProfile.school
  };
  Meteor.call('newSchool', school);

  // Method 1 (doesn't work)
  var schoolDoc = Schools.findOne({name: newProfile.school});
  Schools.update({_id: schoolDoc._id}, {$set: {enrolledStudents: Meteor.user()}});

  // Method 2?
  //Schools.update(_id: <what goes here?>, {$push: {enrolledStudents: Meteor.user()}});
} else {
  //Schools.update... <add users to an existing school>
}
I create a new school document if the listed school does not already exist. Schools need to hold an array/list of students (this is where i am having trouble). How do I add students to a NEW field (called enrolledStudents)?
Thanks!
I'm having some trouble understanding exactly what you're trying to do. Here's my analysis and understanding so far with a couple pointers thrown in:
if (Schools.find({name: newProfile.school}).fetch().length != 1) {
this would be more efficient:
if (Schools.find({name: newProfile.school}).count() != 1) {
Meteor.call('newSchool', school);
Not sure what you're doing here, but note that this will run asynchronously, meaning that by the time the rest of this block of code has executed, chances are this Meteor.call() function has not completed on the server side.
//Method 1 (doesn't work)
var schoolDoc = Schools.findOne({name: newProfile.school});
Schools.update({_id: schoolDoc._id}, {$set: {enrolledStudents: Meteor.user()}});
Judging by the if statement at the top of your code, there may be more than one school with this name in the database, so I'm unsure whether the schoolDoc variable holds the record you're after.
I believe you are having trouble because of the asynchronous nature of Meteor.call on the client.
Try doing something like this:
// include on both server and client
Meteor.methods({
  newSchool: function (school) {
    var newSchoolId,
        currentUser = Meteor.user();
    if (!currentUser) throw new Meteor.Error(403, 'Access denied');
    // add some check here using the Meteor check/match function to ensure 'school'
    // contains proper data
    try {
      school.enrolledStudents = [currentUser._id];
      newSchoolId = Schools.insert(school);
      return newSchoolId;
    } catch (ex) {
      // handle appropriately
    }
  }
});
// on client
var schoolExists = false;
if (Schools.findOne({name: newProfile.school})) {
  schoolExists = true;
}
if (!schoolExists) {
  var school = {
    name: newProfile.school
  };
  Meteor.call('newSchool', school, function (err, result) {
    if (err) {
      alert('An error occurred...');
    } else {
      // result is now the _id of the newly inserted record
    }
  });
} else {
  // the school already exists; add the current user to it instead
}
Including the method on both the client and the server allows Meteor to do latency compensation and 'simulate' the insert immediately on the client without waiting for the server round-trip. But you could also just keep the method on the server-side.
You should do the enrolledStudents part on the server to prevent malicious users from messing with your data. Also, you probably don't want to actually be storing the entire user object in the enrolledStudents array, just the user _id.
For what you're trying to do, there is no need to get the _id. When you use update, just switch out the {_id: schoolDoc._id} with your query. Looks like using {name: newProfile.school} will work, assuming that the rest of your code does what you want it to do.
While that would work with the normal Mongo driver, Meteor does not allow a client-side update selector to be anything but _id: it throws a throwIfSelectorIsNotId exception.
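A hedged sketch of the usual workaround: run the name-based update inside a server method (where the selector restriction doesn't apply), storing only the user's _id as suggested above. The method name is hypothetical:
Meteor.methods({
  enrollInSchool: function (schoolName) {  // hypothetical method name
    check(schoolName, String);
    if (!this.userId) throw new Meteor.Error(403, 'Access denied');
    // server-side updates may use any selector, and $addToSet stores
    // just the _id without duplicating it on repeat calls
    Schools.update({ name: schoolName },
      { $addToSet: { enrolledStudents: this.userId } });
  }
});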
First, make sure that you're pulling the right document, and you can try something like this:
var school_id = Schools.findOne({name: newProfile.school})._id;
Schools.update({_id: school_id}, { $push: { enrolledStudents: Meteor.user()}});
If that doesn't work, you'll have to do a little debugging to see what in particular about it isn't working.
So I'm trying to enter data into a mongodb collection with node. As far as I can tell I have access to the collection.
var collection = db.collection("whatsGoingOnEvents");
if (collection) {
  console.log("hitting stream");
  var stream = collection.find({time: parsedObject.time, endTime: parsedObject.endTime, lon: parsedObject.lon, lat: parsedObject.lat}).stream();
  console.log(stream);
  stream.on("data", function(data) {
    console.log("data");
    console.log(data);
    if (!data) {
      collection.insert(parsedObject);
      console.log("hitting insert");
    }
  });
  stream.on("end", function() {
    // do something
  });
}
parsedObject may or may not have all of those fields; should that matter? I thought that if a field was not there, then collection.find() would just be looking for time to be "undefined", which is still technically a value.
I never hit console.log("data"), so I never insert documents. I've been trying to follow this link, and so am not sure why the insert is not happening. I know that nothing is being added because db.collection.stats() tells me the size of the collection is 0.
Oh also, this is what I'm using to connect to Mongo-
var mongo = require('mongodb').MongoClient;
Edit
I tried the answer below; it resulted in this error:
lib/mongodb/connection/server.js:481
        throw err;
        ^
Error: Cannot use a writeConcern without a provided callback
    at insertAll (/Users/psanker/Google Drive/Coding/Javascript/WhatsGoingOn/node_modules/mongodb/lib/mongodb/collection.js:332:11)
    at Collection.insert (/Users/psanker/Google Drive/Coding/Javascript/WhatsGoingOn/node_modules/mongodb/lib/mongodb/collection.js:91:3)
The above occurred because I hadn't added a callback to the insert.
If your query doesn't match any records (which would seem logical, given that you write that the collection size is 0), the data event handler will never get called (because that will only be called when there's an actual result).
I think you're better off with using findOne and a regular callback:
collection.findOne({ params }, function(err, result) {
  if (err)
    throw err;
  if (result === null) {
    collection.insert(parsedObject, { w: 0 });
  }
});
Or even an upsert.
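For completeness, a hedged sketch of that upsert form (field names taken from the question; $setOnInsert only writes the document when no match exists, so the separate findOne round trip disappears):
collection.update(
  { time: parsedObject.time, endTime: parsedObject.endTime,
    lon: parsedObject.lon, lat: parsedObject.lat },
  { $setOnInsert: parsedObject },  // only applied when the match inserts
  { upsert: true },
  function (err, result) {
    if (err) throw err;  // the write concern requires a callback here
  }
);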