MongoDB - Why is my Users.findOne undefined? - javascript

I'm working with Meteor, trying to find some values in a MongoDB collection.
Here is the code:
var sameLogins = Users.findOne({login: 'a'});
console.log(sameLogins);
But it's returning "undefined".
But the record exists in the collection:
So, can anybody tell me what I'm missing?
Also, in the mongo console everything works fine:
I was looking into the Publish/Subscribe stuff, but I'm still using the autopublish module.
Thank you!

I will leave the answer to this issue for new users having the same problem.
If you're using the autopublish package, you should be aware that it publishes the result of .find() for every collection.
However, Meteor.users.find() will, by default, return only the _id and profile fields, so documents in your client-side Meteor.users collection will have only these two fields.
The easiest workaround for this would be to create your own publication (allUsers, for example) and return the fields you need from it:
Server:
Meteor.publish('allUsers', () => {
  // check for Meteor.userId() is omitted, put it here, if needed
  return Meteor.users.find({}, { fields: { ... } });
});
Don't forget to subscribe to it:
Client:
Meteor.subscribe('allUsers');
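Since subscriptions load asynchronously, the first client render may still see undefined even with the publication in place; a small sketch of checking this is to run the original query in the subscription's onReady callback:
Meteor.subscribe('allUsers', () => {
  // runs once the published documents have reached the client
  console.log(Meteor.users.findOne({login: 'a'}));
});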

Update for Meteor:
Right now, the distinction that matters is between find() and findOne(). Users.find() returns a collection cursor: an object you can iterate over to reach the matching MongoDB documents, not the results themselves, and you must call fetch() on it to actually execute the query and get an array back. Users.findOne(), on the other hand, already returns a single document, or undefined when nothing matches on the client (for example, because the field you're querying on was never published).
Calling fetch on a cursor would look something like:
Users.find({login: 'a'}).fetch()
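As a quick side-by-side sketch of the two calls on the client (assuming the relevant fields are published, as described above):
// find() returns a cursor; fetch() executes it and yields an array of documents
var allMatches = Users.find({login: 'a'}).fetch();

// findOne() already returns a single document object, or undefined if nothing matches
var firstMatch = Users.findOne({login: 'a'});
if (firstMatch) {
  console.log(firstMatch._id);
}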

Related

Best way to batch create if not exists in firestore

I am working with a project where we create a bunch of entries in firestore based on results from an API endpoint we do not control, using a firestore cloud function. The API endpoint returns ids which we use for the document ids, but it does not include any timestamp information. Since we want to include a createdDate in our documents, we are using admin.firestore.Timestamp.now() to set the timestamp of the document.
On subsequent runs of the function, some of the documents will already exist so if we use batch.commit with create, it will fail since some of the documents exist. However, if we use batch.commit with update, we will either not be able to include a timestamp, or the current timestamp will be overwritten. As a final requirement, we do update these documents from a web application and set some properties like a state, so we can't limit the permissions on the documents to disallow update completely.
What would be the best way to achieve this?
I am currently using .create and have removed the batch, but I feel like this is less performant, and I occasionally do get the error Error: 4 DEADLINE_EXCEEDED on the firestore function.
First prize would be a batch that can create or update the documents, but does not edit the createdDate field. I'm also hoping to avoid reading the documents first to save a read, but I'd be happy to add it in if it's the best solution.
Thanks!
Current code is something like this:
const createDocPromise = docRef
  .create(newDoc)
  .then(() => {
    // success, do nothing
  })
  .catch(err => {
    if (
      err.details &&
      err.details.includes('Document already exists')
    ) {
      // doc already exists, ignore error
    } else {
      console.error(`Error creating doc`, err);
    }
  });
This might not be possible with batched writes, as set() will overwrite the existing document, update() will update the timestamp, and create() will throw an error as you've mentioned. One workaround would be to use create() for each document together with Promise.allSettled(), which won't reject (and hit a catch()) when some of the promises fail.
const results = [] // results from the API
const promises = results.map((r) => db.doc(`col/${r.id}`).create(r));
const newDocs = await Promise.allSettled(promises)
// either "fulfilled" or "rejected"
newDocs.forEach((result) => console.log(result.status))
If any document already exists, create() will throw an error and the status for that promise will be "rejected". This way you won't have to read the documents in the first place.
Alternatively, you could store all the IDs in a single document or RTDB and filter out duplicates (this should only cost 1 read per invocation) and then add the data.
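A rough sketch of that alternative (names like col-index/ids and col are made up here, db is assumed to be an initialized admin.firestore() instance, and results is the array from the API as above): read the index document once per invocation, create only the documents whose ids are not yet known, and append the new ids to the index in the same batch.
const indexRef = db.doc('col-index/ids');
const indexSnap = await indexRef.get();
const known = new Set(indexSnap.exists ? indexSnap.data().ids : []);

const fresh = results.filter((r) => !known.has(r.id));
if (fresh.length) {
  const batch = db.batch();
  fresh.forEach((r) =>
    batch.create(db.doc(`col/${r.id}`), {
      ...r,
      createdDate: admin.firestore.Timestamp.now(),
    })
  );
  // keep the index document up to date in the same batch
  batch.set(
    indexRef,
    { ids: admin.firestore.FieldValue.arrayUnion(...fresh.map((r) => r.id)) },
    { merge: true }
  );
  await batch.commit();
}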
Since you prefer to keep the batch and want to avoid reading the documents, a possible solution would be to store the timestamps in an array field. That way you don't overwrite a single createdDate field; instead you keep every value written across the different writes.
Then, when you read one of the documents, you sort this array and take the oldest value: it is the very first timestamp that was saved and corresponds to the document's creation.
With this approach you don't need any extra writes or extra reads.
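A hedged sketch of that idea with the Admin SDK (collection and field names are illustrative, db and results as above): each write appends its own timestamp with arrayUnion, set() with merge keeps the rest of the document updatable inside a batch, and the oldest entry in the array is the creation time.
const batch = db.batch();
results.forEach((r) => {
  batch.set(db.doc(`col/${r.id}`), {
    ...r,
    // serverTimestamp() is not allowed inside arrayUnion, so use Timestamp.now()
    createdDates: admin.firestore.FieldValue.arrayUnion(admin.firestore.Timestamp.now()),
  }, { merge: true });
});
await batch.commit();

// when reading, the earliest element is the creation date:
// const createdDate = doc.get('createdDates').sort((a, b) => a.toMillis() - b.toMillis())[0];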

What do remove() and save() mean in MongoDB/Node.js when initializing a database?

I am new to Node.js and am reading the code of an app. The code below initializes the db by loading some questions into the survey system. I can't understand what remove() and save() mean here, because I can't find any explanation of these two methods. It also seems mongoose isn't used after being connected. Could anyone explain the usage of these methods?
Well, this is my understanding of the code, though I'm not sure it's correct. My TA tells me it should be run before server.js.
/**
* This is a utility script for dropping the questions table, and then
* re-populating it with new questions.
*/
// connect to the database
var mongoose = require('mongoose');
var configDB = require('./config/database.js');
mongoose.connect(configDB.url);
// load the schema for entries in the 'questions' table
var Question = require('./app/models/questions');
// here are the questions we'll load into the database. Field names don't
// quite match with the schema, but we'll be OK.
var questionlist = [
  /*some question*/
];
// drop all data, and if the drop succeeds, run the insertion callback
Question.remove({}, function(err) {
  // count successful inserts, and exit the process once the last insertion
  // finishes (or any insertion fails)
  var received = 0;
  for (var i = 0; i < questionlist.length; ++i) {
    var q = new Question();
    /*some detail about defining q neglected*/
    q.save(function(err, q) {
      if (err) {
        console.error(err);
        process.exit();
      }
      received++;
      if (received == questionlist.length)
        process.exit();
    });
  }
});
To add some additional detail, mongoose is all based on using schemas and working with those to manipulate your data. In a mongodb database, you have collections, and each collection holds different kinds of data. When you're using mongoose, what's happening behind the scenes is that every Schema you work with maps to a mongodb collection. So when you're working with the Question Schema in mongoose land, there's really some Question collection behind the scenes in the actual db that you're working with. You might also have a Users Schema, which would act as an abstraction for some Users collection in the db, or maybe a Products Schema, which again would map to some collection of products behind the scenes in the actual db.
As mentioned previously, when calling remove({}, callback) on the Questions Schema, you're telling mongoose to go find the Questions collection in the db and remove all entries, or documents as they're called in mongodb, that match a certain criteria. You specify that criteria in the object literal that is passed in as the first argument. So if the Questions Schema has some boolean field called correct and you wanted to delete all of the incorrect questions, you could say Question.remove({ correct: false }, callback). Also as mentioned previously, when passing an empty object to remove, you're telling mongoose to remove ALL documents in the Schema, or collection rather. If you're not familiar with callback functions, pretty much the callback function says, "hey, after you finish this async operation, go ahead and do this."
The save() function that is used here is a little different than how save() is used in the official mongodb driver, which is one reason why I don't prefer mongoose. But to explain, pretty much all save is doing here is you're creating this new question, referred to by the q variable, and when you call save() on that question object, you're telling mongoose to take that object and insert it as a new document into your Questions collection behind the scenes. So save here just means insert into the db. If you were using the official mongo driver, it would be db.getCollection('collectionName').insert({/* Object representing new document to insert */}).
And yes, your TA is correct. This code will need to run before your server.js file. Whatever your server code does, I assume it's going to connect to your database.
I would encourage you to look at the mongoose API documentation. Long term though, the official mongodb driver might be your best bet.
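For reference, newer Mongoose releases deprecate Model.remove() in favour of deleteMany(), and the callback style has largely given way to promises. A rough, hypothetical rewrite of the same utility script (same connection config and Question model as above) might look like:
const mongoose = require('mongoose');
const configDB = require('./config/database.js');
const Question = require('./app/models/questions');

const questionlist = [
  /*some question*/
];

async function reload() {
  await mongoose.connect(configDB.url);
  await Question.deleteMany({});           // drop all existing questions
  await Question.insertMany(questionlist); // insert the fresh set in one call
  await mongoose.disconnect();
}

reload().catch((err) => {
  console.error(err);
  process.exit(1);
});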
Mongoose basically maps your MongoDB queries to JavaScript objects using schemas.
remove() receives a selector and a callback function. An empty selector means that all Questions will be affected.
After that, a new Question object is created (I guess you omitted some data being set on it) and then saved back into MongoDB.
You can read more about that in the official documentation:
http://mongoosejs.com/docs/api.html#types-subdocument-js
The remove query is used to remove all documents from a collection, and save is used to create a new document.
As your code shows, every time the script runs it removes all the records from the Question collection and then saves new question documents from the question list.

jQuery Deferred returns only last value in loop

So I'm trying to go through one Firebase database to find entries matching a criterion. I'm therefore using jQuery's deferred object to handle the database calls.
Once I get a return value from this first database, I want to get the user info from a second database for each of those values in the first db. Then the results are added to a JSON array.
So it's:
<search for value, find one>
<<<search other db for oher info>>>
<continue search for outer value>
But this only returns one value - although everything else is running fine (and the console logs all the info correctly).
Here's the code:
function find(searchLocation, profileID) {
  var requestUserData = {
    data: []
  };
  var def = $.Deferred();
  // This will be executed as long as there are elements in the database that match
  // the criteria and that haven't been loaded yet (so it's a simple loop)
  Ref.orderByChild("location").equalTo(searchLocation).on("child_added", function(snapshot) {
    def.resolve(snapshot.val().ID);
  });
  return def.promise();
};
I hope you guys have some ideas on what to do or how I could solve this. Thanks in advance!
Edit: upon further testing I discovered that this problem already exists in the outer loop - so only the first value is being returned. I think this is related to the position of the resolve() call, but I didn't find a way to change this behaviour.
Firebase is a real-time database. The events stream as changes occur at the server. You're attempting to take this real-time model and force it into a CRUD strategy and do a GET operation on the data. A better solution would be to simply update the values in real time as they are modified.
See AngularFire, ReactFire, or BackboneFire for an example of how you can do this with your favorite bindings framework.
To directly answer the question: if you want to retrieve a static snapshot of the data, you want to use a once() callback with a value event, not a real-time stream from child_added:
Ref.orderByChild("location").equalTo(searchLocation).once("value", function(snapshot) {
def.resolve(snapshot.val());
});
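Note that a jQuery deferred can only be resolved once, which is why resolving inside a child_added handler only ever hands back the first value. If the goal is every match, one option (sketched here with the ID field from the question) is to collect the children from the value snapshot and resolve with the whole array:
Ref.orderByChild("location").equalTo(searchLocation).once("value", function(snapshot) {
  var ids = [];
  snapshot.forEach(function(child) {
    ids.push(child.val().ID);
  });
  def.resolve(ids);
});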

Meteor collection trouble

First off, sorry for being a complete JavaScript noob; I am more of a PHP guy and am just testing out the Meteor framework.
I am trying to loop through a collection of objects and add a property from another collection, like so:
Template.host.hosts = function() {
  var hosts = Hosts.find();
  hosts.forEach(function(host) {
    host.lastPing = Pings.findOne({id: host.id}, {sort: {timestamp : -1}});
    // This works fine
    // console.log(host.lastPing.id);
  });
  for (host in hosts) {
    // This results in "TypeError: Cannot read property 'id' of undefined"
    console.log(host.lastPing.id);
  }
  return hosts;
};
I don't understand why the second console.log is not working.
I have tried searching, but I don't know if the problem is specific to the way Meteor handles collections, the way I should be adding properties to a JavaScript object, or something completely unrelated (scope, etc.).
I have simplified my problem to try to understand what is happening; my real problem is obviously looping in a template, as per:
{{#each hosts}}
  {{this.lastPing.id}}
{{/each}}
Thanks
Three things:
MongoDB and Meteor ids are stored in _id rather than id.
In the context of your forEach method, host iterates through the query set returned by Hosts.find(), but it doesn't actually give you access to the documents themselves. Essentially, it's a copy of the information in the MongoDB rather than the document in the database.
The correct (and only) way to update the actual document is by using the Collection.update method:
Hosts.update({_id: host._id}, {$set: {lastPing: Pings.findOne({id: host.id}, {sort: {timestamp : -1}}) }});
(note that you can only update by _id on the client which is why that's what I've used here, whereas you can supply any query on the server.)
The hosts object is a cursor rather than an array. This means that when you use for host in hosts, you're actually iterating through the properties of the cursor object (which are inherited from the prototype) rather than an array of hosts, and none of them has an id property. One way to make this work is to fetch the query set and put it into hosts like this:
var hosts = Hosts.find().fetch();
Alternatively, you can stick with the cursor and use forEach again, although you'll either have to rewind it with hosts.rewind(), or repeat the line above to reset it to the start of the query set.
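Putting that together, a minimal sketch of the helper with point 3 applied (the Pings selector is left exactly as in the question) could be:
Template.host.hosts = function() {
  // fetch() executes the query and returns a plain array, so the added property sticks
  var hosts = Hosts.find().fetch();
  hosts.forEach(function(host) {
    host.lastPing = Pings.findOne({id: host.id}, {sort: {timestamp: -1}});
  });
  return hosts;
};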
Hope that's helpful.

Meteor collection not updating subscription on client

I'm quite new to Meteor and Mongo, and even though I don't want them, I need some relations.
I have a Collection called Feeds and another called UserFeeds where I have a feedid and a userid, and I publish the user feeds on the server like this:
Meteor.publish('feeds', function(){
  return Feeds.find({_id: {$in: _.pluck(UserFeeds.find({user: this.userId}).fetch(), 'feedid')}});
});
I find the user's entries in UserFeeds, fetch them (which returns an array), pluck out only the feedid field, and then find those feeds in the Feeds collection.
And subscribe on the client like this:
Deps.autorun(function(){
  Meteor.subscribe("feeds");
});
The problem is that when I add a new feed and a new userfeed the client doesn't receive the change, but when I refresh the page the new feed does appear.
Any idea what I'm missing here?
Thanks.
I've run into this, too. It turns out publish functions on the server don't re-run reactively: if they return a Collection cursor, as you're doing (and as most publish functions do), then the publish function will run once and Meteor will store the cursor and send down updates only when the contents of the cursor change. The important thing here is that Meteor will not re-run the publish function, nor, therefore, the Collection.find(query), when query changes.
If you want the publish function to re-run, then the way I've done it so far is to set up the publish function to receive an argument. That way the client, whose collections do update reactively, can re-subscribe reactively. The code would look something like:
// client
Meteor.subscribe('user_feeds');
Deps.autorun(function(){
  var allFeeds = UserFeeds.find({user: Meteor.userId()}).fetch();
  var feedIds = _.pluck(allFeeds, 'feedid');
  Meteor.subscribe('feeds', feedIds);
});

// server
Meteor.publish('feeds', function(feedIds) {
  return Feeds.find({_id: {$in: feedIds}});
});
I believe the Meteorite package publish-with-relations is designed to solve this problem, although I haven't used it.
EDIT: I believe the publish function will re-run when the userId changes, which means that you can have a server-side check to make sure the user is logged in before publishing sensitive data.
I think your problem is that .fetch() which you use here…
UserFeeds.find({user:this.userId}).fetch()
…removes the reactivity.
.fetch() returns an array instead of a cursor, and that array won't be reactive.
http://docs.meteor.com/#fetch
try this ...
Meteor.autosubscribe(function(){
  Meteor.subscribe("feeds");
});
and in the Template JS ...
Template.templateName.feeds = function() {
  return Feeds.find(); // or any specific call
};
in the HTML ...
{{#each feeds}}
  do some stuff
{{else}}
  no feed
{{/each}}
You can use the reactive-publish package (I am one of its authors). It allows you to create publish endpoints which depend on the result of another query; in your case, the query on UserFeeds.
Meteor.publish('feeds', function () {
  this.autorun(function (computation) {
    var feeds = _.pluck(UserFeeds.find({user: this.userId}, {fields: {feedid: 1}}).fetch(), 'feedid');
    return Feeds.find({_id: {$in: feeds}});
  });
});
The important part is that you limit the UserFeeds fields to feedid only, to make sure the autorun does not rerun when some other field (one you do not care about) changes in UserFeeds.
