I have db structure like this:
datas
-data1
--name
--city
--date
--logs
---log1
---log2
---log3
-data2
--name
...
Now I realized that putting 'logs' inside the 'data' parent was a huge mistake, because it's a user-generated child that grows fast (a lot of data under it), and it naturally causes a delay when downloading the 'data1' parent.
Normally I am pulling 'data1' with this:
database().ref('datas/' + this.state.dataID).on('value', function(snapshot) {
  // ...
})
I hope I could explain my problem: I basically want to ignore the 'logs' child (I only need name, city, date).
Since the project has already launched and users are actively using it, I need a proper way to handle this.
Is there a way to do this on firebase side ?
I don't think you'll have an easy way out of this one...
Queries are deep by default: they always return the entire subtree.
https://firebase.google.com/docs/firestore/rtdb-vs-firestore#querying
I can see only two options:
Migrate the logs to a different location (if it's really a huge amount of data, you could use something like BigQuery https://cloud.google.com/bigquery, or if they're events, you could store them in Google Analytics; it really depends on the volume and type of logs)
Attach multiple listeners instead of a single one (depending on the amount of entries that might be a viable interim solution):
let response = {
  name: null,
  city: null,
  date: null
}

// one listener per field, so the large 'logs' subtree is never downloaded
const keys = ['name', 'city', 'date']
keys.forEach(key => {
  database().ref(`datas/${this.state.dataID}/${key}`).on('value', snapshot => {
    response[key] = snapshot.val()
  })
})
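If you only need to read the values once rather than keep listening, a similar sketch (same ref layout, using once('value'), which returns a promise) could collect them with Promise.all:
const keys = ['name', 'city', 'date']
Promise.all(
  keys.map(key =>
    database().ref(`datas/${this.state.dataID}/${key}`).once('value')
  )
).then(snapshots => {
  const response = {}
  snapshots.forEach((snapshot, i) => {
    response[keys[i]] = snapshot.val()
  })
  // response now holds name, city and date without ever touching 'logs'
})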
I am setting up InstantSearch icw Algolia for products of a webshop with the plain JavaScript implementation.
I am able to follow all the documentation, but I am running into the problem that we have prices specific to customer groups, and things like live stock information (need to do another API call for that).
These attributes I would like to ideally load after getting search results, from our own back-end.
I thought it would simply be a matter of manipulating the search results after receiving them and re-rendering only the front-end (without calling the Algolia search API again for new results).
This is a bit tricky. Using transformItems is possible, but I want to display the results first and load the other data into the hit templates afterwards, not before the hits are displayed.
So I end up with a custom widget, and I can access and manipulate the results there, but here the problem is that I don’t know how to reflect these changes into the rendered templates.
The code of my widget (trying to set each stock number to 9) is as follows:
{
  render: function(data) {
    const hits = data.results.hits;
    hits.forEach(hit => {
      hit.stock = 9
    });
  }
}
The data is changed, but the generated html from the templates does not reflect any changes to the hit objects.
So how could I trigger a re-render after altering the hits data, without triggering a new search query?
I could not find a function to do this anywhere in the documentation.
Thanks!
There is the transformItems function in the hits widget that allows you to transform, remove or reorder items as you wish.
It is called before the items are displayed.
Using your example, it would be something like this:
transformItems(items) {
  return items.map(item => ({
    ...item,
    stock: 9,
  }));
}
Of course you can add your API call somewhere in there.
The documentation for the vanilla JS library :
https://www.algolia.com/doc/api-reference/widgets/hits/js/#widget-param-transformitems
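For reference, a minimal sketch of how that plugs into the hits widget (assuming InstantSearch.js v4, where search is your instantsearch instance and #hits is your container; the stock value is just the example from the question):
search.addWidgets([
  instantsearch.widgets.hits({
    container: '#hits',
    transformItems(items) {
      // enrich each hit before it is handed to the hit templates
      return items.map(item => ({ ...item, stock: 9 }));
    },
  }),
]);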
I'm using Redux in a vanilla JS project. I have a bunch of small modular UI files and controllers and such. In those UI files I might have code like:
const ExampleForm = function (StoreInstance) {
  return $('<form />', {
    submit: () => {
      StoreInstance.dispatch({
        type: 'EXAMPLE_DISPATCH',
        post: {
          message: $TextareaComponent.val()
        }
      })
      return false
    }
  })
}
The issue is that I have a lot of simple view files like this, many of them nested, and I'm finding it ugly and error-prone to pass the store as a parameter to everything.
For example, I trimmed it for brevity but the form component has form element components such as a textarea. Currently I see two options of managing the Store:
Setting it on window when creating it in my entry file (index.js) and then just accessing the Store globally. This seems the nicest, although it's not "best practice" and it makes unit testing and server-side rendering a bit harder.
Passing it tediously to every component. This is my example above. I'd consider this "best practice", but it's pretty annoying to do for almost every file you make.
I'm wondering if there's any alternatives or tricks to passing the store instance. I'm leaning towards just making it global.
You could use the constructor pattern and create every view as new ConnectedView(). The ConnectedView would have a memoized instance of the store (this.store within the view), so it doesn't need to be global.
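A rough sketch of what that could look like (ConnectedView is the idea above; the concrete view name and the connect helper are hypothetical, and it assumes jQuery and a Redux store created elsewhere, as in the question):
// base view: memoizes the store once so every view reaches it via this.store
function ConnectedView() {
  this.store = ConnectedView.store
}
ConnectedView.connect = function (StoreInstance) {
  ConnectedView.store = StoreInstance
}

// a concrete view: inherits this.store instead of receiving it as a parameter
function ExampleFormView() {
  ConnectedView.call(this)
}
ExampleFormView.prototype = Object.create(ConnectedView.prototype)

ExampleFormView.prototype.render = function () {
  return $('<form />', {
    submit: () => {
      this.store.dispatch({ type: 'EXAMPLE_DISPATCH' })
      return false
    }
  })
}

// usage:
// ConnectedView.connect(StoreInstance)   // once, in the entry file
// const $form = new ExampleFormView().render()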
My Meteor client receives data from the server and stores it in minimongo. This data is guaranteed not to change during their session, so I don't need Meteor's reactivity. The static data just happens to arrive by that route; let's just take that as a given.
The data looks like this:
{_id: 'abc...', val: {...}}
On the client, is it more efficient for me to look up values using:
val_I_need = Collection.findOne({_id: id})
or to create a JavaScript object:
data = {}
Collection.find().fetch().map((x) => {data[x._id] = x.val})
and use it for look ups:
val_I_need = data[id]
Is there a tipping point, either in terms of the size of the data or the number of look ups, where the more efficient method changes, or outweighs the initial cost of building the object?
findOne may be more efficient on larger datasets because it looks up via a cursor on _id, which is an indexed key, while your find().fetch() approach has to fetch all docs and then iterate over them manually by mapping.
Note that findOne could also be replaced by .find({_id: desiredId}).fetch()[0] (assuming it returns the desired doc).
More on this in the mongo documentation on query performance.
However, if it concerns only one object that is not reactively tracked afterwards, I would rather load it via a "findOne"-returning method from the server:
import { ValidatedMethod } from 'meteor/mdg:validated-method';

export const getOne = new ValidatedMethod({
  name: "getOne",
  validate(query) {
    // validate query schema
    // ...
  },
  run(query) {
    // CHECK PERMISSIONS
    // ...
    return MyCollection.findOne(query);
  }
});
This avoids using publications/subscriptions, and thus minimongo, for this collection on the current client template. Keep in mind that pub/sub already sets up some reactivity to observe the collection, which eats up some computation somewhere.
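Calling it from the client could then look roughly like this (a sketch; getOne is the method defined above and desiredId is a placeholder):
// on the client: no subscription, no minimongo, just a one-off fetch
getOne.call({ _id: desiredId }, (err, doc) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(doc); // plain object returned by the server's findOne
});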
My gut feeling is that you'll never hit a point where the performance gain of putting it in an object makes a noticeable difference.
It's more likely that your bottleneck will be in the pub/sub mechanism, as it can take a while to send all documents to the client.
You'll see a much more noticeable difference for a large dataset by retrieving the data using a Meteor method.
At that point you've got it in a plain old JavaScript object anyway, so you end up with the small performance gain of native object lookups as well.
I am making a project with Meteor and I'm having some issues trying to get data out of mongodb in JavaScript. I have the following in a function:
console.log(Time.find({today: "Saturday"}).fetch());
In my publish.js file on the server side I have the following:
Meteor.publish("time", function () {
  var currentUserId = this.userId;
  return Time.find({user: currentUserId});
});
And In my subscriptions file I have the following:
Meteor.subscribe("time");
This function gets called later down in the code, but it returns an empty array. If I run this code in my browser's console, it returns an array with 2 objects in it, which is correct. This leaves me wondering whether I can use the .fetch() function from within my code, because if I leave off the .fetch() it returns what looks like the usual giant cursor object. My real problem is that I need the data in the form .fetch() gives me. I think the function gets triggered before the data has a chance to load, because if I switch out the .fetch() for a .count() it returns 0.
Is there any way around this or a fix?
Where are you running that console.log?
There are a couple of fundamentals here that I believe you may have glossed over.
1 Pub / Sub
This is how we get data from the server: when we subscribe to a publication, it becomes active and begins to send data. This is neither instant nor synchronous (think of it more like turning on a hose pipe), so when you run your console.log, you may not yet have the data on the client.
2 Reactive contexts
One of the fundamental aspects of building anything in Meteor is its reactivity, and it helps to start thinking in terms of reactive and non-reactive contexts. A reactive context is one that re-runs each time the data it depends on changes. An autorun (Tracker.autorun, or this.autorun inside a template lifecycle callback) or a template helper are good examples. By placing your find in a template helper, it will re-run when the data becomes available.
Template.Whatever.helpers({
  items: function() {
    // ...do your find here.....
  }
});
As items is a reactive context that depends on the collection data, it re-runs when that data changes, giving you access to the data once the client has it.
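A related sketch, reusing the names from the question (this.subscribe and subscriptionsReady are standard Blaze template-instance APIs), that waits for the subscription inside an autorun before fetching:
Template.Whatever.onCreated(function () {
  this.subscribe("time"); // the publication from the question
  this.autorun(() => {
    if (this.subscriptionsReady()) {
      // the data has arrived on the client, so fetch() now returns documents
      console.log(Time.find({today: "Saturday"}).fetch());
    }
  });
});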
3 Retrieving Non Reactive Data
Alternatively, it is also possible to retrieve data non-reactively by using Meteor.call with a Meteor method, and then doing something with the result in the callback to the Meteor.call. Depending on what you're doing, Meteor.wrapAsync may also be your friend here.
A simple example (off the top of my head, untested):
// on the server
Meteor.methods({
  gimmeStuff: function() {
    return "here is your stuff kind sir!";
  }
});

// on the client
Meteor.call('gimmeStuff', function(err, result) {
  if (err || !result) {
    console.log("there was an error or no result!");
    return false;
  }
  console.log(result);
  return result;
});
4 It's unlikely that you actually need the .fetch()
If you're working with this in a template, you don't need a fetch.
If you want this to be non-reactive, you don't need a fetch either.
As one of the commenters mentioned, a cursor is just a wrapper around that array, giving you convenient methods, and reactivity.
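For example (a small illustration using the question's Time collection; Minimongo cursors expose forEach, map and count directly):
// no .fetch() needed: iterate the cursor itself
Time.find({today: "Saturday"}).forEach(function (doc) {
  console.log(doc);
});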
5 Go Back to the Beginning
If you haven't already, I would highly recommend working through the tutorial on the Meteor site carefully and thoroughly, as it covers all of the essentials you'll need to solve far more challenging problems than this and, by way of example, teaches you all of the fundamental mechanics of building great apps with Meteor.
I'm using MongoDB with a Node.js REST service which exposes my stored data. I have a question about how to query data which uses $ref.
Here is a sample of an object which contains a reference to another object (detail) in another collection:
{
  "_id" : ObjectId("5962c7b53b6a02100a000085"),
  "Title" : "test",
  "detail" : {
    "$ref" : "ObjDetail",
    "$id" : ObjectId("5270c7b11f6a02100a000001")
  },
  "foo" : "bar"
}
Actually, using Node.js and the mongodb module, I do the following:
db.collection("Obj").findOne({"_id" : new ObjectID("5962c7b53b6a02100a000085")},
  function(err, item) {
    db.collection(item.detail.$ref).findOne({"_id" : item.detail.$id}, function(err, subItem) {
      // ...
    });
  });
In fact I make 2 queries, and get 2 objects. It's a kind of "lazy loading" (not exactly but almost)
My question is simple : is it possible to retrieve the whole object graph in one query ?
Thank you
No, you can't.
To resolve DBRefs, your application must perform additional queries to return the referenced documents. Many drivers have helper methods that form the query for the DBRef automatically. The drivers do not automatically resolve DBRefs into documents.
From the MongoDB docs http://docs.mongodb.org/manual/reference/database-references/.
Is it possible to fetch parent object along with it's $ref using single MongoDB query?
No, it's not possible.
Mongo has no built-in support for refs, so it's up to your application to populate them (see Brett's answer).
But is it possible to fetch parent object with all its ref's with a single node.js command?
Yes, it's possible. You can do it with Mongoose. It has built-in ref population support. You'll need to change your data model a little bit to make it work, but it's pretty much what you're looking for. Of course, to do so Mongoose will make the same two MongoDB queries that you did.
Vladimir's answer is no longer valid, as the db.dereference method was removed from the MongoDB Node.js API:
https://www.mongodb.com/blog/post/introducing-nodejs-mongodb-20-driver
The db instance object has been simplified. We've removed the following methods:
db.dereference due to db references being deprecated in the server
No, very few drivers for MongoDb include special support for a DBRef. There are two reasons:
MongoDb doesn't have any special commands to make retrieval of referenced documents possible. So, drivers that do add support are artificially populating the resulting objects.
The more "bare metal" the API, the less it makes sense. In fact, as MongoDb collections are schema-less, if the NodeJs driver brought back the primary document with all references resolved, and the code then saved the document without breaking the references, it would result in an embedded subdocument. Of course, that would be a mess.
Unless your field values vary, I wouldn't bother with a DBRef type and would instead just store the ObjectId directly. As you can see, a DBRef really offers no benefit except to require lots of duplicate disk space for each reference, as a richer object must be stored along with its type information. Either way, you should consider the potentially unnecessary overhead of storing a string containing the referenced collection's name.
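For comparison, the same document from the question storing a plain ObjectId instead of a DBRef would look something like this:
{
  "_id" : ObjectId("5962c7b53b6a02100a000085"),
  "Title" : "test",
  "detail" : ObjectId("5270c7b11f6a02100a000001"),
  "foo" : "bar"
}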
Many developers and MongoDb, Inc. have added an object document mapping layer on top of the existing base drivers. One popular option for MongoDb and NodeJs is Mongoose. As the MongoDb server has no real awareness of referenced documents, the responsibility for references moves to the client. As it's more common to consistently reference a particular collection from a given document, Mongoose makes it possible to define the reference in a Schema. Mongoose is not schema-less.
If you accept that having and using a Schema is useful, then Mongoose is definitely worth looking at. It can efficiently fetch a batch of related documents (from a single collection) for a set of documents. It always uses the native driver, but it generally does operations extremely efficiently and takes some of the drudgery out of more complex application architectures.
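A hypothetical schema sketch for the question's documents (the model names are illustrative; ref points at another Mongoose model, which backs the referenced collection):
const mongoose = require('mongoose');

const ObjDetailSchema = new mongoose.Schema({ /* detail fields */ });
const ObjDetail = mongoose.model('ObjDetail', ObjDetailSchema);

const DemoSchema = new mongoose.Schema({
  Title: String,
  detail: { type: mongoose.Schema.Types.ObjectId, ref: 'ObjDetail' },
  foo: String
});
const Demo = mongoose.model('Demo', DemoSchema);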
I would strongly suggest you have a look at the populate method (here) to see what it's capable of doing.
/* Demo would be a Mongoose model that you've defined */
Demo
  .findById(theObjectId)
  .populate('detail')
  .exec(function (err, doc) {
    if (err) return handleError(err);
    // do something with the single doc that was returned
  });
If find were used instead of findById (which always returns a single document), then with populate, every returned document's detail property would be populated automatically. It's smart too, in that it won't request the same referenced documents multiple times.
If you don't use Mongoose, I'd suggest you consider a caching layer to avoid doing client-side reference joins when possible, and use the $in query operator to batch as much as possible.
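A rough sketch of that $in batching idea without Mongoose (it assumes detail.$id is accessible on the fetched documents as in the question's sample, which may differ depending on how your driver deserializes DBRefs):
db.collection("Obj").find({}).toArray(function (err, docs) {
  // collect all referenced ids and resolve them in a single batched query
  var ids = docs.map(function (d) { return d.detail.$id; });
  db.collection("ObjDetail").find({ _id: { $in: ids } }).toArray(function (err, details) {
    var byId = {};
    details.forEach(function (d) { byId[d._id.toString()] = d; });
    docs.forEach(function (d) { d.detail = byId[d.detail.$id.toString()]; });
    // docs now have their detail references resolved
  });
});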
I reached the desired result with the following example:
collection.find({}, function (err, cursor) {
  cursor.toArray(function (err, docs) {
    var count = docs.length - 1;
    for (var i in docs) {
      (function (docs, i) {
        // db.dereference resolves the DBRef; note it was removed in the 2.0 driver (see above)
        db.dereference(docs[i].ref, function (err, doc) {
          docs[i].ref = doc;
          if (i == count) {
            console.log(docs);
          }
        });
      })(docs, i);
    }
  });
});
Not sure that this solution is the best, but it's the simplest one I found.