Insert documents into different collections by calling one API - javascript

I'm doing a project where I call one API; the service handles all the given data and splits it across different collections.
Example:
service.js
async function create(params, origin) {
  const { param1, param2, param3 } = params;
  const collection1 = new db.Collection1(param1);
  await collection1.save();
  const collection2 = new db.Collection2(param2);
  await collection2.save();
  const collection3 = new db.Collection3(param3);
  await collection3.save();
}
My questions are:
What is the best practice? Should I create a general model schema that groups all the collections with the parameters "param1, param2, param3", insert it into one collection, and then call another function that splits all the values into Collection1, Collection2, and so on?
How can I handle the case where one insert throws an error, so that all the previously inserted documents get deleted? For example, if the value of param2 is not valid but param1 and param3 are, how do I delete the documents in Collection1 and Collection3 and throw an error? What's the best way to achieve this?
All the examples above are simplified; in reality we are talking about at least 10 collections and more than 15 parameters.

Basically you are talking about having multiple route handlers for a single path.
Generally you should handle server-side validation & sanitization of the input data before inserting into the db, and throw errors right away if the rules don't match; that way, having to delete the previous two inserts when the third one fails is no longer necessary.
Check out the express-validator middleware for this aspect.
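As a rough sketch of that validate-first idea (plain functions rather than express-validator, and the rules shown are made up), the point is that if any param fails, you throw before the first save() ever runs, so there is nothing to roll back:

```javascript
// Hypothetical up-front validation: reject the whole request before
// touching the database. The "required object" rule is just an example.
function validateParams(params) {
  const errors = [];
  ['param1', 'param2', 'param3'].forEach((key) => {
    if (params[key] === undefined || params[key] === null) {
      errors.push(`${key} is required`);
    }
  });
  if (errors.length > 0) {
    // create() never reaches its first save(), so no cleanup is needed
    throw new Error(`Validation failed: ${errors.join(', ')}`);
  }
  return params;
}
```

With express-validator you would express the same rules as middleware, but the principle is identical: no insert starts until every parameter has passed.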
Ideally you should have one route handler per path, for several reasons, but I think the most common one is ease of maintenance and debugging (conceptually, like separation of concerns). It's easier to execute a sequence of requests to different paths, awaiting the response from the first request to be used in the next request, and so on (if that's the case). In my opinion you're just adding a layer of complexity that isn't needed.
It might work if you develop alone as a full-stack developer, but if you have a team where you do the back-end and somebody else makes the requests from the front-end and encounters a problem, it'll be much harder for them to tell you which path => handler failed, because you're basically hiding multiple handlers behind a single path => [handler1, handler2, handler3]. If you think about it, this behaviour is what causes your second question.
Another thing: what do you do if somebody needs to run just a single insert from that array of inserts you're trying to do? You'll probably end up creating separate paths/routes, meaning you're duplicating existing code.
I think it's better to chain/sequence the different requests from the front-end. It's more elegant, follows DRY, validation and sanitization are easier to code, and it gives the consumer of your API freedom of composition.
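A rough sketch of that front-end sequencing (the paths and the injected post() function are hypothetical; post could be a thin wrapper over fetch or axios): each path maps to exactly one handler, and the caller composes them:

```javascript
// Sequence one request per collection; stop at the first failure.
// post(path, body) is an assumed helper that resolves to { ok, data }.
async function createAll(post, params) {
  const calls = [
    ['/api/collection1', params.param1],
    ['/api/collection2', params.param2],
    ['/api/collection3', params.param3],
  ];
  const results = [];
  for (const [path, body] of calls) {
    const res = await post(path, body);
    if (!res.ok) throw new Error(`insert failed at ${path}`);
    results.push(res.data);
  }
  return results;
}
```

Because each insert is its own request, a failure is immediately attributable to one path => handler pair, which addresses the debugging concern above.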


Firebase Realtime Database doesn't retrieve specified child

I have db structure like this:
datas
-data1
--name
--city
--date
--logs
---log1
---log2
---log3
-data2
--name
...
Now, I realized that putting 'logs' inside the 'data' parent was a huge mistake, because it's a user-generated child that grows fast (so much data under it) and naturally delays downloading the 'data1' parent.
Normally I pull 'data1' with this:
database().ref('datas/' + this.state.dataID).on('value', function(snapshot) {
  // ...
})
I hope I explained my problem: I basically want to ignore the 'logs' child (I only need name, city, date).
As the project is already live and users are using it, I need a proper way.
Is there a way to do this on the Firebase side?
I don't think you'll have an easy way out of this one...
Queries are deep by default: they always return the entire subtree.
https://firebase.google.com/docs/firestore/rtdb-vs-firestore#querying
I can see only two options:
Migrate the logs to a different location (if it's really a huge amount of data, you could use something like BigQuery https://cloud.google.com/bigquery, or if they're events, you could store them in Google Analytics; it really depends on the volume and type of logs)
Attach multiple listeners instead of a single one (depending on the number of entries, that might be a viable interim solution):
const keys = ['name', 'city', 'date'];
let response = {
  name: null,
  city: null,
  date: null
};
const refs = keys.map(key => database().ref(`datas/${this.state.dataID}/${key}`));
refs.forEach((ref, i) => ref.on('value', snapshot => {
  response[keys[i]] = snapshot.val();
}));

Hacker News API - fetch all news

I have a problem with https://github.com/HackerNews/API.
I need to fetch the best news in Angular 7, 30 news items on the first page (title, author, ...). How can I send a GET request with this API?
This API shows all IDs of the best stories:
https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty
This API shows an example of one story:
https://hacker-news.firebaseio.com/v0/item/8863.json?print=pretty
I tried this:
loadItem(){
  this.http.get(`https://hacker-news.firebaseio.com/v0/item/${this.id}.json?print=pretty`).subscribe(data => {
    this.items.push(data as any[]);
  })
}
loadBestItems(){
  this.http.get('https://hacker-news.firebaseio.com/v0/beststories.json?print=pretty').subscribe(data => {
    this.bestItems.push(data as any[]);
  })
}
I need the 30 best news items on the first page.
This is a bit of a loaded question, but I think we can break it down into three main questions:
1. How do you limit the number of stories returned by the hacker news api?
Since the hacker-news data is exposed through the Firebase API, let's refer to the Firebase docs. As indicated there, we can use the limitToFirst and orderBy options together to limit the number of results. We can simply order by the key, so your request URL would end up looking something like this:
'https://hacker-news.firebaseio.com/v0/beststories.json?print=pretty&orderBy="$key"&limitToFirst=30'
2. How do you chain HTTP requests in Angular (make a second request that depends on the result of the first)?
This can be achieved with the mergeMap rxjs operator. This operator allows you to map the values emitted by an observable to another observable. To simplify things, imagine your initial request was to only return a single id. We could then use mergeMap to map the id to a request for the full item.
If that endpoint existed at the path beststory.json, it would look something like this:
this.http.get('https://hack...v0/beststory.json').pipe(
  mergeMap((id) => this.http.get(`https://hack.../v0/item/${id}`))
).subscribe((data) => {
  console.log('Story: ', data);
});
Since you need to map to multiple requests, however, we will need to introduce another operator, outlined in question 3.
3. How do you make multiple HTTP requests at the same time (make a request for each item in a list)?
This can be achieved with the forkJoin rxjs operator. This operator takes an array of observables and emits an array of their values once they are all complete. In the context of your problem, the input is an array of requests (one for each id from the initial request), and the output is a list of items. To simplify things again, let's assume you already have an array of ids sitting around. Issuing requests for each item in the list would look something like this:
let ids = [1, 2, ...];
forkJoin(ids.map((id) => this.http.get(`https://hack.../v0/item/${id}`))).subscribe((stories) => {
  console.log('Stories:', stories);
});
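For intuition, forkJoin over an array of requests behaves much like Promise.all over an array of promises: everything runs in parallel and you get back a single array of results. Here is a plain-JS analogy where fakeGet stands in for an HTTP call (names and data are made up):

```javascript
// fakeGet simulates fetching a story by id.
const fakeGet = (id) => Promise.resolve({ id, title: `story ${id}` });

// All "requests" run in parallel; we get one array of results.
Promise.all([1, 2, 3].map(fakeGet)).then((stories) => {
  console.log(stories.map((s) => s.title)); // -> ['story 1', 'story 2', 'story 3']
});
```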
Putting it all together
Now that we know how to map the result of a request to another observable with mergeMap, and we know how to combine the results of multiple observables into one with forkJoin, we can use them together to achieve what you're looking for:
this.http.get('https://hack....v0/beststories.json?orderBy="$key"&limitToFirst=30').pipe(
  mergeMap((ids) => forkJoin(ids.map((id) => this.http.get(`https://hack...v0/item/${id}`))))
).subscribe((stories) => {
  console.log('Stories:', stories);
});
Note that in the code snippets I have excluded part of the URL and unrelated query params.

JavaScript Object vs minimongo efficiency

My Meteor client receives data from the server and stores it in minimongo. This data is guaranteed not to change during their session, so I don't need Meteor's reactivity. The static data just happens to arrive by that route; let's just take that as a given.
The data looks like this:
{_id: 'abc...', val: {...}}
On the client, is it more efficient for me to look up values using:
val_I_need = Collection.findOne({_id: id})
or to create a JavaScript object:
data = {}
Collection.find().fetch().map((x) => {data[x._id] = x.val})
and use it for look ups:
val_I_need = data[id]
Is there a tipping point, either in terms of the size of the data or the number of look ups, where the more efficient method changes, or outweighs the initial cost of building the object?
findOne may be more efficient on larger datasets because it looks up using cursors where _id is an indexed key, while your find().fetch() approach requires getting all docs and then iterating manually by mapping.
Note that findOne could also be replaced by .find({_id: desiredId}).fetch()[0] (assuming it returns the desired doc).
More on this in the MongoDB documentation on query performance.
However, if it concerns only one object that is not reactively tracked afterwards, I would rather load it via a "findOne"-returning method from the server:
export const getOne = new ValidatedMethod({
  name: "getOne",
  validate(query) {
    // validate query schema
    // ...
  },
  run(query) {
    // CHECK PERMISSIONS
    // ...
    return MyCollection.findOne(query);
  }
});
This avoids using publications/subscriptions, and thus minimongo, for this collection on the current client template. Bear in mind that pub/sub already has some reactivity initialized to observe the collection, which eats up computation somewhere.
My gut feeling is that you'll never hit a point where the performance gain of putting it in an object makes a noticeable difference.
It's more likely that your bottleneck will be in the pub/sub mechanism, as it can take a while to send all documents to the client.
You'll see a much more noticeable difference for a large dataset by retrieving the data using a Meteor method.
At that point you've got it in a plain old JavaScript object anyway, so you end up with the small performance gain of native object lookups as well.
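The trade-off discussed above can be sketched in plain JS (docs stands in for the result of Collection.find().fetch(); the data is made up): a one-time O(n) pass builds the index, after which every access is an O(1) property lookup, instead of a collection scan per findOne:

```javascript
// Stand-in for Collection.find().fetch()
const docs = [
  { _id: 'abc', val: { answer: 42 } },
  { _id: 'def', val: { answer: 7 } },
];

// One-time build cost: a single pass over all documents.
const byId = {};
docs.forEach((doc) => { byId[doc._id] = doc.val; });

// O(1) lookups afterwards.
const val_I_need = byId['abc'];
console.log(val_I_need); // -> { answer: 42 }
```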

Using the .find().fetch() from within a function in Meteor

I am making a project with Meteor and I'm having some issues trying to get data out of mongodb in JavaScript. I have the following in a function:
console.log(Time.find({today: "Saturday"}).fetch());
In my publish.js file on the server side I have the following:
Meteor.publish("time", function () {
var currentUserId = this.userId;
return Time.find({user: currentUserId});
});
And In my subscriptions file I have the following:
Meteor.subscribe("time");
This function gets called later down in the code, but it returns an empty array. If I run this code in my browser's console it returns an array with 2 objects in it, which is correct. This leads me to wonder whether I can use the .fetch() function from within my code, because if I leave off the .fetch() it returns what looks like the usual giant cursor object. My real problem is that I need the data in the form that .fetch() gives it to me. I think the function gets triggered before the data gets a chance to load in, because if I switch out the .fetch() for a .count() it returns 0.
Is there any way around this or a fix?
Where are you running that console.log?
There are a couple of fundamentals here that I believe you may have glossed over.
1 Pub / Sub
This is how we get data from the server: when we subscribe to a publication it becomes active and begins to send data. This is neither instant nor synchronous (think of it more like turning on a hose pipe), so when you run your console.log, you may not yet have the data on the client.
2 Reactive contexts
One of the fundamental aspects of building anything in Meteor is its reactivity, and it helps to start thinking in terms of reactive and non-reactive contexts. A reactive context is one that re-runs each time the data it depends on changes. An autorun (Tracker.autorun or this.autorun inside a template lifecycle callback) or a template helper are good examples. By placing your query in a template helper it will re-run when the data is available.
Template.Whatever.helpers({
  items: function() {
    // the find from your question
    return Time.find({today: "Saturday"}).fetch();
  }
});
As items is a reactive context depending on the collection data, it re-runs when that data changes, giving you access to the documents once the client has them.
3 Retrieving Non-Reactive Data
Alternatively, it is also possible to retrieve data non-reactively by using Meteor.call with a Meteor method, and then doing something with the result in the callback to the Meteor.call. Depending on what you're doing, Meteor.wrapAsync may also be your friend here.
A simple example (off the top of my head, untested):
// on the server
Meteor.methods({
  gimmeStuff: function() {
    return "here is your stuff kind sir!";
  }
});

// on the client
Meteor.call('gimmeStuff', function(err, result) {
  if (err || !result) {
    console.log("there was an error or no result!");
    return false;
  }
  console.log(result);
  return result;
});
4 It's Unlikely that you actually need the .fetch()
If you're working with this in a template, you don't need a fetch.
If you want this to be non-reactive, you don't need a fetch.
As one of the commenters mentioned, a cursor is just a wrapper around that array, giving you convenient methods, and reactivity.
5 Go Back to the Beginning
If you haven't already, I would highly recommend working through the tutorial on the Meteor site carefully and thoroughly, as it covers all of the essentials you'll need to solve far more challenging problems than this and, by way of example, teaches you all of the fundamental mechanics for building great apps with Meteor.

Node.js + MongoDB: How to query $ref fields?

I'm using MongoDB with a Node.js REST service which exposes my stored data. I have a question about how to query data which uses $ref.
Here is a sample of an object which contains a reference to another object (detail) in another collection:
{
  "_id" : ObjectId("5962c7b53b6a02100a000085"),
  "Title" : "test",
  "detail" : {
    "$ref" : "ObjDetail",
    "$id" : ObjectId("5270c7b11f6a02100a000001")
  },
  "foo" : "bar"
}
Actually, using Node.js and the mongodb module, I do the following:
db.collection("Obj").findOne({"_id" : new ObjectID("5962c7b53b6a02100a000085")},
  function(err, item) {
    db.collection(item.detail.$ref).findOne({"_id" : item.detail.$id}, function(err, subItem) {
      // ...
    });
  });
In fact I make two queries and get two objects. It's a kind of "lazy loading" (not exactly, but almost).
My question is simple : is it possible to retrieve the whole object graph in one query ?
Thank you
No, you can't.
To resolve DBRefs, your application must perform additional queries to return the referenced documents. Many drivers have helper methods that form the query for the DBRef automatically. The drivers do not automatically resolve DBRefs into documents.
From the MongoDB docs http://docs.mongodb.org/manual/reference/database-references/.
Is it possible to fetch parent object along with it's $ref using single MongoDB query?
No, it's not possible.
Mongo has no inner support for refs, so it's up to your application to populate them (see Brett's answer).
But is it possible to fetch parent object with all its ref's with a single node.js command?
Yes, it's possible. You can do it with Mongoose. It has built-in ref population support. You'll need to change your data model a little bit to make it work, but it's pretty much what you're looking for. Of course, to do so Mongoose will make the same two MongoDB queries that you did.
Vladimir's answer is no longer valid, as the db.dereference method was deleted from the MongoDB Node.js API:
https://www.mongodb.com/blog/post/introducing-nodejs-mongodb-20-driver
The db instance object has been simplified. We've removed the following methods:
db.dereference due to db references being deprecated in the server
No, very few drivers for MongoDb include special support for a DBRef. There are two reasons:
MongoDb doesn't have any special commands to make retrieval of referenced documents possible. So, drivers that do add support are artificially populating the resulting objects.
The more "bare metal" the API, the less it makes sense. In fact, as MongoDB collections are schema-less, if the Node.js driver brought back the primary document with all references realized, and the code then saved the document without breaking the references, it would result in an embedded subdocument. Of course, that would be a mess.
Unless your field values vary, I wouldn't bother with a DBRef type and would instead just store the ObjectId directly. As you can see, a DBRef really offers no benefit except to require lots of duplicate disk space for each reference, as a richer object must be stored along with its type information. Either way, you should consider the potentially unnecessary overhead of storing a string containing the referenced collection's name with every reference.
Many developers and MongoDb, Inc. have added an object document mapping layer on top of the existing base drivers. One popular option for MongoDb and Nodejs is Mongoose. As the MongoDb server has no real awareness of referenced documents, the responsibility of the references moves to the client. As it's more common to consistently reference a particular collection from a given document, Mongoose makes it possible to define the reference as a Schema. Mongoose is not schema-less.
If you accept having and using a Schema is useful, then Mongoose is definitely worth looking at. It can efficiently fetch a batch of related documents (from a single collection) from a set of documents. It always is using the native driver, but it generally does operations extremely efficiently and takes some of the drudgery out of more complex application architectures.
I would strongly suggest you have a look at the populate method (here) to see what it's capable of doing.
/* Demo would be a Mongoose model that you've defined */
Demo
  .findById(theObjectId)
  .populate('detail')
  .exec(function (err, doc) {
    if (err) return handleError(err);
    // do something with the single doc that was returned
  });
If find were used instead of findById (which always returns a single document), with populate, the detail property of all returned documents would be populated automatically. It's also smart enough not to request the same referenced documents multiple times.
If you don't use Mongoose, I'd suggest you consider a caching layer to avoid doing client-side reference joins when possible, and use the $in query operator to batch as much as possible.
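A sketch of that $in batching idea (the helper name is made up): group the $ids by their $ref collection first, so you can then issue one find({ _id: { $in: ids } }) per collection instead of one findOne per reference:

```javascript
// Group DBRef-shaped objects by target collection name.
// Input: [{ $ref: 'ObjDetail', $id: ... }, ...]
// Output: { ObjDetail: [id1, id2, ...], ... }
function groupRefs(refs) {
  const byCollection = {};
  refs.forEach((ref) => {
    if (!byCollection[ref.$ref]) byCollection[ref.$ref] = [];
    byCollection[ref.$ref].push(ref.$id);
  });
  return byCollection;
}
```

With the grouped ids in hand, the number of round-trips drops from one per reference to one per referenced collection.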
I reached the desired result with the following example:
collection.find({}, function (err, cursor) {
  cursor.toArray(function (err, docs) {
    var count = docs.length - 1;
    for (var i in docs) {
      (function (docs, i) {
        db.dereference(docs[i].ref, function (err, doc) {
          docs[i].ref = doc;
          if (i == count) {
            console.log(docs);
          }
        });
      })(docs, i);
    }
  });
});
Not sure this is the best solution of all, but it's the simplest one I found.
