Situation
I'm having trouble coming up with a good way to do a certain MongoDB query. First, here's the kind of query I want to do. Assume a simple database which logs entry and exit events (and possibly other actions; it doesn't matter) by electronic card swipe. So there's a collection called swipelog with simple documents which look like this:
{
  _id: ObjectId("524ab4790a4c0e402200052c"),
  name: "John Doe",
  action: "entry",
  timestamp: ISODate("2013-10-01T01:32:12.112Z")
}
Now I want to list names and their last entry times (and any other fields I may want, but the example below uses just these two fields).
Current solution
Here is what I have now, as a "one-liner" for the MongoDB JavaScript console:
db.swipelog.distinct('name').forEach(function(name) {
  db.swipelog.find({ name: name, action: "entry" })
    .sort({ $natural: -1 })
    .limit(1)
    .forEach(function(entry) {
      printjson([entry.name, entry.timestamp])
    })
})
Which prints something like:
[ "John Doe", ISODate("2013-10-01T1:32:12.112Z")]
[ "Jane Deo", ISODate("2013-10-01T1:36:12.112Z")]
...
Question
I think the above has an obvious scaling problem: if there are a hundred names, then 1+100 queries are made to the database. So what is a good/correct way to get the "last timestamp of every distinct name"? Changing the database structure or adding some collections is OK if that makes this easier.
You can use the aggregation framework to achieve this:
db.collection.aggregate([
  { $match: { action: 'entry' } },
  { $group: {
      _id: '$name',
      last: { $max: '$timestamp' }
  } }
])
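For the example data above, this should return one document per name, shaped roughly like this (a sketch of the shell output, not taken from a live server):

{ "_id" : "John Doe", "last" : ISODate("2013-10-01T01:32:12.112Z") }
{ "_id" : "Jane Deo", "last" : ISODate("2013-10-01T01:36:12.112Z") }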
If you'd like to include other fields in the results, you can sort each group and use the $first operator:
db.collection.aggregate([
  { $match: { action: 'entry' } },
  { $sort: { name: 1, timestamp: -1 } },
  { $group: {
      _id: '$name',
      timestamp: { $first: '$timestamp' },
      otherField: { $first: '$otherField' }
  } }
])
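To print output in the same shape as the original one-liner, you can iterate the aggregation result in the shell. A small sketch (on MongoDB 2.6+ aggregate() returns a cursor; on older shells you would iterate the result array of the returned document instead):

db.swipelog.aggregate([
  { $match: { action: 'entry' } },
  { $sort: { name: 1, timestamp: -1 } },
  { $group: { _id: '$name', timestamp: { $first: '$timestamp' } } }
]).forEach(function(doc) {
  // doc._id holds the name because of the $group stage
  printjson([doc._id, doc.timestamp])
})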
This answer should be a comment on attish's answer above, but I don't have sufficient rep here to comment.
Keep in mind that the aggregation framework cannot return more than 16MB of data. If you have a very large number of users, you may run into this limitation on your production system.
MongoDB 2.6 adds new features to the aggregation framework to deal with this:
db.collection.aggregateCursor() (temporary name) is identical to db.collection.aggregate() except that it returns a cursor instead of a single document, which avoids the 16MB limitation.
$out is a new pipeline stage that directs the pipeline's output to a collection, which lets you run aggregation jobs whose results you can then query like any other collection.
$sort has been improved to remove its RAM limitations and increase speed.
If query performance is more important than data freshness, you could schedule a regular aggregate command that stores its results in a collection like db.last_swipe, then have your application simply query db.last_swipe for the relevant user.
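On 2.6+, such a scheduled job could use the $out stage; a sketch (the last_swipe collection name is just the example from above):

// Recompute the "last entry per name" snapshot, replacing the contents of db.last_swipe
db.swipelog.aggregate([
  { $match: { action: 'entry' } },
  { $group: { _id: '$name', timestamp: { $max: '$timestamp' } } },
  { $out: 'last_swipe' }
])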
Conclusion: I agree that attish has the right approach. However, you may run into trouble scaling it on the current MongoDB release and should look into Mongo 2.6.
Related
I have a JSON file which is 3000+ lines. What I'd like to do is create a NoSQL database with the same structure (it has embedded documents nested 3-5 levels deep). But I want to add information to each level and create a schema for each item, so that I can go back at a later stage and update the information fields, and even have users log in and change their own values.
I am using JavaScript to write a script that will iterate through the file and upload to MongoDB the schema that I want, based on the information at each level. But I'm struggling to write the code that does this efficiently. At this stage, I'm just wasting too much time trying this and that, and want to move on to the next step of my site.
Below is an example of the file. Basically, it's a bunch of embedded documents, and then at the final level (which will be at a different depth depending on which document it's in), there is an array where each of the fields is a string.
How can I use this data to create a MongoDB database while adding a schema to each item, but keeping the hierarchical nature of the documents? I want all of the documents to have one schema, and then each of the strings at the final depth to have their own, separate schema as well. I can't think of an efficient way to iterate through.
Example from the JSON file:
{
"Applied Sciences": {
"Agriculture": {
"Agricultural Economics": [
"Agricultural Environment And Natural Resources",
"Developmental Economics",
"Food And Consumer Economics",
"Production Economics And Farm Management"
],
"Agronomy": [
"Agroecology",
"Biotechnology",
"Plant Breeding",
"Soil Conservation",
"Soil Science",
"Theoretical Modeling"
],
Here's my schema for all but the strings at the end:
{
  name: String,
  completed: Boolean,
  category: "Field",
  items: { type: Array },
  description: String,
  resources: { type: Array }
}
And here is my rough code, which at this stage just iterates through. I'm trying to use the same function call to create the arrays in the schema, but I'm not up to that stage yet because I can't even iterate through properly:
function createDatabase(data) {
  var items = {};
  for (var field in data) {
    if (typeof data[field] == "object") {
      items[field] = createDatabase(data[field]);
    }
  }
  return items;
}
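For what it's worth, here is a minimal sketch of one way to walk the nested JSON and insert one document per level, assuming a Mongoose model named Item built from the schema above and treating the string arrays at the deepest level as leaf items. The Item model, the "Leaf" category value, and the jsonData variable (the parsed file) are hypothetical names, not from the original post:

function insertLevel(name, value) {
  // Every level (field, subfield, ...) gets its own document using the shared schema.
  var doc = { name: name, completed: false, category: "Field", items: [], description: "", resources: [] };

  if (Array.isArray(value)) {
    // Deepest level: each string becomes its own leaf document.
    doc.items = value;
    value.forEach(function(s) {
      Item.create({ name: s, completed: false, category: "Leaf", items: [], description: "", resources: [] });
    });
  } else if (value && typeof value === "object") {
    // Intermediate level: recurse into each key, keeping the child names in items.
    doc.items = Object.keys(value);
    Object.keys(value).forEach(function(key) {
      insertLevel(key, value[key]);
    });
  }

  Item.create(doc); // error handling omitted for brevity
}

Object.keys(jsonData).forEach(function(key) {
  insertLevel(key, jsonData[key]);
});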
I am very new to GraphQL and am trying to do a simple join query. My sample tables look like this:
{
phones: [
{
id: 1,
brand: 'b1',
model: 'Galaxy S9 Plus',
price: 1000,
},
{
id: 2,
brand: 'b2',
model: 'OnePlus 6',
price: 900,
},
],
brands: [
{
id: 'b1',
name: 'Samsung'
},
{
id: 'b2',
name: 'OnePlus'
}
]
}
I would like to have a query to return a phone object with its brand name in it instead of the brand code.
E.g. If queried for the phone with id = 2, it should return:
{id: 2, brand: 'OnePlus', model: 'OnePlus 6', price: 900}
TL;DR
Yes, GraphQL does support a sort of pseudo-join. You can see the books and authors example below running in my demo project.
Example
Consider a simple database design for storing info about books:
create table Book ( id string, title string, pageCount int, authorId string );
create table Author ( id string, firstName string, lastName string );
Because we know that an Author can write many Books, the database model puts them in separate tables. Here is the GraphQL schema:
type Query {
bookById(id: ID): Book
}
type Book {
id: ID
title: String
pageCount: Int
author: Author
}
type Author {
id: ID
firstName: String
lastName: String
}
Notice there is no authorId field on the Book type; instead there is an author field of type Author. The database authorId column on the book table is not exposed to the outside world. It is an internal detail.
We can pull back a book and its author using this GraphQL query:
{
bookById(id:"book-1"){
id
title
pageCount
author {
firstName
lastName
}
}
}
Running that query against my demo project, the result nests the Author details:
{
"data": {
"book1": {
"id": "book-1",
"title": "Harry Potter and the Philosopher's Stone",
"pageCount": 223,
"author": {
"firstName": "Joanne",
"lastName": "Rowling"
}
}
}
}
The single GQL query resulted in two separate fetch-by-id calls into the database. When a single logical query turns into multiple physical queries we can quickly run into the infamous N+1 problem.
The N+1 Problem
In our case above, a book can only have one author. If we only query one book by ID, we only get a "read amplification" of 2x against our database. Imagine if you could query books with a title that starts with a prefix:
type Query {
booksByTitleStartsWith(titlePrefix: String): [Book]
}
Then we call it, asking it to fetch the books with a title starting with "Harry":
{
booksByTitleStartsWith(titlePrefix:"Harry"){
id
title
pageCount
author {
firstName
lastName
}
}
}
In this GQL query we fetch the books with a database query of title like 'Harry%', which returns many books including the authorId of each book. The server then makes an individual fetch by ID for every author of every book. This is a total of N+1 queries, where 1 query pulls back N records and we then make N separate fetches to build up the full picture.
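As an illustration only (this is a JavaScript sketch, not code from the demo project, and the db helper and its query method are placeholders), a naive pair of resolvers produces exactly this pattern: one query for the books, then one query per book for its author:

const resolvers = {
  Query: {
    // 1 query that returns N books
    booksByTitleStartsWith: (_, { titlePrefix }, { db }) =>
      db.query("select * from Book where title like ?", [titlePrefix + "%"]),
  },
  Book: {
    // runs once per book: N more queries
    author: (book, _args, { db }) =>
      db.query("select * from Author where id = ?", [book.authorId]).then(rows => rows[0]),
  },
};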
The easy fix for that example is to not expose an author field on Book, and instead force the person using your API to fetch all the authors in a separate authorsByIds query. So we give them two queries:
type Query {
booksByTitleStartsWith(titlePrefix: String): [Book]  # <- single database call
authorsByIds(authorIds: [ID]): [Author]              # <- single database call
}
type Book {
id: ID
title: String
pageCount: Int
}
type Author {
id: ID
firstName: String
lastName: String
}
The key thing to note about that last example is that there is no way in that model to walk from one entity type to another. If the person using your API wants to load the books' authors at the same time, they simply call both queries in a single post:
query {
booksByTitleStartsWith(titlePrefix: "Harry") {
id
title
}
authorsByIds(authorIds: ["author-1","author-2","author-3"]) {
id
firstName
lastName
}
}
Here the person writing the query (perhaps using JavaScript in a web browser) sends a single GraphQL post to the server asking for both booksByTitleStartsWith and authorsByIds to be passed back at once. The server can now make two efficient database calls.
This approach shows that there is "no magic bullet" for how to map the "logical model" to the "physical model" when it comes to performance. This is known as the Object–relational impedance mismatch problem. More on that below.
Is Fetch-By-ID So Bad?
Note that the default behaviour of GraphQL is still very helpful. You can map GraphQL onto anything: onto internal REST APIs, or some types onto a relational database and other types onto a NoSQL database. These can live in the same schema behind the same GraphQL endpoint. There is no reason why you cannot have Author stored in Postgres and Book stored in MongoDB. This is because GraphQL doesn't "join in the datastore" by default; it fetches each type independently and builds the response in memory to send back to the client. It may be the case that your model only joins against a small dataset that gets very good cache hits. You can then add caching to your system, avoid the problem, and still benefit from all the advantages of GraphQL.
What About ORM?
There is a project called Join Monster which does look at your database schema, looks at the runtime GraphQL query, and tries to generate efficient database joins on-the-fly. That is a form of Object Relational Mapping which sometimes gets a lot of "OrmHate". This is mainly due to Object–relational impedance mismatch problem.
In my experience, any ORM works if you write the database model to exactly support your object API. In my experience, any ORM tends to fail when you have an existing database model that you try to map with an ORM framework.
IMHO, if the data model was optimised without thinking about ORM or queries (for example, optimised to conserve space in classical third normal form), then avoid ORM. My recommendation there is to avoid querying the main data model directly and to use the CQRS pattern. See below for an example.
What Is Practical?
If you do want to use pseudo-joins in GraphQL but you hit an N+1 problem, you can write code to map specific "field fetches" onto hand-written database queries. Carefully performance test using realistic data whenever any field returns an array.
Even when you can put in hand-written queries, you may hit scenarios where those joins don't run fast enough. In that case, consider the CQRS pattern and denormalise some of the data model to allow for fast lookups.
Update: GraphQL Java "Look-Ahead"
In our case we use graphql-java and pure configuration files to map DataFetchers to database queries. There is some generic logic that looks at the graph query being run and calls parameterised SQL queries defined in a custom configuration file. We saw the article Building efficient data fetchers by looking ahead, which explains that you can inspect at runtime what the person who wrote the query selected to be returned. We can use that to "look ahead" at which other entities we will be asked to fetch to satisfy the entire query. At that point we can join the data in the database and pull it all back efficiently in a single database call. The graphql-java engine will still make N in-memory fetches to our code, but the N requests to get the author of each book are satisfied by simple lookups in a hash map that we loaded from the single database call that joined the author table to the books table, returning N complete rows efficiently.
Our approach might sound a little like ORM, yet we did not make any attempt to make it intelligent. The developer creating the API and our custom configuration files has to decide which GraphQL queries will be mapped to which database queries. Our generic logic just "looks ahead" at what the runtime GraphQL query actually selects in total, to understand all the database columns it needs to load out of each row returned by the SQL to build the hash map. Our approach can only handle parent-child-grandchild style trees of data, yet this is a very common use case for us. The developer making the API still needs to keep a careful eye on performance: they need to adapt both the API and the custom mapping files to avoid poor performance.
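The same look-ahead idea can be sketched in a JavaScript (graphql-js) resolver. This is not what we use (our implementation is graphql-java plus configuration files); it just shows the shape of the technique: peek at the selected fields through the resolver's info argument and choose between a plain query and a joined query. Fragment handling is omitted and the db helper is a placeholder:

const resolvers = {
  Query: {
    booksByTitleStartsWith: async (_, { titlePrefix }, { db }, info) => {
      // Look ahead: did the client also select the author field on Book?
      const selections = info.fieldNodes[0].selectionSet.selections;
      const wantsAuthor = selections.some(s => s.kind === "Field" && s.name.value === "author");

      if (!wantsAuthor) {
        return db.query("select * from Book where title like ?", [titlePrefix + "%"]);
      }

      // One joined query instead of 1 + N fetches; nest the author rows in memory.
      const rows = await db.query(
        "select b.*, a.id as author_id, a.firstName, a.lastName " +
        "from Book b join Author a on a.id = b.authorId where b.title like ?",
        [titlePrefix + "%"]
      );
      return rows.map(r => ({
        ...r,
        author: { id: r.author_id, firstName: r.firstName, lastName: r.lastName },
      }));
    },
  },
};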
GraphQL as a query language on the front-end does not support 'joins' in the classic SQL sense.
Rather, it allows you to pick and choose which fields in a particular model you want to fetch for your component.
To query all phones in your dataset, your query would look like this:
query myComponentQuery {
phone {
id
brand
model
price
}
}
The GraphQL server that your front-end is querying would then have individual field resolvers, telling GraphQL where to fetch id, brand, model, etc.
The server-side resolver would look something like this:
Phone: {
id(root, args, context) {
pg.query('Select * from Phones where name = ?', ['blah']).then(d => {/*doStuff*/})
//OR
fetch(context.upstream_url + '/thing/' + args.id).then(d => {/*doStuff*/})
return {/*the result of either of those calls here*/}
},
price(root, args, context) {
return 9001
},
},
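For the phone/brand data in the question, the "join" would live in a resolver on the Phone type's brand field. A minimal sketch, assuming the two in-memory arrays from the question are available as data.phones and data.brands (in a real server these would be database lookups):

const resolvers = {
  Query: {
    // e.g. query { phone(id: 2) { id brand model price } }
    phone: (_, { id }) => data.phones.find(p => p.id === id),
  },
  Phone: {
    // Resolve the brand code stored on the phone (e.g. 'b2') to the brand's name.
    brand: (phone) => {
      const brand = data.brands.find(b => b.id === phone.brand);
      return brand ? brand.name : null;
    },
  },
};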
I'm trying to track the bandwidth usage of a user based upon two Mongoose schemas. I have a user and an image schema, where a user has many images. My image schema looks like this:
image = {
creator: 'ObjectId of user',
size: '12345', //kb
uploadedTo:[{}]
}
Essentially I want to create a query that will get all images that belong to a user via the image.creator property. I would then multiply the image.size property by image.uploadedTo.length value to get the total bandwidth used.
For example: If a user has 5 images, each image is 5,000kb and is uploaded to 3 services each, the total bandwidth for the user would be 75,000kb (5*5,000*3).
Is this query possible strictly through mongoose, or would I have to just get the user's images and then use regular javascript to get the total bandwidth?
You'll want to use the aggregation pipeline. The basic projection might look like this:
{
  $project: {
    size: 1,
    number_of_uploads: { $size: "$uploadedTo" },
    total_bandwidth: { $multiply: [ "$size", { $size: "$uploadedTo" } ] }
  }
}
You'd get a new document that looks like:
{
  size: 1234,
  number_of_uploads: 2,
  total_bandwidth: 2468
}
You'll need to integrate that with Mongoose's aggregate helper.
If you're using MongoDB 3.2, you can also use $lookup (which is basically a join operation) as part of your pipeline to look up the creator._id, and then run a $sum operation on all of the images (you'll probably $group by that creator ID). The benefit of this is that your server doesn't do any work; the lookups and operations happen inside MongoDB itself.
If you're not using v3.2, you can leverage Mongoose's population to look up (on your own server) the creator ID for you, and then use JavaScript on your own server to calculate the sum.
It's a bit difficult for me to come up with what exactly your pipeline will look like since I don't have a sample dataset to play with, but the above tools should be all that you need.
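To tie it together, here is a sketch of a full pipeline through Mongoose's aggregate helper. The Image model name and userId variable are assumptions, and it treats size as a number (the schema above stores it as a string, so it would need to be stored or cast as a number for $multiply to work):

var mongoose = require('mongoose');

Image.aggregate([
  // Only this user's images; Mongoose does not cast aggregation pipelines, so cast explicitly.
  { $match: { creator: mongoose.Types.ObjectId(userId) } },
  // Per-image bandwidth: size * number of services it was uploaded to.
  { $project: { image_bandwidth: { $multiply: [ "$size", { $size: "$uploadedTo" } ] } } },
  // Sum across all of the user's images.
  { $group: { _id: null, total_bandwidth: { $sum: "$image_bandwidth" } } }
]).exec(function(err, result) {
  // result[0].total_bandwidth would be 75000 for the 5 x 5000kb x 3 example above
});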
Additional operation resources
$size
$multiply
(P.S. you're probably looking at this like "WTF?". Sometimes it's easier to just do the calculations yourself and "use regular javascript to get the total bandwidth", as you mentioned. Both solutions will work, it just depends on where you want to put the load - whether on the MongoDB server or on your server - and how many round-trips you want to make.)
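For completeness, the plain-JavaScript alternative mentioned above is short enough to sketch here, assuming you have already fetched the user's images (e.g. with Image.find({ creator: userId })):

var totalBandwidth = images.reduce(function(total, image) {
  // Number(...) because the schema stores size as a string like '12345'
  return total + Number(image.size) * image.uploadedTo.length;
}, 0);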
I am working to solve a problem not dissimilar to the one discussed in the following blog post: publishing two related data sets in Meteor with a 'reactive join' on the server side.
https://www.discovermeteor.com/blog/reactive-joins-in-meteor/
Unfortunately for me, however, the related collection I wish to join to will not be joined using the "_id" field, but using another field. Normally in Mongo and Meteor I would create a 'filter' block where I could specify this query. However, as far as I can tell, the PWR package makes an implicit assumption to join on '_id'.
If you review the example given on the 'publish-with-relations' GitHub page (see below), you can see that both posts and comments are joined to the Meteor.users '_id' field. But what if we needed to join to the Meteor.users 'address' field?
https://github.com/svasva/meteor-publish-with-relations
In the short term I have specified my query 'upside down' (as luckily I'm able to use the _id field when doing a reverse join), but I suspect this will result in an inefficient query as the datasets grow, so I would rather be able to do the join in the direction planned.
The two collections we are joining can be thought of as like a conversation topic/header record, and a conversation message collection (i.e. one entry in the collection for each message in the conversation).
The conversation topic in my solution is using the _id field to join, the conversation messages have a "conversationKey" field to join with.
The following call works, but this is querying from the messages to the conversation, instead of vice versa, which would be more natural.
Meteor.publishWithRelations({
handle: this,
collection: conversationMessages,
filter: { "conversationKey" : requestedKey },
options : {sort: {msgTime: -1}},
mappings: [{
//reverse: true,
key: 'conversationKey',
collection: conversationTopics,
filter: { startTime: { $gt : (new Date().getTime() - aLongTimeAgo ) } },
options: {
sort: { createdAt: -1 }
},
}]
});
Can you do a join without an _id?
No, not with PWR. Joining with a foreign key which is the id in another table/collection is nearly always how relational data is queried. PWR is making that assumption to reduce the complexity of an already tricky implementation.
How can this publish be improved?
You don't actually need a reactive join here because one query does not depend on the result of another. It would if each conversation topic held an array of conversation message ids. Because both collections can be queried independently, you can return an array of cursors instead:
Meteor.publish('conversations', function(requestedKey) {
  check(requestedKey, String);
  var aLongTimeAgo = 864000000;
  return [
    conversationMessages.find({conversationKey: requestedKey}),
    conversationTopics.find({
      _id: requestedKey,
      startTime: {$gt: new Date().getTime() - aLongTimeAgo}
    })
  ];
});
Notes
Sorting in your publish function isn't useful unless you are using a limit.
Be sure to use a forked version of PWR like this one which includes Tom's memory leak fix.
Instead of conversationKey I would call it conversationTopicId to be more clear.
I think this could now be solved much more easily with the reactive-publish package (I am one of the authors). You can now make any query inside an autorun and then use the results of that to publish the query you want to push to the client. I would write you some example code, but I do not really understand what exactly you need. For example, you mention that you would like to limit topics, but you do not explain why they would be limited if you are providing requestedKey, which is the ID of a document, so only one result is available anyway?
If I have two objects in a user collection:
{_id: 1, name: 'foo', workItems: []}
{_id: 2, name: 'bar', workItems: []}
how would I add links to objects in a workItem collection into the workItems array for each user?
I understand direct embedding but some workItems will be assigned to multiple users so I don't want to duplicate data. I have looked on mongodb.org but I can't find any examples of linking.
Sometimes it is just better to duplicate the data. MongoDB is a non-relational database: some ways of doing things that are bad practice with relational databases are intended with a non-relational one. It really is not the same way of thinking, even though there are obvious common points.
At my work, we use it in production and have found it both easier and faster for read operations to duplicate the data. This is precisely where the power of MongoDB lies.
Of course, when a work item is modified, your application has to update every place where it appears, so this may not be a good solution for write-intensive systems.
Another point is that joins are not handled by the engine, so you will have to issue at least a second request and then do the join manually on the application side. Either way, you will have to move logic from the database to the client application.
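As a sketch of that manual application-side join in the mongo shell, with the user documents from the question, a hypothetical workitems collection, and the workItems array holding the referenced _ids:

// First request: load the user and the list of work item ids it references
var user = db.users.findOne({_id: 1});

// Second request: fetch the referenced work items, then "join" them in application code
var workItems = db.workitems.find({_id: {$in: user.workItems}}).toArray();
user.workItems = workItems;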
You can do a DBRef like this:
{ $ref : <name of collection where reference is>, $id : <_id of document>, $db : <optional argument for specifying the database the document is in> }
So your document would look like this:
{_id: 1, name: 'foo', workItems: [{$ref: "blarg", $id: "1"}]}