I have a JSON file which is 3000+ lines. What I'd like to do is create a NoSQL database with the same structure (it has embedded documents between 3-5 levels deep). But I want to add information to each level and create a schema for each item, so that I can go back at a later stage and update the information fields, and even have users log in and change their own values.
I am using JavaScript to write a script that will iterate through the file and upload to MongoDB the schema that I want, based on the information at each level. But I'm struggling to write the code that does this efficiently. At this stage, I'm just wasting too much time trying this and that, and want to move on to the next step of my site.
Below is an example of the file. Basically, it's a bunch of embedded documents, and then at the final level (which will be at a different depth depending on which document it's in), there is an array where each of the fields is a string.
How can I use this data to create a MongoDB database while adding a schema to each item, but keeping the hierarchical nature of the documents? I want all of the documents to have one schema, and then each of the strings at the final depth to have their own, separate schema as well. I can't think of an efficient way to iterate through.
Example from the JSON file:
{
  "Applied Sciences": {
    "Agriculture": {
      "Agricultural Economics": [
        "Agricultural Environment And Natural Resources",
        "Developmental Economics",
        "Food And Consumer Economics",
        "Production Economics And Farm Management"
      ],
      "Agronomy": [
        "Agroecology",
        "Biotechnology",
        "Plant Breeding",
        "Soil Conservation",
        "Soil Science",
        "Theoretical Modeling"
      ],
      ...
Here's my schema for all but the strings at the end:
const fieldSchema = new mongoose.Schema({
  name: String,
  completed: Boolean,
  category: { type: String, default: "Field" },
  items: {
    type: Array
  },
  description: String,
  resources: {
    type: Array
  }
});
And here's my rough code, which at this stage just iterates through. I'm trying to use the same function recursively to create the arrays in the schema, but I'm not at that stage yet:
function createDatabase(data) {
  var items = {};
  for (var field in data) {
    if (typeof data[field] == "object") {
      items[field] = createDatabase(data[field]);
    }
  }
  return items;
}
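A minimal sketch of the direction this could take, assuming two hypothetical Mongoose models: Field for every nested level (using the schema above) and Topic for the leaf strings. The model names and leaf schema are placeholders, not a definitive implementation:

const mongoose = require('mongoose');

// Hypothetical models: fieldSchema is the schema shown above; topicSchema is
// whatever separate schema the leaf strings end up getting.
const Field = mongoose.model('Field', fieldSchema);
const Topic = mongoose.model('Topic', new mongoose.Schema({ name: String, completed: Boolean }));

async function buildLevel(name, value) {
  if (Array.isArray(value)) {
    // Final depth: each string becomes its own Topic document.
    const topics = await Topic.create(value.map(t => ({ name: t, completed: false })));
    return Field.create({ name: name, items: topics.map(t => t._id) });
  }
  // Intermediate depth: recurse into each child object, then store the child ids.
  const children = await Promise.all(
    Object.keys(value).map(key => buildLevel(key, value[key]))
  );
  return Field.create({ name: name, items: children.map(c => c._id) });
}

async function createDatabase(data) {
  for (const root of Object.keys(data)) {
    await buildLevel(root, data[root]);
  }
}

One insert per level keeps the hierarchy as parent documents holding arrays of child ids, so each level can be updated independently later.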
I am very new to GraphQL and trying to do a simple join query. My sample tables look like this:
{
phones: [
{
id: 1,
brand: 'b1',
model: 'Galaxy S9 Plus',
price: 1000,
},
{
id: 2,
brand: 'b2',
model: 'OnePlus 6',
price: 900,
},
],
brands: [
{
id: 'b1',
name: 'Samsung'
},
{
id: 'b2',
name: 'OnePlus'
}
]
}
I would like to have a query to return a phone object with its brand name in it instead of the brand code.
E.g. If queried for the phone with id = 2, it should return:
{id: 2, brand: 'OnePlus', model: 'OnePlus 6', price: 900}
TL;DR
Yes, GraphQL does support a sort of pseudo-join. You can see the books and authors example below running in my demo project.
Example
Consider a simple database design for storing info about books:
create table Book ( id string, name string, pageCount int, authorId string );
create table Author ( id string, firstName string, lastName string );
Because an Author can write many Books, the database model puts them in separate tables. Here is the GraphQL schema:
type Query {
bookById(id: ID): Book
}
type Book {
id: ID
title: String
pageCount: Int
author: Author
}
type Author {
id: ID
firstName: String
lastName: String
}
Notice there is no authorId on the Book type, but it has an author field of type Author. The database authorId column on the book table is not exposed to the outside world; it is an internal detail.
We can pull back a book and its author using this GraphQL query:
{
bookById(id:"book-1"){
id
title
pageCount
author {
firstName
lastName
}
}
}
Running it against my demo project, the result nests the Author details:
{
"data": {
"book1": {
"id": "book-1",
"title": "Harry Potter and the Philosopher's Stone",
"pageCount": 223,
"author": {
"firstName": "Joanne",
"lastName": "Rowling"
}
}
}
}
The single GQL query resulted in two separate fetch-by-id calls into the database. When a single logical query turns into multiple physical queries we can quickly run into the infamous N+1 problem.
The N+1 Problem
In our case above a book can only have one author. If we only query one book by ID, we only get a "read amplification" against our database of 2x. Imagine if you can query books with a title that starts with a prefix:
type Query {
booksByTitleStartsWith(titlePrefix: String): [Book]
}
Then we call it asking it to fetch the books with a title starting with "Harry":
{
booksByTitleStartsWith(titlePrefix:"Harry"){
id
title
pageCount
author {
firstName
lastName
}
}
}
In this GQL query we will fetch the books with a single database query of title like 'Harry%', which returns many books including the authorId of each. The engine will then make an individual fetch by ID for every author of every book. That is a total of N+1 queries, where the 1 query pulls back N records and we then make N separate fetches to build up the full picture.
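In resolver terms, the N+1 pattern looks roughly like this (a graphql-js style sketch; the db helper and SQL are assumptions, not code from the demo project):

const resolvers = {
  Query: {
    // 1 query: pulls back N books, each row carrying its authorId
    booksByTitleStartsWith: (root, { titlePrefix }) =>
      db.query('SELECT * FROM book WHERE name LIKE ?', [titlePrefix + '%']),
  },
  Book: {
    // N queries: this resolver runs once per book in the result list
    author: (book) =>
      db.query('SELECT * FROM author WHERE id = ?', [book.authorId])
        .then(rows => rows[0]),
  },
};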
The easy fix for that example is to not expose an author field on Book, and instead force the person using your API to fetch all the authors through a separate authorsByIds query. So we give them two queries:
type Query {
  booksByTitleStartsWith(titlePrefix: String): [Book] # <- single database call
  authorsByIds(authorIds: [ID]): [Author]             # <- single database call
}
type Book {
id: ID
title: String
pageCount: Int
}
type Author {
id: ID
firstName: String
lastName: String
}
The key thing to note about that last example is that there is no way in that model to walk from one entity type to another. If the person using your API wants to load the books' authors at the same time, they simply call both queries in a single post:
query {
booksByTitleStartsWith(titlePrefix: "Harry") {
id
title
}
authorsByIds(authorIds: ["author-1","author-2","author-3"]) {
id
firstName
lastName
}
}
Here the person writing the query (perhaps using JavaScript in a web browser) sends a single GraphQL post to the server asking for both booksByTitleStartsWith and authorsByIds to be passed back at once. The server can now make two efficient database calls.
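On the server, each of those top-level fields can then map to exactly one database call. A sketch (again assuming a db query helper):

const resolvers = {
  Query: {
    booksByTitleStartsWith: (root, { titlePrefix }) =>
      db.query('SELECT * FROM book WHERE name LIKE ?', [titlePrefix + '%']),
    // One IN-list query instead of N fetch-by-id calls:
    authorsByIds: (root, { authorIds }) =>
      db.query('SELECT * FROM author WHERE id IN (?)', [authorIds]),
  },
};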
This approach shows that there is "no magic bullet" for how to map the "logical model" to the "physical model" when it comes to performance. This is known as the Object–relational impedance mismatch problem. More on that below.
Is Fetch-By-ID So Bad?
Note that the default behaviour of GraphQL is still very helpful. You can map GraphQL onto anything: internal REST APIs, some types into a relational database and other types into a NoSQL database. These can be in the same schema and the same GraphQL end-point. There is no reason why you cannot have Author stored in Postgres and Book stored in MongoDB. This is because GraphQL doesn't "join in the datastore" by default; it fetches each type independently and builds the response in memory to send back to the client. It may be the case that your model only joins against a small dataset that gets very good cache hits. You can then add caching into your system, not have a problem, and benefit from all the advantages of GraphQL.
What About ORM?
There is a project called Join Monster which does look at your database schema, looks at the runtime GraphQL query, and tries to generate efficient database joins on-the-fly. That is a form of Object Relational Mapping, which sometimes gets a lot of "OrmHate". This is mainly due to the Object–relational impedance mismatch problem.
In my experience, any ORM works if you write the database model to exactly support your object API. In my experience, any ORM tends to fail when you have an existing database model that you try to map with an ORM framework.
IMHO, if the data model was optimised without thinking about ORM or queries (for example, to conserve space in classical third normal form), then avoid ORM. My recommendation there is to avoid querying the main data model directly and use the CQRS pattern. See below for an example.
What Is Practical?
If you do want to use pseudo-joins in GraphQL but you hit an N+1 problem, you can write code to map specific "field fetches" onto hand-written database queries. Carefully performance test using realistic data whenever any fields return an array.
Even when you can put in hand written queries you may hit scenarios where those joins don't run fast enough. In which case consider the CQRS pattern and denormalise some of the data model to allow for fast lookups.
Update: GraphQL Java "Look-Ahead"
In our case we use graphql-java and pure configuration files to map DataFetchers to database queries. There is some generic logic that looks at the graph query being run and calls parameterized SQL queries held in a custom configuration file. We saw the article Building efficient data fetchers by looking ahead, which explains that you can inspect at runtime what the person who wrote the query selected to be returned. We can use that to "look ahead" at what other entities we would be asked to fetch to satisfy the entire query. At that point we can join the data in the database and pull it all back efficiently in a single database call. The graphql-java engine will still make N in-memory fetches to our code, but the N requests to get the author of each book are satisfied by simple lookups in a hashmap that we loaded from the single database call that joined the author table to the books table, returning N complete rows efficiently.
Our approach might sound a little like ORM yet we did not make any attempt to make it intelligent. The developer creating the API and our custom configuration files has to decide which graphql queries will be mapped to what database queries. Our generic logic just "looks-ahead" at what the runtime graphql query actually selects in total to understand all the database columns that it needs to load out of each row returned by the SQL to build the hashmap. Our approach can only handle parent-child-grandchild style trees of data. Yet this is a very common use case for us. The developer making the API still needs to keep a careful eye on performance. They need to adapt both the API and the custom mapping files to avoid poor performance.
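The same look-ahead idea can be sketched in graphql-js, where every resolver receives an info argument describing the selection set. This is an illustration of the technique, not our graphql-java code (fragments are ignored for brevity; db.query is assumed):

const resolvers = {
  Query: {
    booksByTitleStartsWith(root, { titlePrefix }, context, info) {
      // Look ahead: did the query also select the author field?
      const selections = info.fieldNodes[0].selectionSet.selections;
      const wantsAuthor = selections.some(s => s.name.value === 'author');
      // If so, join up-front in one SQL call; the per-book author resolver
      // can then read from rows already in memory instead of hitting the DB.
      const sql = wantsAuthor
        ? 'SELECT b.*, a.firstName, a.lastName FROM book b ' +
          'JOIN author a ON a.id = b.authorId WHERE b.name LIKE ?'
        : 'SELECT * FROM book WHERE name LIKE ?';
      return db.query(sql, [titlePrefix + '%']);
    },
  },
};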
GraphQL as a query language on the front-end does not support 'joins' in the classic SQL sense.
Rather, it allows you to pick and choose which fields in a particular model you want to fetch for your component.
To query all phones in your dataset, your query would look like this:
query myComponentQuery {
phone {
id
brand
model
price
}
}
The GraphQL server that your front-end is querying would then have individual field resolvers - telling GraphQL where to fetch id, brand, model etc.
The server-side resolver would look something like this:
Phone: {
  id(root, args, context) {
    // Return the promise; GraphQL waits for it to resolve before responding.
    return pg.query('SELECT * FROM phones WHERE name = ?', ['blah'])
      .then(d => {/* doStuff, then return the value for this field */})
    // OR fetch it from an upstream service:
    // return fetch(context.upstream_url + '/thing/' + args.id)
    //   .then(d => {/* doStuff, then return the value for this field */})
  },
  price(root, args, context) {
    return 9001
  },
},
The structure of the table is:
chats
--> randomId
-->--> participants
-->-->--> 0: 'name1'
-->-->--> 1: 'name2'
-->--> chatItems
etc
What I am trying to do is query the chats table to find all the chats that include a participant matching a passed-in username string.
Here is what I have so far:
subscribeChats(username: string) {
return this.af.database.list('chats', {
query: {
orderByChild: 'participants',
equalTo: username, // How to check if participants contain username
}
});
}
Your current data structure is great to look up the participants of a specific chat. It is however not a very good structure for looking up the inverse: the chats that a user participates in.
A few problems here:
you're storing a set as an array
you can only index on fixed paths
Set vs array
A chat can have multiple participants, so you modelled this as an array. But this actually is not the ideal data structure. Likely each participant can only be in the chat once. But by using an array, I could have:
participants: ["puf", "puf"]
That is clearly not what you have in mind, but the data structure allows it. You can try to secure this in code and security rules, but it would be easier if you start with a data structure that implicitly matches your model better.
My rule of thumb: if you find yourself writing array.contains(), you should be using a set.
A set is a structure where each child can be present at most once, so it naturally protects against duplicates. In Firebase you'd model a set as:
participants: {
"puf": true
}
The true here is really just a dummy value: the important thing is that we've moved the name to the key. Now if I'd try to join this chat again, it would be a no-op:
participants: {
"puf": true
}
And when a second user, john, joins:
participants: {
"john": true,
"puf": true
}
This is the most direct representation of your requirement: a collection that can only contain each participant once.
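With that structure, joining and leaving a chat become single idempotent writes. A sketch using the Firebase Web SDK (chatId and username are assumed inputs):

function joinChat(chatId, username) {
  // Writing true twice is a no-op, so duplicates are impossible by construction.
  return firebase.database()
    .ref('chats/' + chatId + '/participants/' + username)
    .set(true);
}

function leaveChat(chatId, username) {
  return firebase.database()
    .ref('chats/' + chatId + '/participants/' + username)
    .remove();
}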
You can only index known properties
With the above structure, you could query for chats that you are in with:
ref.child("chats").orderByChild("participants/john").equalTo(true)
The problem is that this requires you to define an index on participants/john:
{
"rules": {
"chats": {
"$chatid": {
"participants": {
".indexOn": ["john", "puf"]
}
}
}
}
}
This will work and perform great. But now each time someone new joins the chat app, you'll need to add another index. That's clearly not a scalable model. We'll need to change our data structure to allow the query you want.
Invert the index - pull categories up, flattening the tree
Second rule of thumb: model your data to reflect what you show in your app.
Since you are looking to show a list of chat rooms for a user, store the chat rooms for each user:
userChatrooms: {
john: {
chatRoom1: true,
chatRoom2: true
},
puf: {
chatRoom1: true,
chatRoom3: true
}
}
Now you can simply determine your list of chat rooms with:
ref.child("userChatrooms").child("john")
And then loop over the keys to get each room.
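For example (Firebase Web SDK; path names follow the structure above):

firebase.database().ref('userChatrooms/john').once('value').then(snapshot => {
  snapshot.forEach(roomSnapshot => {
    const roomId = roomSnapshot.key;
    // Load each room's metadata with a follow-up read if needed, e.g.
    // firebase.database().ref('chatrooms/' + roomId).once('value')
  });
});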
You'll likely have two relevant lists in your app:
the list of chat rooms for a specific user
the list of participants in a specific chat room
In that case you'll also have both lists in the database.
chatroomUsers
  chatroom1
    user1: true
    user2: true
  chatroom2
    user1: true
    user3: true
userChatrooms
  user1:
    chatroom1: true
    chatroom2: true
  user2:
    chatroom1: true
  user3:
    chatroom2: true
I've pulled both lists to the top-level of the tree, since Firebase recommends against nesting data.
Having both lists is completely normal in NoSQL solutions. In the example above we'd refer to userChatrooms as the inverted index of chatroomUsers.
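Keeping the two lists in sync is typically done with a single multi-location update, so both writes succeed or fail together. A sketch (path names follow the example above):

function addUserToRoom(userId, roomId) {
  const updates = {};
  updates['chatroomUsers/' + roomId + '/' + userId] = true;
  updates['userChatrooms/' + userId + '/' + roomId] = true;
  // One atomic fan-out write to both locations.
  return firebase.database().ref().update(updates);
}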
Cloud Firestore
This is one of the cases where Cloud Firestore has better support for this type of query. Its array-contains operator lets you filter for documents that have a certain value in an array, while arrayUnion and arrayRemove let you treat an array as a set. For more on this, see Better Arrays in Cloud Firestore.
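For example (collection name assumed):

firebase.firestore()
  .collection('chats')
  .where('participants', 'array-contains', 'john')
  .get()
  .then(querySnapshot => {
    querySnapshot.forEach(doc => console.log(doc.id, doc.data()));
  });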
I have a Node.js app, APP-A, that communicates with another C# app, APP-B, using APP-B's API. APP-B has a RESTful API that returns JSON. Other than a few standard fields (e.g., name, description), APP-B's keys are defined when the user creates the field in the system. The resulting JSON looks like this:
{
"name": "An example name",
"description": "Description for the example",
"cust_fields": {
"cust_123": "Joe Bloggs",
"cust_124": "Essex"
}
}
I have two instances of APP-B, a dev and prod environment, which are separate installations. As a result, the JSON from the prod environment is as above, and the JSON from the dev environment looks like this:
{
"name": "An example name",
"description": "Description for the example",
"cust_fields": {
"cust_782": "Joe Bloggs",
"cust_793": "Essex"
}
}
This is dealt with in APP-A (the Node.js app) by having a JSON map like this:
{
"name": "name",
"description": "description",
"cust_fields": {
"full_name": "cust_123",
"city": "cust_124"
}
}
Which is loaded like this:
var map;
switch(env) {
case 'dev':
map = require('../env/dev/map.json');
break;
case 'prod':
map = require('../env/prod/map.json');
break;
}
module.exports = {
name: map.name,
description: map.description,
cust_fields: {
full_name: map.cust_fields.full_name,
city: map.cust_fields.city,
}
}
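For context, the map then gets applied to raw APP-B responses along these lines (a simplified sketch):

function translate(raw, map) {
  const result = {
    name: raw[map.name],
    description: raw[map.description],
    cust_fields: {}
  };
  // Walk the canonical keys and look up the environment-specific cust_ key.
  for (const canonical of Object.keys(map.cust_fields)) {
    result.cust_fields[canonical] = raw.cust_fields[map.cust_fields[canonical]];
  }
  return result;
}
// translate(apiResponse, map)
//   -> { name: ..., description: ..., cust_fields: { full_name: ..., city: ... } }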
So I am wondering, is there a better way of dealing with this? I don't see a way around creating some kind of manual relationship between the key names across prod and dev, as there is no way to find out which field corresponds to which, but it seems like a lot of work.
Thanks for reading.
Update:
I have created a jsFiddle to better illustrate my question: http://jsfiddle.net/7k9k03o6.
If the mapping is unavoidable and everything is done manually right now, the next best progression would be to automate the building of those lookup maps through some persistent storage, e.g. a database.
The general flow would be:
When APP-B creates a new form, that field information is stored in the database with all the identifying information. You could store production and dev data in the same db (as a flag) but likely they would just be different databases. Structure might be like customerId, formId, fieldName, fieldMapping, fieldValue, isProduction --> 123, 2, 'cust_124', 'city', 'Essex', true
When APP-A needs a field listing, it queries the DB for the relevant field lists. "Find the mapping for customer X for form Y in production" --> WHERE custId = 123 AND formId = 2 AND isProduction = true would yield a list of fields and their mapping values (which you would post-process/reduce into the mapping you need).
This automated process leaves less manual work for you: you shouldn't accidentally miss or forget a mapping the way you can with a hand-generated file.
This will add a tiny bit of work to the server processing, as you'll need the field mapping from the DB every time a request is processed. (You could back off a bit and do one big query each time a customer is loaded, or, further back, each time the server starts; it depends how dynamic these custom fields are.) Plus you would have to map the DB results into a usable listing for your purposes.
Depending on how many customers and custom forms you are monitoring, an automated process will save you a lot of time and avoid the mistakes that come with anything hand-generated.
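A sketch of building the lookup map from such a table at request time (the db.query helper and the table/column names are assumptions, matching the structure suggested above):

function loadFieldMap(custId, formId, isProduction) {
  return db.query(
    'SELECT fieldName, fieldMapping FROM field_mappings ' +
    'WHERE custId = ? AND formId = ? AND isProduction = ?',
    [custId, formId, isProduction]
  ).then(rows =>
    rows.reduce((map, row) => {
      map[row.fieldMapping] = row.fieldName; // e.g. map.city = 'cust_124'
      return map;
    }, {})
  );
}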
These files that I will be getting have at least a million rows each, max 1.5 billion. The data is normalized when I get it. I need a way to store it in one document. For the most part I am not 100% sure how the data will be given to me. It could be CSV, fixed-width text file, TSV or something else.
Currently I have some collections that I imported from some sample CSVs.
Below is a small representation of my data (with some fields missing):
In my beneficiaries.csv the data is repeated.
beneficiaries.csv (over 6 million records)
record # 1
{"userid":"a9dk4kJkj",
"gender":"male",
"dob":20080514,
"start_date":20000101,
"end_date":20080227}
record # 2
{"userid":"a9dk4kJkj",
"gender":"male",
"dob":20080514,
"start_date":20080201,
"end_date":00000000}
Same user, different start and end dates.
claims.csv (over 200 million records)
{"userid":"a9dk4kJkj",
"date":20080514,
"code":"d4rd3",
"blah":"data"}
lab.csv (over 10 million records)
{"userid":"a9dk4kJkj",
"date":20080514,
"lab":"mri",
"blah":"data"}
From my limited knowledge I have three options:
Sort the files, read x members at a time from the data files into our C++ Member objects, stop at y, insert the members into MongoDB, then continue from y for the next x members until we are done. This is tested and working, but sorting such massive files will tie up our machine for hours.
Load the data into SQL, read the rows one by one into C++ Member objects, and bulk load the data into Mongo. Tested and works, but I would very much like to avoid this.
Load the documents into Mongo in separate collections and perform a map-reduce with an out parameter to write to a collection. I have the documents loaded (as shown above) in their own collections for each file. Unfortunately I am new to Mongo and on a deadline. The map-reduce concept is difficult for me to wrap my head around and implement. I have read the docs and tried using this answer on Stack Overflow: MongoDB: Combine data from multiple collections into one..how?
The output member collection should look like this.
{"userid":"aaa4444",
"gender":"female",
"dob":19901225,
"beneficiaries":[{"start_date":20000101,
"end_date":20080227},
{"start_date":20008101,
"end_date":00000000}],
"claims":[{"date":20080514,
"code":"d4rd3",
"blah":"data"},
{"date":20080514,
"code":"d4rd3",
"blah":"data"}],
"labs":[{"date":20080514,
"lab":"mri",
"blah":"data"}]}
Would the performance of loading the data into SQL, reading it in C++ and inserting into MongoDB beat the map-reduce? If so, I will stick with that method.
IMHO, your data is a good candidate for map-reduce, so it would be better to go for option 3: load the documents into Mongo in three separate collections (beneficiaries, claims, labs) and perform a map-reduce on the userid key of each collection. Finally, integrate the data from the three collections into a single collection using find and insert on the userid key.
Assuming you load beneficiaries.csv into a beneficiaries collection, this is sample map-reduce code for beneficiaries:
mapBeneficiaries = function() {
  // Emit values in the same shape that reduce returns, so the result is
  // consistent when a key has a single document (reduce is skipped) and
  // when output is re-reduced incrementally via {"out": {"reduce": ...}}.
  emit(this.userid, {
    beneficiaries: [ { start_date: this.start_date, end_date: this.end_date } ],
    gender: this.gender,
    dob: this.dob
  });
};

reduce = function(k, values) {
  var list = { beneficiaries: [], gender: '', dob: '' };
  values.forEach(function(v) {
    list.beneficiaries = list.beneficiaries.concat(v.beneficiaries);
    list.gender = v.gender;
    list.dob = v.dob;
  });
  return list;
};

db.beneficiaries.mapReduce(mapBeneficiaries, reduce, {"out": {"reduce": "mr_beneficiaries"}});
The output in mr_beneficiaries will be like this:
{
    "_id" : "a9dk4kJkj",
    "value" : {
        "beneficiaries" : [
            {
                "start_date" : 20000101,
                "end_date" : 20080227
            },
            {
                "start_date" : 20080201,
                "end_date" : 0
            }
        ],
        "gender" : "male",
        "dob" : 20080514
    }
}
Do the same thing to obtain mr_claims and mr_labs.
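For claims the map-reduce follows the same pattern; a sketch using the field names from the sample records:

mapClaims = function() {
  emit(this.userid, { claims: [ { date: this.date, code: this.code, blah: this.blah } ] });
};

reduceClaims = function(k, values) {
  var list = { claims: [] };
  values.forEach(function(v) {
    list.claims = list.claims.concat(v.claims);
  });
  return list;
};

db.claims.mapReduce(mapClaims, reduceClaims, {"out": {"reduce": "mr_claims"}});

With mr_beneficiaries, mr_claims and mr_labs in place, integrate them into single documents: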
db.mr_beneficiaries.find().forEach(function(doc) {
    var id = doc._id;
    var claims = db.mr_claims.findOne({"_id": id});
    var labs = db.mr_labs.findOne({"_id": id});
    db.singledocuments.insert({
        "userid": id,
        "gender": doc.value.gender,
        "dob": doc.value.dob,
        "beneficiaries": doc.value.beneficiaries,
        // guard against users with no claims or labs
        "claims": claims ? claims.value.claims : [],
        "labs": labs ? labs.value.labs : []
    });
});
I have a blogs collection that contains title, body and the aggregate rating that users have given them. Another collection, 'Ratings', has a schema with references to the blog and to the user who rated it (if he rated it at all), in the form of their ObjectIds, plus the rating given, i.e. +1 or -1.
When a particular user browses through blogs in 'latest first' order (say 40 of them per page; call them an array of blogs[0] to blogs[39]), I have to retrieve the rating documents related to this particular user and those 40 blogs, if the user rated them at all, and show him what ratings he has given those blogs.
I tried to extract all rating documents of a particular user in which the blog reference ObjectIds lie between blogs[0]._id and blogs[39]._id, which returns an empty list in my case. Maybe ObjectIds can't be compared using $lt and $gt queries. In that case, how should I go about it? Should I redesign my schemas to fit this scenario?
I am using the mongoosejs driver. Here are the relevant parts of the code, which differ a bit in execution, but you get the idea.
Schemas:
Client= new mongoose.Schema({
ip:String
})
Rates = new mongoose.Schema({
client:ObjectId,
newsid:ObjectId,
rate:Number
})
News = new mongoose.Schema({
title: String,
body: String,
likes:{type:Number,default:0},
dislikes:{type:Number,default:0},
created:Date,
// tag:String,
client:ObjectId,
tag:String,
ff:{type:Number,default:20}
});
models:
var newsm=mongoose.model('News', News);
var clientm=mongoose.model('Client', Client);
var ratesm=mongoose.model('Rates', Rates);
Logic:
newsm.find({tag:tag[req.params.tag_id]},[],{ sort:{created:-1},limit: buffer+1 },function(err,news){
ratesm.find({client:client._id,newsid:{$lte:news[0]._id,$gte:news.slice(-1)[0]._id}},function(err,ratings){
})
})
Edit:
While implementing the schema suggested below, I had to do this query in Mongoose:
> db.blogposts.findOne()
{ title : "My First Post", author: "Jane",
comments : [{ by: "Abe", text: "First" },
{ by : "Ada", text : "Good post" } ]
}
> db.blogposts.find( { "comments.by" : "Ada" } )
How do I do this query in mongoose?
A good practice with MongoDB (and other non-relational data stores) is to model your data so it is easy to use/query in your application. In your case, you might consider denormalizing the structure a bit and store the rating right in the blog collection, so a blog might look something like this:
{
title: "My New Post",
body: "Here's my new post. It is great. ...",
likes: 20,
dislikes: 5,
...
rates: [
{ client_id: (id of client), rate: 5 },
{ client_id: (id of another client), rate: 3 },
{ client_id: (id of a third client), rate: 10 }
]
}
The idea being that the objects in the rates array contain all the data you'll need to display the blog entry, complete with ratings, right in the single document. If you also need to query the rates another way (e.g. find all the ratings made by user X), and the site is read-heavy, you may consider also storing the data in a Rates collection as you're doing now. Sure, the data is in two places, and it's harder to update, but it may be an overall win after you analyze your app and how it accesses your data.
Note that you can apply indexes deep into a document's structure, so for example you can index News.rates.client_id, and then you can quickly find any documents in the News collection that a particular user has rated.
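To answer the edit directly: the dot-notation filter works the same through Mongoose. A sketch (the BlogPost model name is assumed; newsm and News come from the schemas above):

// Shell query translated to Mongoose:
BlogPost.find({ 'comments.by': 'Ada' }, function (err, posts) {
  // posts holds every document with a comment by Ada
});

// With the denormalized rates array, everything a given client has rated:
newsm.find({ 'rates.client_id': client._id }, function (err, docs) { /* ... */ });

// The deep index mentioned above, declared on the schema:
News.index({ 'rates.client_id': 1 });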