Angular.js accessing and displaying nested models efficiently

I'm building a site at the moment where there are many relational links between data. As an example, users can make bookings, which will have booker and bookee, along with an array of messages which can be attached to a booking.
An example json would be...
booking = {
  id: 1,
  location: 'POST CDE',
  desc: "Awesome stackoverflow description.",
  booker: {
    id: 1, fname: 'Lawrence', lname: 'Jones'
  },
  bookee: {
    id: 2, fname: 'Stack', lname: 'Overflow'
  },
  messages: [
    { id: 1, mssg: 'For illustration only' }
  ]
}
Now my question is, how would you model this data in your angular app? And, while very much related, how would you pull it from the server?
As I see it, I have a few options.
Pull everything from the server at once
Here I would rely on the server to serialize the nested data and just use the given json object. The downside is that I don't know in advance which users will be involved when requesting a booking or similar object, so I can't cache them, and I'll therefore be pulling a large chunk of data with every request.
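To make option 1 concrete, it would just be a single request whose nested response I'd use as-is (a sketch only, assuming Angular's $http and a hypothetical /bookings/:id endpoint):

// Option 1: one request, the server serializes booker/bookee/messages inline.
$http.get('/bookings/' + bookingId).then(function (response) {
  $scope.booking = response.data; // nothing left to resolve client-side
});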
Pull the booking with booker/bookee as user ids
For this I would use promises for my data models, and have the server return an object such as...
booking = {
  id: 1,
  location: 'POST CDE',
  desc: "Awesome stackoverflow description.",
  booker: 1, bookee: 2,
  messages: [1]
}
Which I would then pass to a Booking constructor, which would resolve the relevant (booker, bookee and message) ids into data objects via their respective factories.
The disadvantages here are that many ajax requests are used for a single booking request, though it gives me the ability to cache user/message information.
In summary, is it better practice to rely on a single ajax request to collect all the nested information at once, or to rely on various requests to 'flesh out' the initial response after the fact?
I'm using Rails 4 if that helps (maybe Rails would be more suited to a single request?)

I'm going to use a system that will hopefully give me the best of both worlds: a base class for all my resources with a custom resolve function that knows which fields in that particular class may require resolving. A sample resolve function would look like this...
class Booking
  # other methods...

  resolve: ->
    booking = this
    User
      .query(booking.booker, booking.bookee)
      .then (users) ->
        [booking.booker, booking.bookee] = users
This passes the values of the booker and bookee fields to the User factory, which will have a constructor like so...
class User
  # other methods

  constructor: (data) ->
    user = this
    if not isNaN(id = parseInt(data, 10))
      User.get(data).then (data) ->
        angular.extend user, data
    else
      angular.extend this, data
If I have passed the User constructor a value that can be parsed into a number (so this will happily take string ids as well as numerical ones), then it will use the User factory's get function to retrieve the data from the server (or through a caching system; the implementation lives inside the get function itself). If however the value parses to NaN, then I'll assume the User has already been serialized and just extend this with the value.
So the caching is transparent and independent of how the server returns the nested objects. It allows for modular ajax requests and avoids re-downloading unnecessary data via its caching system.
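To illustrate the caching side, here is a minimal sketch of what a cached User.get could look like (plain JavaScript; the module name, endpoint and cache shape are illustrative assumptions, not the actual implementation):

// Hypothetical caching getter on the User factory.
angular.module('app').factory('User', function ($http, $q) {
  var cache = {}; // id -> previously fetched user data

  function User(data) { /* constructor as above */ }

  User.get = function (id) {
    if (cache[id]) {
      // Already fetched once: resolve from the cache, no ajax request.
      return $q.when(cache[id]);
    }
    return $http.get('/users/' + id).then(function (response) {
      cache[id] = response.data;
      return cache[id];
    });
  };

  return User;
});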
Once everything is up and running I'll write some tests to see whether the application would be better served with larger, chunked ajax requests or smaller modular ones like above. Either way this lets you pass all model data through your angular factories, so you can rely on every record having inherited any prototype methods you may want to use.

How to do a simple join in GraphQL?

I am very new in GraphQL and trying to do a simple join query. My sample tables look like below:
{
  phones: [
    {
      id: 1,
      brand: 'b1',
      model: 'Galaxy S9 Plus',
      price: 1000,
    },
    {
      id: 2,
      brand: 'b2',
      model: 'OnePlus 6',
      price: 900,
    },
  ],
  brands: [
    {
      id: 'b1',
      name: 'Samsung'
    },
    {
      id: 'b2',
      name: 'OnePlus'
    }
  ]
}
I would like to have a query to return a phone object with its brand name in it instead of the brand code.
E.g. If queried for the phone with id = 2, it should return:
{id: 2, brand: 'OnePlus', model: 'OnePlus 6', price: 900}
TL;DR
Yes, GraphQL does support a sort of pseudo-join. You can see the books and authors example below running in my demo project.
Example
Consider a simple database design for storing info about books:
create table Book ( id string, name string, pageCount string, authorId string );
create table Author ( id string, firstName string, lastName string );
Because we know that an Author can write many Books, the database model puts them in separate tables. Here is the GraphQL schema:
type Query {
  bookById(id: ID): Book
}

type Book {
  id: ID
  title: String
  pageCount: Int
  author: Author
}

type Author {
  id: ID
  firstName: String
  lastName: String
}
Notice there is no authorId field on the Book type, only a field of type Author. The database authorId column on the book table is not exposed to the outside world. It is an internal detail.
We can pull back a book and its author using this GraphQL query:
{
  bookById(id: "book-1") {
    id
    title
    pageCount
    author {
      firstName
      lastName
    }
  }
}
Running it in my demo project, the result nests the Author details:
{
  "data": {
    "book1": {
      "id": "book-1",
      "title": "Harry Potter and the Philosopher's Stone",
      "pageCount": 223,
      "author": {
        "firstName": "Joanne",
        "lastName": "Rowling"
      }
    }
  }
}
The single GQL query resulted in two separate fetch-by-id calls into the database. When a single logical query turns into multiple physical queries we can quickly run into the infamous N+1 problem.
The N+1 Problem
In our case above a book can only have one author. If we only query one book by ID we only get a "read amplification" against our database of 2x. Imagine that you can query books with a title that starts with a prefix:
type Query {
  booksByTitleStartsWith(titlePrefix: String): [Book]
}
Then we call it asking it to fetch the books with a title starting with "Harry":
{
  booksByTitleStartsWith(titlePrefix: "Harry") {
    id
    title
    pageCount
    author {
      firstName
      lastName
    }
  }
}
In this GQL query we will fetch the books by a database query of title like 'Harry%' to get many books including the authorId of each book. It will then make an individual fetch by ID for every author of every book. This is a total of N+1 queries where the 1 query pulls back N records and we then make N separate fetches to build up the full picture.
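To make the mechanics concrete, here is a rough sketch of what the naive resolvers might look like on a JavaScript GraphQL server (context.db and the table names are illustrative, not part of the demo project):

// One database call for the book list, then one more per book for its author.
const resolvers = {
  Query: {
    booksByTitleStartsWith: (root, args, context) =>
      context.db
        .query('SELECT * FROM book WHERE title LIKE $1', [args.titlePrefix + '%'])
        .then(result => result.rows), // the 1 query returning N books
  },
  Book: {
    author: (book, args, context) =>
      context.db
        .query('SELECT * FROM author WHERE id = $1', [book.authorId])
        .then(result => result.rows[0]), // called N times, once per book
  },
};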
The easy fix for that example is to not expose an author field on Book, and instead force the person using your API to fetch all the authors in a separate authorsByIds query, so we give them two queries:
type Query {
  booksByTitleStartsWith(titlePrefix: String): [Book]  # <- single database call
  authorsByIds(authorIds: [ID]): [Author]              # <- single database call
}

type Book {
  id: ID
  title: String
  pageCount: Int
}

type Author {
  id: ID
  firstName: String
  lastName: String
}
The key thing to note about that last example is that there is no way in that model to walk from one entity type to another. If the person using your API wants to load the books' authors at the same time, they simply call both queries in a single post:
query {
  booksByTitleStartsWith(titlePrefix: "Harry") {
    id
    title
  }
  authorsByIds(authorIds: ["author-1", "author-2", "author-3"]) {
    id
    firstName
    lastName
  }
}
Here the person writing the query (perhaps using JavaScript in a web browser) sends a single GraphQL post to the server asking for both booksByTitleStartsWith and authorsByIds to be passed back at once. The server can now make two efficient database calls.
This approach shows that there is "no magic bullet" for how to map the "logical model" to the "physical model" when it comes to performance. This is known as the Object–relational impedance mismatch problem. More on that below.
Is Fetch-By-ID So Bad?
Note that the default behaviour of GraphQL is still very helpful. You can map GraphQL onto anything. You can map it onto internal REST APIs. You can map some types into a relational database and other types into a NoSQL database. These can be in the same schema and the same GraphQL end-point. There is no reason why you cannot have Author stored in Postgres and Book stored in MongoDB. This is because GraphQL doesn't by default "join in the datastore" it will fetch each type independently and build the response in memory to send back to the client. It may be the case that you can use a model that only joins to a small dataset that gets very good cache hits. You can then add caching into your system and not have a problem and benefit from all the advantages of GraphQL.
What About ORM?
There is a project called Join Monster which does look at your database schema, looks at the runtime GraphQL query, and tries to generate efficient database joins on-the-fly. That is a form of Object Relational Mapping which sometimes gets a lot of "OrmHate". This is mainly due to the Object–relational impedance mismatch problem.
In my experience, any ORM works if you write the database model to exactly support your object API. In my experience, any ORM tends to fail when you have an existing database model that you try to map with an ORM framework.
IMHO, if the data model was optimised without any thought for ORM or queries (for example, to conserve space in classical third normal form), then avoid ORM. My recommendation there is to avoid querying the main data model directly and to use the CQRS pattern. See below for an example.
What Is Practical?
If you do want to use pseudo-joins in GraphQL but you hit an N+1 problem, you can write code to map specific "field fetches" onto hand-written database queries. Carefully performance test using realistic data whenever any field returns an array.
Even when you can put in hand-written queries you may hit scenarios where those joins don't run fast enough. In that case, consider the CQRS pattern and denormalise some of the data model to allow for fast lookups.
Update: GraphQL Java "Look-Ahead"
In our case we use graphql-java and pure configuration files to map DataFetchers to database queries. There is some generic logic that looks at the graph query being run and calls parameterized SQL queries defined in a custom configuration file. We saw the article Building efficient data fetchers by looking ahead, which explains that you can inspect at runtime what the person who wrote the query selected to be returned. We can use that to "look ahead" at what other entities we will be asked to fetch to satisfy the entire query. At that point we can join the data in the database and pull it all back efficiently in a single database call. The graphql-java engine will still make N in-memory fetches to our code. The N requests to get the author of each book are satisfied by simple lookups in a hashmap that we loaded out of the single database call that joined the author table to the books table, returning N complete rows efficiently.
Our approach might sound a little like ORM yet we did not make any attempt to make it intelligent. The developer creating the API and our custom configuration files has to decide which graphql queries will be mapped to what database queries. Our generic logic just "looks-ahead" at what the runtime graphql query actually selects in total to understand all the database columns that it needs to load out of each row returned by the SQL to build the hashmap. Our approach can only handle parent-child-grandchild style trees of data. Yet this is a very common use case for us. The developer making the API still needs to keep a careful eye on performance. They need to adapt both the API and the custom mapping files to avoid poor performance.
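For readers on a JavaScript stack, a rough analogue of the look-ahead idea might look like the sketch below. It assumes graphql-js style resolvers, where the fourth info argument exposes the selection set, plus an illustrative context.db and a per-request map; it is not our graphql-java implementation.

const resolvers = {
  Query: {
    booksByTitleStartsWith: async (root, args, context, info) => {
      // Look ahead at what the client actually selected under this field.
      const selected = info.fieldNodes[0].selectionSet.selections
        .map(sel => sel.name.value);
      const wantsAuthor = selected.includes('author');

      // If authors will be needed, join them up front in a single database call.
      const sql = wantsAuthor
        ? 'SELECT b.*, a.first_name, a.last_name FROM book b ' +
          'JOIN author a ON a.id = b.author_id WHERE b.title LIKE $1'
        : 'SELECT b.* FROM book b WHERE b.title LIKE $1';
      const { rows } = await context.db.query(sql, [args.titlePrefix + '%']);

      if (wantsAuthor) {
        // Stash the joined author columns in a per-request hashmap keyed by book id.
        rows.forEach(row => context.authorsByBookId.set(row.id, {
          firstName: row.first_name,
          lastName: row.last_name,
        }));
      }
      return rows;
    },
  },
  Book: {
    // The N per-book fetches become in-memory map lookups, not database calls.
    author: (book, args, context) => context.authorsByBookId.get(book.id),
  },
};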
GraphQL as a query language on the front-end does not support 'joins' in the classic SQL sense.
Rather, it allows you to pick and choose which fields in a particular model you want to fetch for your component.
To query all phones in your dataset, your query would look like this:
query myComponentQuery {
phone {
id
brand
model
price
}
}
The GraphQL server that your front-end is querying would then have individual field resolvers - telling GraphQL where to fetch id, brand, model etc.
The server-side resolver would look something like this:
Phone: {
  id(root, args, context) {
    // Resolvers can return a promise; GraphQL waits for it before building the response.
    // (Assumes node-postgres style parameter placeholders.)
    return pg.query('SELECT id FROM phones WHERE id = $1', [args.id])
      .then(result => result.rows[0].id);
    // OR fetch the value from an upstream service instead:
    // return fetch(context.upstream_url + '/thing/' + args.id)
    //   .then(response => response.json())
    //   .then(thing => thing.id);
  },
  price(root, args, context) {
    return 9001;
  },
},

What is the best practice when displaying data from more than one table

I have three tables, 'sessions', 'classes' and 'schedules', which are connected to each other.
sessions: id, name, descr
classes: id, session_id, name
schedules: class_id, session_id, date
A class belongs to a session, while schedules is an N:M relation which makes it possible to have a particular date for each session within a single class.
My problem comes when I have to display this information. I have a function which displays all Sessions:
$sessions = Session::all();
and I have another function which displays the date of a specific class and a specific session as below:
$result = Schedule::where('class_id', '=', $classId)->where('session_id', '=', $sessionId)->first();
So let's say I have 30 sessions for a single class. When it comes to my front-end app, which is written in AngularJS, I don't know how to handle the display: iterating through all the sessions with ng-repeat and then making another call within each ng-repeat iteration to fetch the schedule and display the session's date doesn't seem like good practice in AngularJS.
Could anyone tell me what would be the best option to handle this problem? Should I modify the back-end, e.g. edit the Session::all() query to also include the Schedule table? Or what is the best way?
I suppose you have already configured your relations in the models; if not, look here.
As for me, I use Fractal to customize the displayed data. There is also a convenient feature called available includes.
So you can request your data like /sessions/?include=classes and get output
{
  data: [{
    session_id: 1,
    some: "data",
    classes: [{
      class_id: 11,
      some: "class_data"
    }]
  }]
}
I would "eager load" the data, so you can access all the object's parents through the object you loaded. This way you can fill your table rows one by one by just iterating over 1 object.
There is excellent documentation about eager loading at the Laravel website, so I suggest you start there

Making code using JavaScript for dependent selects RESTful

Ruby 2.0.0, Rails 4.0.3, Windows 8.1 Update, PostgreSQL 9.3.3
I have code that uses JavaScript to power dependent selects. To do so, it references a controller method that retrieves the data for the following select. I'm told that, because that method is non-standard, this is not RESTful.
I understand that REST is a set of specific constraints regarding client/server communications. I've read some information about it but certainly don't have in-depth knowledge. I am curious about the impact and resolution. So, regarding the question about my configuration and REST: First, would it be accurate to say that it is not RESTful? Second, how does that impact my application? Third, what should/could I do to resolve that? Providing one example:
The route is: (probably the concern?)
post 'cars/make_list', to: 'cars#make_list'
This is the first select: (OBTW, I use ERB but removed less than/percent)
= f.input(:ymm_year_id, {input_html: {form: 'edit_car', car: @car, value: @car.year}, collection: YmmYear.all.order("year desc").collect { |c| [c.year, c.id] }, prompt: "Year?"})
This is the dependent select:
= render partial: "makes", locals: {form: 'edit_car', car: @car}
This is the partial:
= simple_form_for car,
    defaults: {label: false},
    remote: true do |f|
  makes ||= ""
  make = ""
  make = car.make_id if car.class == Car and Car.exists?(car.id)
  if !makes.blank?
    = f.input :ymm_make_id, {input_html: {form: form, car: car, value: make}, collection: makes.collect { |s| [s.make, s.id] }, prompt: "Make?"}
  else
    = f.input :ymm_make_id, {input_html: {form: form, car: car, value: make}, collection: [], prompt: "Make?"}
  end
end
JS:
$(document).ready(function () {
  ...
  // when the #year field changes
  $("#car_ymm_year_id").change(function () {
    var year = $('select#car_ymm_year_id :selected').val();
    var form = $('select#car_ymm_year_id').attr("form");
    var car = $('select#car_ymm_year_id').attr("car");
    $.post('/cars/make_list/',
      {
        form: form,
        year: year,
        car: car
      },
      function (data) {
        $("#car_ymm_make_id").html(data);
      });
    return false;
  });
  ...
});
And the method:
def make_list
  makes = params[:year].blank? ? "" : YmmMake.where(ymm_year_id: params[:year]).order(:make)
  render partial: "makes", locals: {car: params[:car], form: params[:form], makes: makes}
end
If I had to describe it, being RESTful means that:
You provide meaningful resources names
You use the HTTP verbs to express your intents
You make proper use of HTTP codes to indicate status
Provide meaningful resource names
As you probably heard it before, everything in REST is about resources. But from the outside, it's just the paths you expose. Your resources are then just a bunch of paths such as:
GET /burgers # a collection of burgers
GET /burger/123 # a burger identified with id 123
GET /burger/123/nutrition_facts # the nutrition facts of burger 123
POST /burgers # with data: {name: "humble jack", ingredients: [...]} to create a new burger
PUT /burger/123 # with data: {name: "chicken king"} to change the name of burger 123
For instance, if you had a path with the url
GET /burger_list?id=123
That would not be considered good practice.
It means you need to think hard about the names you give your resources to make sure the intent is explicit.
Use HTTP verbs to express your intents
It basically means using:
GET to read a resource identified by an identifier (id) or a collection of resources
PUT to update a specific resource that you identify by an identifier (id)
DELETE to destroy a specific resource that you identify by an id
POST to create a new resource
Usually, in Rails, those verbs are, by convention, used to map specific actions in your controller.
GET goes to show or index
PUT goes to update
DELETE goes to destroy
POST goes to create
That's why people usually say that if you have actions in your controllers that don't follow that pattern, you're not "RESTful". But in the end, only the routes you expose count. Not really your controller actions. It is a convention of course, and conventions are useful for readability and maintainability.
You make proper use of HTTP codes to indicate status
You already know the usual suspects:
200 means OK, everything went fine.
404 means NOT FOUND, could not find resource
401 means UNAUTHORIZED, authentication failed, auth token invalid
500 means INTERNAL SERVER ERROR, in other words: kaput
But there are more that you could be using in your responses:
201 means CREATED, it means the resource was successfully created
403 means FORBIDDEN, you don't have the privileges to access that resource
...
You get the picture, it's really about replying with the right HTTP code that represents clearly what happens.
Answering your questions
would that be accurate that it is not RESTful?
From what I see, the first issue is your path.
post 'cars/make_list', to: 'cars#make_list'
What I understand is that you are retrieving a collection of car makes. Using a POST to retrieve a collection is against REST rules; you should be using a GET instead. That should answer your first question.
how does that impact my application?
Well, the impact of not being restful in your case is not very big. It's mainly about readability, clarity and maintainability. Separating concerns and putting them in the right place etc... It's not impacting performance, nor is it a dangerous issue. You're just not RESTful and that makes it more complicated to understand your app, in your case of course.
what should/could I do to resolve that?
Besides the route problem, the other issue is that your action is called make_list and that doesn't follow Rails REST conventions. Rails has a keyword to create RESTful routes:
resources :car_makes, only: [:index] # GET /car_makes , get the list of car makes
This route expresses your intent much better than the previous one and is now a GET request. You can then use query parameters to filter the results. But it means we need to create a new controller to deal with it.
class CarMakesController < ApplicationController
  def index
    makes = params[:year].blank? ? "" : YmmMake.where(ymm_year_id: params[:year]).order(:make)
    render partial: "makes", locals: {car: params[:car], form: params[:form], makes: makes}
  end

  private

  # Strong parameters stuff...
end
And of course we also need to change your jquery to make a GET request instead of a POST.
$(document).ready(function () {
  ...
  // when the #year field changes
  $("#car_ymm_year_id").change(function () {
    // ...
    $.get('/car_makes', {
      form: form,
      year: year,
      car: car
    }, function (data) {
      $("#car_ymm_make_id").html(data);
    });
    return false;
  });
  ...
});
This is a much better solution, and it doesn't require too much work.
There is an excellent tutorial on REST on REST API tutorial, if you want to know more about the specifics. I don't know much about the small details, mostly what is useful on a day to day basis.
Hope this helps.

Backbone Sub-Collections & Resources

I'm trying to figure out a Collection/Model system that can handle retrieving data given the context it's asked from, for example:
Available "root" resources:
/api/accounts
/api/datacenters
/api/networks
/api/servers
/api/volumes
Available "sub" resources:
/api/accounts/:id
/api/accounts/:id/datacenters
/api/accounts/:id/datacenters/:id/networks
/api/accounts/:id/datacenters/:id/networks/:id/servers
/api/accounts/:id/datacenters/:id/networks/:id/servers/:id/volumes
/api/accounts/:id/networks
/api/accounts/:id/networks/:id/servers
/api/accounts/:id/networks/:id/servers/:id/volumes
/api/accounts/:id/servers
/api/accounts/:id/servers/:id/volumes
/api/accounts/:id/volumes
Then, given the Collection/Model system, I would be able to do things like:
// get the first account
var account = AccountCollection.fetch().first()
// get only the datacenters associated to that account
account.get('datacenters')
// get only the servers associated to the first datacenter's first network
account.get('datacenters').first().get('networks').first().get('servers')
Not sure if that makes sense, so let me know if I need to clarify anything.
The biggest kicker as to why I want to be able to do this is that if the request being made (i.e. account.get('datacenters').first().get('networks')) hasn't been made yet (the networks of that datacenter aren't loaded on the client), it should be made at that point (or the data can be fetch()d, perhaps?).
Any help you can give would be appreciated!
You can pass options to fetch that will be translated to querystring params.
For example:
// get the first account
var account = AccountCollection.fetch({data: {pagesize: 1, sort: "date_desc"}});
Would translate to:
/api/accounts?pagesize=1&sort=date_desc
It is not quite a fluent DSL but it is expressive and efficient since it only transmits the objects requested rather than filtering post fetch.
Edit:
You can lazy load your sub collections and use the same fetch params technique to filter down your list by query string criteria:
var Account = Backbone.Model.extend({
  initialize: function() {
    this.datacenters = new Datacenters;
    this.datacenters.url = "/api/account/" + this.id + '/datacenters';
  }
});
Then from an account instance:
account.datacenters.fetch({data: {...}});
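If you also want the "only fetch it if it isn't loaded yet" behaviour from the question, a small helper along these lines could work (a sketch; getDacenters is a made-up name -- here called getDatacenters -- and it reuses jQuery's Deferred so both branches return a promise):

// Hypothetical helper: fetch the sub-collection only the first time it is needed.
Account.prototype.getDatacenters = function () {
  if (this.datacenters.length > 0) {
    // Already on the client; hand back a resolved promise with what we have.
    return $.Deferred().resolve(this.datacenters).promise();
  }
  var self = this;
  return this.datacenters.fetch().then(function () {
    return self.datacenters;
  });
};

// Usage from an account instance:
account.getDatacenters().then(function (datacenters) {
  console.log(datacenters.pluck('name'));
});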
Backbone docs on fetching nested models and collections

Backbone-relational: Association key won't work unless it's the same as the foreign key

I'm trying to get the backbone-relational plugin working with an association between tasks and messages. (A task has many messages).
The information is pulled from a standard rails/activerecord site, which has a task_id field as the foreign key.
The problem is, backbone-relational won't populate the 'messages' field with any messages on the Task model unless I set the key as "task_id" in the reverse relation...but that means that, when accessing the task from the Message model, the task_id field is populated with the actual task object, not the 'task_id' integer, which is overwritten.
I'm guessing there's a simple way to specify task_id as the foreign key with which to determine the parent task, yet have the object that key represents placed in a different field (eg 'task' on the messages object)...but I can't figure out how. Any ideas appreciated. Code below
class Backbonescaffolddemo.Models.Task extends Backbone.RelationalModel
  paramRoot: 'task'

  relations: [{
    type: Backbone.HasMany,
    key: "messages",
    relatedModel: "Backbonescaffolddemo.Models.Message",
    collectionType: "Backbonescaffolddemo.Collections.MessagesCollection",
    includeInJSON: true
    reverseRelation: {
      key: "task_id"
      includeInJSON: true
    }
  }]
You may be able to use keySource or keyDestination to address your particular problem.
Example
In the following example, suppose we are getting data from an old-school relational database, where there is a one-to-many relationship between Monster and Loot_Item. This relationship is expressed by a Monster_Id foreign key in the Loot_Item table. Let us also suppose that our REST service doesn't do any fancy-pants data nesting for us, since that seems to match the situation in your question fairly closely.
keySource
Now, let's set "keySource" to my foreign key ("Monster_Id") and "key" to the name of the attribute where I want the actual data to go (say, "Monster"). If you break in the debugger, you will see in the attributes object that there is, in fact, a field called "Monster", and that it does point to the monster model data. Hey, cool!
includeInJSON
However, if you toJSON that puppy, guess what? It has put all the monster data in Monster_Id, just like you didn't want! GAH! We can fix that by setting "includeInJSON" to "Monster_Id". Now, when your data is serialized to JSON to send up to the server, it puts the proper ID back into the Monster_Id field.
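Putting that together, the relation for the Monster / Loot_Item example could be configured roughly like this (a sketch in plain Backbone-relational JavaScript; it assumes Monster's id attribute really is "Monster_Id"):

// Illustrative setup for the example above.
var Monster = Backbone.RelationalModel.extend({
  idAttribute: 'Monster_Id'
});

var LootItem = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasOne,
    relatedModel: Monster,
    key: 'Monster',              // attribute that will hold the related Monster model
    keySource: 'Monster_Id',     // foreign key field as it arrives from the server
    includeInJSON: 'Monster_Id'  // serialize just the Monster's id back out
  }]
});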
Problem solved? Er, well, actually, not necessarily...
CAVEAT: This all sounds super-useful, but there's one fairly glaring problem that I have found with this scenario. If you are using a templating engine (such as the one in Underscore.js) that requires you to convert your model to JSON, before passing it into the template, whoops -- you don't have access to your relational data. Alas, the JSON that we want for our messages is not necessarily the same JSON that we want to feed into our templates.
If you want the "task_id" in the message JSON to be the id, not the full JSON for the task, then set the "includeInJSON" to be the Task's ID property ("task_id")
class Backbonescaffolddemo.Models.Task extends Backbone.RelationalModel
  paramRoot: 'task'

  relations: [{
    type: Backbone.HasMany,
    key: "messages",
    relatedModel: "Backbonescaffolddemo.Models.Message",
    collectionType: "Backbonescaffolddemo.Collections.MessagesCollection",
    includeInJSON: true
    reverseRelation: {
      key: "task_id"
      includeInJSON: "task_id"
    }
  }]
The "true" value for includeInJSON says to use the full JSON for the related model.
Edit: After re-reading your question, I'm not sure my answer relates to your issue.
My original answer is for posting a message back to the server where you want the JSON to be something like:
{
"message_title": "My Title",
"message_body": "Blah blah blah...",
"task_id": 12345
}
I'm not sure what exactly you're looking to happen, but the way that Backbone Relational is supposed to work is that the Task's collection of messages will be a collection of the full models, so you can iterate over them and pass them to views for rendering, etc.
If you want to output one of the Message's id's in a template or something, then you'd take the Message model's "id":
myTask.get('messages').first().id -> returns the first message's id
