Send only changed properties to Breeze's SaveChanges() method - javascript

When saving changes in BreezeJS, since I have entities with lots of fields, I would like to send to the server (to Breeze's SaveChanges() method) not the entire entities but only a subset containing just the properties that have changed. I know this is what the OriginalValuesMap property inside the entities is there for; the goal is simply to reduce network traffic and improve performance (though the improvement may be very small). I couldn't find anything about this on the official BreezeJS website, nor elsewhere on the internet. Thanks.

We ran into a similar requirement for a different reason. One section of our app interfaces with a third-party API that insists on deltas for PUTs (sending full entities causes server errors or serious performance issues on their end for whatever reason).
We ended up rolling a new data service adapter to address this, and it was a relatively painless process. We extended directly off of the base AbstractDataServiceAdapter, but you may be able to get away with a custom _prepareSaveBundle on top of whichever concrete data service adapter you happen to be using.
You'd just have to register the custom adapter:
ctor = ->
  @name = 'custom_ds'
ctor.prototype = new breeze.AbstractDataServiceAdapter() # or whatever your base is
ctor.prototype._prepareSaveBundle = (saveContext, saveBundle) ->
  # Do whatever your base implementation does, but use helper.unwrapChangedValues
  # instead of helper.unwrapInstance to get at the delta
breeze.config.registerAdapter 'dataService', ctor
And then bind your entity manager to a data service that uses it:
breeze.config.initializeAdapterInstance 'dataService', 'custom_ds'
ds = new breeze.DataService
adapterName: 'custom_ds'
# plus whatever other properties you need to init
manager = new breeze.EntityManager
dataService: ds
# plus whatever other properties you need to init
But if you're just doing this to shrink your payloads, it's probably not worth the hassle and added brittleness for all of the reasons that Jay Traband called out.
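For a sense of what unwrapping changed values gets you, here is a minimal plain-JavaScript sketch (using a mocked-up entity shape, not Breeze's real internals): the changed property names come from the original-values map, while the new values come from the entity itself.

```javascript
// Build a save delta from an entity, assuming a Breeze-like shape where
// entityAspect.originalValues maps each changed property to its pre-edit value.
function buildDelta(entity) {
  const delta = {};
  for (const prop of Object.keys(entity.entityAspect.originalValues)) {
    delta[prop] = entity[prop]; // current (changed) value
  }
  return delta;
}

// Usage with a mock entity: only firstName was edited.
const employee = {
  id: 42,
  firstName: "Ada",
  lastName: "Lovelace",
  entityAspect: { originalValues: { firstName: "Adda" } }
};
console.log(buildDelta(employee)); // { firstName: "Ada" }
```

Only the single edited property ends up in the payload; everything else stays home.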

We deliberately decided not to do this, because we felt that the performance improvement was not worth the additional complexity. We made this decision based on several considerations.
It would only be useful for modifications and deletions; additions would still need to carry every field.
In most applications, save payloads tend to be much smaller than query payloads.
Standard HTTP compression makes even the largest of these payloads much smaller.
We have been building RIA applications across a range of technologies for a number of years and in our experience optimizing the save payload rarely gives much in the way of overall application performance gains.
But... please feel free to add this request to the Breeze User Voice. As with other requests, if enough of our users think this is important, then we will do it.

Related

Rest api design with relational data

So this question is less of a problem I have and more of a question about how I should go about implementing something.
Let's imagine, for example, that I have a User and a Resource, where a User can have multiple Resources but a Resource can have only one User. How should you go about creating API endpoints for interacting with this data?
should it be something like
// POST /api/users/resource (to create a resource)
or something like
// POST /api/resource
That's just one example, but there are a lot of questions like that which come to mind when I'm thinking about this.
It would be nice if someone who knows the right approach (or just a good approach) could give an example of how to structure API endpoints for relational data like this.
Any and all help is appreciated, thanks!
I would go with the latter. The reason is that the endpoint /api/resource does not bind us to creating resources with respect to the user. Down the line, we could create resources for a Supplier (a hypothetical example), thus gaining flexibility and avoiding the need to change the endpoint for Supplier.
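To make that flexibility argument concrete, here is a small sketch in plain JavaScript (createResource, db, and the payload shape are all invented for illustration): the owner is part of the payload rather than the URL, so the endpoint never needs to change when new owner types appear.

```javascript
// Sketch of the "latter" style: one generic create endpoint where the
// owner is referenced in the payload, not baked into the URL.
function createResource(db, payload) {
  if (!payload.ownerType || !payload.ownerId) {
    throw new Error("resource must reference an owner");
  }
  const resource = { id: db.nextId++, ...payload };
  db.resources.push(resource);
  return resource;
}

const db = { nextId: 1, resources: [] };
// POST /api/resource with a User owner today...
createResource(db, { ownerType: "user", ownerId: 7, name: "disk" });
// ...and with a Supplier owner later, with no endpoint change needed.
createResource(db, { ownerType: "supplier", ownerId: 3, name: "paper" });
console.log(db.resources.length); // 2
```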
Part of the point of REST is that the server's implementation of a resource is hidden behind the uniform interface. In a sense, you aren't supposed to be able to tell from the resource identifiers whether or not you are dealing with "relational data".
Which is freeing (because you get to design the best possible resource model for your needs); but also leads to analysis-paralysis, because the lack of constraints means that you have many options to choose from.
POST /api/users/resource
POST /api/resource
Both of these are fine. The machines are perfectly happy to carry either message. If you wanted to implement an API that supported both options, that would also be OK.
So how do we choose?
The answer to this really has two parts. The first relates to understanding resources, which are really just generalizations of documents. When we ask for a document on the web, one of the things that can happen is that the document gets cached. If we are sending a message that we expect to modify a document, then we probably want caches to invalidate previously cached versions of that document.
And the primary key used to identify cached documents? The URI.
In the case where we are sending a message to a server to save a new document, and we expect the server to choose its own identifier for its copy of the new document, then one logical choice of request target is the resource that is the index of documents on the server.
This is why you will normally see CreateItem operations implemented as POST handlers on a Collection resource - if the item is successfully added, we want to invalidate previously cached responses to GET /collection.
Do you have to do it that way? No, you do not - it's a "trade off"; you weigh the costs and benefits of the options, and choose one. If you wanted to instead have a separate resource for the CreateItem operation, that's OK too.
The second part of the answer relates to the URI - having decided which document should be handling the requests, what spelling should we use for the identifier of that document.
And, once again, the machines don't care very much. It needs to be RFC 3986 compliant, and you'll save yourself a lot of bother if you choose a spelling that works well with URI Templates, but that still leaves you with a lot of freedom.
The usual answer? Think about the people, who they are, and what they are doing when they are looking at a URI. You've got visitors looking at a browser history, and writers trying to document the API, and operators reading through access logs trying to understand the underlying traffic patterns. Pick a spelling that's going to be helpful to the people you care about.

What are "use-cases" in the Clean Architecture?

I am trying to implement the Clean Architecture structure in an app that I am developing and I am having a hard time figuring out exactly what is what.
For example, if I am right, the entities of my application are Employee, Department, and EmployeeSkill; the entities also include all of the "validation" logic to ensure that they are valid.
And the use-cases are the various actions that I can do with these entities?
For example, use-cases about the Employee:
add-employee.js
remove-employee-by-id.js
update-employee-department.js
update-employee-phone-number.js
...and-more-employee-updates.js
Are these all actually use-cases?
I don't think the add and remove cases leave much room for discussion, but what about the updates? Should they be granular like this?
Also, with such an architecture, doesn't that mean that if I want to update both the employee's department and phone number at the same time, I will have to make two separate calls to the database for something that could be done with one, because the database adapter is injected into the use-case and every use-case starts by "finding" the entity in the database?
Defer thinking about the entities for now. Often you get stuck trying to model your code after your mental model of the world, and that is not as helpful as we are led to believe.
Instead, couple code that changes together for one reason into use-cases. A good start can be one use-case for each CRUD operation in the GUI. Whether something ends up as a new method, a new parameter, or a new class, etc. is not part of the CA pattern; that is part of the normal trade-offs you face when you write code.
I can't see any entities in your example. In my code base, ContactCard (I work on a yellow-pages-in-2021 kind of app) and UserContext (security) are the only entities; these two things are used all over the place.
Other things are just data holders and not really an entity. I have many duplicates of the data holders so that things that are not coupled, stay uncoupled.
Your repository should probably implement the bridge pattern. That means the business logic defines a bridge that some repository implements. The use-case is not aware of database tables, so it does not have granular requirements (think of it as ordering food at McDonald's: you won't say "from the grill I want xxx, and from the fryer I want yyy").
The use-case is very demanding in the bridge definition. So much so that many repositories end up having API layers that import and manage the implementation of the bridge, and then adapt it to the internal logic.
This is the difference between API layers in business apps and most B2C APIs. An enterprise API for a use-case is just what that use-case needs.
If you have already constrained yourself with a made-up model of the world, and decided to split repositories along that model instead of per use-case, then you end up with poor alignment. Having the same SQL query, or parts of it, in more than one repository is not an issue. Over time the queries often end up looking different even when they start out very similar.
I would call your example use-case UpdatePhoneNumberEverywhere. The UpdatePhoneNumberEverywhereRepository implementation can then do whatever the heck it wants; that is a detail the use-case does not care about.
Another one I might do is UpdatePhoneNumber, where the use-case accepts a strategy: Strategy.CASCADE or Strategy.LEAF, etc.
In terms of your table design, even though it's a detail, Contacts probably deserves to be broken out.
Not every use-case starts by finding something in the database. Commands and queries are passed in (or whatever you call them), and the use-case does something useful.
The most practical way to write a use-case is to implement exactly what the business requirement needs and write all the tests against the use-case's public API. Just pass in data as a start; a dictionary is often fine.
Entities are often found later, when you cannot stand having so many versions of something: you need that something to be stable and the same all over, and your end users expect as much too. Then just refactor.
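To sketch the bridge idea from above in plain JavaScript (every name here is hypothetical): the use-case defines exactly the repository interface it demands, and any implementation may satisfy it however it likes.

```javascript
// The use-case demands exactly this interface (the "bridge"); how a
// repository fulfils it (one SQL query, three tables, an API call) is a detail.
function makeUpdatePhoneNumberEverywhere(repository) {
  return function updatePhoneNumberEverywhere(employeeId, phoneNumber) {
    // business rule lives in the use-case, not in the repository
    if (!/^\+?[0-9 -]{5,}$/.test(phoneNumber)) {
      throw new Error("invalid phone number");
    }
    repository.updatePhoneNumberEverywhere(employeeId, phoneNumber);
  };
}

// An in-memory repository implementing the bridge, e.g. for tests.
const store = { 1: { phone: "555-0100" } };
const repo = {
  updatePhoneNumberEverywhere(id, phone) { store[id].phone = phone; }
};
const useCase = makeUpdatePhoneNumberEverywhere(repo);
useCase(1, "555-0199");
console.log(store[1].phone); // "555-0199"
```

Because the repository is injected, swapping the in-memory version for one that cascades across several tables changes nothing in the use-case.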

Reflections for REST API

I'm trying to build an (as close as it gets) generic REST API to simplify all the standard CRUD REST calls. For example, I'd like to write one read method:
models.{Entity}.findById(id)
  .exec(function(err, entity) {
    res(err, entity);
  });
{Entity} should be dynamically filled with a param from the rest call e.g.:
GET /api/v1/entity/:id/:type
GET /api/v1/entity/1234567890/user
Obviously I could do a semi-generic solution like this:
if (type === 'user') var query = models.User;
But that's not really a nice solution, in my opinion.
Questions
Is there an easy way to implement this, and would it be viable in a bigger application? From everything I know about reflection in other languages, its performance isn't that great.
Does anyone have other recommendations on how I could implement such a framework?
Solution:
Just like Daniel suggested, I created a Map:
var map = new Map();
map.set('user', models.User);
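A sketch of how that map can back a generic read handler (the models and handler here are mocked up for illustration; a real app would register its ORM models instead):

```javascript
// Map each :type URL parameter to its model, then resolve generically.
// Mock models stand in for real ORM models here.
const models = {
  User: { findById: (id) => ({ id, kind: "user" }) },
  Order: { findById: (id) => ({ id, kind: "order" }) }
};

const map = new Map();
map.set("user", models.User);
map.set("order", models.Order);

// Generic handler for GET /api/v1/entity/:id/:type
function findEntity(type, id) {
  const model = map.get(type);
  if (!model) throw new Error("unknown type: " + type);
  return model.findById(id);
}

console.log(findEntity("user", "1234567890")); // { id: "1234567890", kind: "user" }
```

The map also doubles as a whitelist: a request for a type you never registered fails fast instead of reaching into arbitrary parts of the model layer.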
Reflection performance is all over the map when it comes to different reflection techniques, languages, language implementations, and underlying hardware/OS platforms. While Java has been notoriously poor, other languages incur negligible overhead. TL;DR: I wouldn't rule it out unless you have evidence it will really slow you down.
In this case, being JavaScript, I think you can just use a dynamic property lookup such as models[typeName]; it's the same as models.User and so on, but generic.
If there really were a cost to reflection, you could still handle it generically by memoizing the result, i.e. compute it once generically and cache the resulting class indefinitely.
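If the lookup really were expensive, the memoization suggested above could look like this (resolveModel is a stand-in for whatever costly reflective work you would otherwise repeat per request):

```javascript
// Cache the result of an expensive type -> class resolution so the
// cost is paid once per type, not once per request.
function memoize(fn) {
  const cache = new Map();
  return function (key) {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key);
  };
}

let lookups = 0;
const resolveModel = memoize(function (type) {
  lookups++; // pretend this line is a costly reflective lookup
  return { type };
});

resolveModel("user");
resolveModel("user");
resolveModel("user");
console.log(lookups); // 1
```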
You are trying to solve a problem that can be avoided. If your server provided a REST API instead of an HTTP API (know the difference!), you would not need to construct URLs, because the server would tell you what you can do next through hypermedia controls in the response. Read up on HATEOAS and hypermedia if you are interested.
As hypermedia is all about the MIME-type of the responses, a generic client is usually built around a single MIME-type. (To name a few: HAL, UBER, Siren, Collection+JSON).

What is the difference between making several simple subscriptions and a single complex one?

Is there any practical difference between keeping several simple (plain) subscriptions and keeping a single complex (many levels) one? (with publish-composite, for example)
Seems to me that there shouldn't be any difference, but I wanted to be sure. I prefer sticking to plain subs as it seems to make the code clearer in highly modular projects, but only if that wouldn't bring any performance or scalability issues.
So, can someone help me?
There are two key differences between doing several plain subscriptions and keeping a single complex composite subscription:
1) Exposure/Privacy
A composite subscription allows you to perform joins/filters on the server side to ensure that you only send data that the current user has authority to see. You don't want to expose your entire database to the client. Keep in mind that even if your UI is not showing the data, the user can go into the console and grab all the data that your server publishes.
2) Client performance
Performing joins/filters on the client can be expensive if you have a large dataset; this is of course dependent on your application. Additionally, if the database is constantly being updated with changes that should not be visible to the user, you will constantly transfer those updates to the client without deriving any benefit from the network expense.
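As a framework-agnostic sketch of the exposure point (the data, collections, and function names here are all invented): a server-side publish function can join, filter, and strip fields before anything leaves the server, so the console-savvy user never sees what they shouldn't.

```javascript
// Server-side join + filter: only documents (and fields) the current
// user may see are ever sent over the wire.
const comments = [
  { id: 1, postId: "p1", authorId: "u1", body: "hi", secretFlag: true },
  { id: 2, postId: "p2", authorId: "u2", body: "yo", secretFlag: false }
];
const memberships = { u1: ["p1"] }; // which posts each user may read

function publishComments(userId, postId) {
  const allowed = memberships[userId] || [];
  if (!allowed.includes(postId)) return []; // authorization check on the server
  return comments
    .filter((c) => c.postId === postId)
    .map(({ secretFlag, ...visible }) => visible); // strip private fields
}

console.log(publishComments("u1", "p1").length); // 1
console.log(publishComments("u2", "p1").length); // 0
```

With client-side joins, by contrast, the generic publishers would have to ship enough raw data for the client to do this work itself.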
I think this question can't be given a precise answer without more details specific to your application. That being said, it's an important question, so I'll outline some things to consider.
To be clear, the focus of this answer will be debating the relative merits of server-side and client-side reactive joins.
decide if you need reactivity
You can produce a simple join of multiple collections without any reactivity in the publisher (see the first example from the article above). Depending on the nature of the problem, it may be that you don't really need a reactive join. Imagine you are joining comments and authors, but your app always has all of the possible authors published already. In that case the fundamental flaw in non-reactive joins (missing child documents after a new parent) won't exist, so a reactive publication is redundant.
consider your security model
As I mention in my article on template joins, server-side joins have the advantage of bundling all of your data together, whereas client-joins require more granular publishers. Consider the security implications of having a publisher like commentsAndAuthors vs two generic implementations of comments and users. The latter suggests that anyone could request an array of user documents without context.
server joins can be CPU and memory hogs
Look carefully at the implementation of the library you are considering for your server-side joins. Some of them use observe which requires that each complete document in the dependency chain be kept in memory. Others are implemented only on observeChanges which is more efficient but makes packages a bit less flexible in what they can do.
look for observer reuse
One of your goals should be to reuse your observers. In other words, given that you will have S concurrent subscriptions, you will only end up doing ~(S-I) work, where I is the number of identical observers across clients. Depending on the nature of your subscriptions, you may see greater observer reuse with more granular subscriptions, but this is very application-specific.
beware of latency
A big advantage of server-side joins is that they deliver all of the documents effectively at once. Compare that to a client join, which must wait for each set of parent documents to arrive before activating the child subscriptions. An N-level client join incurs N round trips before the initial set of documents is delivered to the client.
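The latency point can be illustrated with a toy model (entirely invented): an N-level client join needs one sequential round trip per level, while a server-side join bundles everything into a single response.

```javascript
// Simulate an N-level client join: each level's subscription can only be
// issued after the previous level's documents have arrived, so the number
// of sequential round trips grows with join depth.
function simulateClientJoin(levels) {
  let roundTrips = 0;
  let docs = ["root"];
  for (let i = 0; i < levels; i++) {
    roundTrips++;                         // wait for this level to arrive...
    docs = docs.map((d) => d + ".child"); // ...then we can ask for its children
  }
  return roundTrips;
}

function simulateServerJoin() {
  return 1; // the server bundles all levels into one response
}

console.log(simulateClientJoin(3)); // 3
console.log(simulateServerJoin()); // 1
```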
conclusion
You'll need to take all of the above into consideration when deciding which technique to use for each of your publications. The reality is that benchmarking a live app on something like Kadira is the only way to arrive at a conclusive answer.

Isn't it dangerous to have query information in javascript using breezejs?

Just starting to play with breeze.js because of the obvious gains in coding time, i.e. managing to access model data from the server directly within JavaScript (I am a newbie here, so please bear with me!).
In the past I have used the stock ajax calls to get/post data to the server, and I have used a few different client tools in the past to provide some help in querying local data, such as jLinq.
My question is this: isn't it dangerous to have essentially full model query access in JavaScript? I must be missing something, because it looks like a really well-thought-through tool. In the past I have at least controlled what can be sent to the client via the backend query process, and then used something like jLinq to filter the data. I can also understand the trade-off of gaining direct queries and a non-duplicated local model, so could anyone provide some insight into this?
Thanks!
EDIT
Obviously I am not the only one, however I am guessing there is a reasonable response - maybe limiting the data being requested using DTO methods or something? The other question posted is here
It can be dangerous to expose the full business model. It can be dangerous to allow unrestrained querying of even that part of the model that you want to expose to the client. This is true whether you offer an easy-to-query API or one that is difficult to query.
That's why our teams are careful about how we construct our services.
You should only expose types that your client app needs. If you want to limit access to authorized instances of a type, you can write carefully prescribed, non-queryable service methods. Breeze can call them just fine. You don't have to use the Breeze query facilities for every request. You'll still benefit from the caching, related-entity navigation, change tracking, validation, save bundling, cache querying, and offline support.
Repeat: your service methods don't have to return IQueryable. Even when they do return IQueryable, you can easily write the service method to constrain the query results to just those entities the user is authorized to see.
Fortunately, you can blend the two approaches in the same service or in collaborating services.
Breeze gives you choices. It's up to you to exercise those choices wisely. Go out there and design your services to fit your requirements.
Breeze isn't meant to be your business logic in that sense. Keeping in mind the rule of thumb that if you do something in Javascript, anyone can do it, you ought to be restricting the visibility of your own service data as needed.
In other words, it's useful for you if you meant to make the data publicly visible anyway. But only expose the entities that you're happy exposing and allowing anyone to query; another way to look at it is that your API becomes a public API for your website (but not one you advertise and tell everyone to use).
I am personally not a fan of doing things this way, as it creates a dependency on the schema of the backend implementation. If I want to make changes to my database tables, I now have to take my JavaScript into consideration. It also comes up short in terms of integration and unit testing.
However, it can have its uses if you want to quickly build a website feature on non-sensitive data without having to build the service methods and various layers of implementation of it.
What about when you expose the metadata? Isn't that considered dangerous? IMHO it is not safe to expose metadata from the DbContext. I know you can construct metadata on the client, but the point is to do things as quickly as possible (if safe).
