So this question is less of a problem I have and more of a question about how I should go about implementing something.
Let's imagine, for example, that I have a User and a Resource, and a User can have multiple Resources but a Resource can have only one User. How should you go about creating API endpoints for interacting with this data?
Should it be something like
// POST /api/users/resource (to create a resource)
or something like
// POST /api/resource
That's just one example, but a lot of questions like that come to mind when I'm thinking about this.
It would be nice if someone who knows the right approach (or just a good approach) could give an example of how they would structure API endpoints for relational data like this.
Any and all help is appreciated, thanks!
I would go with the latter. The reason is that the endpoint /api/resource does not bind us to creating resources with respect to the user. Down the line, we could create resources for a Supplier (a hypothetical example), giving us better flexibility without needing to change the endpoint for the Supplier.
Part of the point of REST is that the server's implementation of a resource is hidden behind the uniform interface. In a sense, you aren't supposed to be able to tell from the resource identifiers whether or not you are dealing with "relational data".
Which is freeing (because you get to design the best possible resource model for your needs); but also leads to analysis-paralysis, because the lack of constraints means that you have many options to choose from.
POST /api/users/resource
POST /api/resource
Both of these are fine. The machines are perfectly happy to carry either message. If you wanted to implement an API that supported both options, that would also be OK.
So how do we choose?
The answer to this really has two parts. The first relates to understanding resources, which are really just generalizations of documents. When we ask for a document on the web, one of the things that can happen is that the document can be cached. If we are sending a message that we expect to modify a document, then we probably want caches to invalidate previously cached versions of that document.
And the primary key used to identify cached documents? The URI.
In the case where we are sending a message to a server to save a new document, and we expect the server to choose its own identifier for its copy of the new document, then one logical choice of request target is the resource that is the index of documents on the server.
This is why you will normally see CreateItem operations implemented as POST handlers on a Collection resource - if the item is successfully added, we want to invalidate previously cached responses to GET /collection.
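To make that concrete, here is a minimal sketch of the CreateItem-on-a-Collection convention, assuming Express on Node (the route names, fields, and in-memory store are all my own illustrations, not something from the question):

// POST /api/resources - add a new item to the collection
import express from "express";

const app = express();
app.use(express.json());

type Resource = { id: number; userId: number; name: string };
const resources: Resource[] = [];
let nextId = 1;

app.post("/api/resources", (req, res) => {
  // the server chooses the identifier for its copy of the new document
  const resource: Resource = { id: nextId++, userId: req.body.userId, name: req.body.name };
  resources.push(resource);

  // 201 + Location points the client at the new item; caches keyed on
  // /api/resources now know their copies of the collection are stale
  res.status(201).location(`/api/resources/${resource.id}`).json(resource);
});

app.listen(3000);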
Do you have to do it that way? No, you do not - it's a "trade-off"; you weigh the costs and benefits of the options, and choose one. If you wanted to instead have a separate resource for the CreateItem operation, that's OK too.
The second part of the answer relates to the URI: having decided which document should handle the requests, what spelling should we use for the identifier of that document?
And, once again, the machines don't care very much. It needs to be RFC 3986 compliant, and you'll save yourself a lot of bother if you choose a spelling that works well with URI Templates, but that still leaves you with a lot of freedom.
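For example, a pair of URI Template (RFC 6570) spellings for the hierarchy in the question might look like this (the names are purely illustrative):

/api/users/{userId}/resources
/api/users/{userId}/resources/{resourceId}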
The usual answer? Think about the people, who they are, and what they are doing when they are looking at a URI. You've got visitors looking at a browser history, and writers trying to document the API, and operators reading through access logs trying to understand the underlying traffic patterns. Pick a spelling that's going to be helpful to the people you care about.
I'm working on a Vue app that uses Vuex and gets objects from an API. The tables have paging and fetch batches of objects from the API, sometimes including related entities as nested objects. The UI allows some editing via inputs in a table, and adds via modals.
When the user wants to save all changes, I have a problem: how do I know what to patch via the api?
Idea 1: capture every change on every input and mark the object being edited as dirty
Idea 2: make a deep copy of the data after the fetch, and do a deep comparison to find out what's dirty
Idea 3: this is my question: please tell me that idea 3 exists and it's better than 1 or 2!
If the answer isn't idea 3, I'm really hoping it's not idea 1. There are so many inputs to attach change handlers to, and if the user edits something, then re-edits back to its original value, I'll have marked something dirty that really isn't.
The deep copy / deep compare at least isolates the problem to two places in code, but my sense is that there must be a better way. If this is the answer (also hoping not), do I build the deep copy / deep compare myself, or is there a package for it?
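For reference, idea 2 amounts to very little code with a utility library; a minimal sketch assuming lodash (cloneDeep and isEqual are real lodash functions; the Row shape is invented for illustration):

import { cloneDeep, isEqual } from "lodash";

type Row = { id: number; name: string };

const rows: Row[] = [
  { id: 1, name: "a" },
  { id: 2, name: "b" },
];

// snapshot taken right after the fetch
const original = cloneDeep(rows);

rows[1].name = "b-edited";

// at save time, keep only rows that differ from the snapshot; this assumes
// row order is stable, and a re-edit back to the original value correctly
// compares as clean again
const dirty = rows.filter((row, i) => !isEqual(row, original[i]));
console.log(dirty); // [{ id: 2, name: "b-edited" }]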
It looks like you have the final state in the UI and want to persist it on the server. Instead of sending over the delta, I would just send over the full final state and overwrite whatever was on the server side.
So if you have user settings, instead of sending which settings were toggled, just send over "this is what the new set of settings is".
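A sketch of that approach using the Fetch API (the endpoint and the settings shape are invented for illustration):

// send the complete final state; the server replaces its copy wholesale
async function saveSettings(settings: { darkMode: boolean; pageSize: number }): Promise<void> {
  const res = await fetch("/api/users/42/settings", {
    method: "PUT", // PUT: replace the whole representation, not a delta
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(settings),
  });
  if (!res.ok) throw new Error(`Save failed: ${res.status}`);
}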
Heavy stuff needs to be done on the server rather than the client most of the time, so I'd follow the answer given by Asad. You're not supposed to compute huge object diffs on the client; it's 2022, so we need to think about performance.
Of course, it also depends on your app and what this is all about. Maybe your API guy is opposed to it for a specific reason (not only related to performance). Set up a meeting with your team/PO and check what is feasible.
You could always build something on your side too; looping over all inputs should be feasible without wiring up each one by hand.
TL;DR: this needs to be a discussion in your company, with your very specific constraints/limitations. All the "reasonable solutions" are already listed, and you will probably not get much further here, because these kinds of "opinion-based" questions are not allowed on SO anyway.
This question may seem long, but I found it extremely difficult to formulate its essence any other way.
While reading the documentation and some other sources, I ran into some confusion about the variety of file types involved in working with the database:
.dao
.dto
.entity
.repo
Question:
How do these types of files differ conceptually in terms of functionality?
(If anyone has a detailed video or article on this topic, I will also be grateful for the link.)
There is also this microproject (working code) taken from the docs:
https://github.com/Mike-Kharkov/nest-perfect-goods/tree/master/src
What should the code for inserting values into the database look like?
(What should this file be named, where should it live from the point of view of the approach, and what specific code should be written there?)
For example, if I need to parse data (coming from another service, which as I understand is the correct way to do this in this framework) and then put it in the database without an HTTP request, how do I do that most correctly in terms of the approach?
P.S. I would be grateful for any constructive advice.
.dao, .repo, and .entity are all pretty much the same thing: they are the ways you define how to talk to your database. There's a little give and take in the definitions; .entity is more about defining the table/entity in the database, but with something like TypeORM the entity also becomes a means of talking to the database (either through the entity class or the Repository class).
DAO stands for Data Access Object, by the way, and you can read more about it and the patterns around it on Wikipedia.
.dto is for Data Transfer Object, which is usually the definition of how data is passed between services or over the wire (incoming request, outgoing response, microservice body, etc.). In NestJS we use DTOs for incoming request deserialization and validation, along with outgoing response serialization on occasion.
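As a rough illustration of how those pieces usually land in a NestJS + TypeORM project (the Good entity, its fields, and the file names are my own placeholders, not code from the linked repo):

// good.entity.ts - the table definition (and, via its repository, the way to query it)
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm";

@Entity()
export class Good {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column("decimal")
  price: number;
}

// create-good.dto.ts - the shape of the data crossing the wire
export class CreateGoodDto {
  name: string;
  price: number;
}

// goods.service.ts - inserting without an HTTP request: any code that can
// inject this service (a cron job, a queue consumer, your parser) can call create()
import { Injectable } from "@nestjs/common";
import { InjectRepository } from "@nestjs/typeorm";
import { Repository } from "typeorm";
import { Good } from "./good.entity";
import { CreateGoodDto } from "./create-good.dto";

@Injectable()
export class GoodsService {
  constructor(
    @InjectRepository(Good) private readonly repo: Repository<Good>,
  ) {}

  create(dto: CreateGoodDto): Promise<Good> {
    return this.repo.save(this.repo.create(dto));
  }
}

Whether the data-access file is called .repo or .dao, or you lean on the entity's repository directly, is mostly naming convention; the responsibility is the same.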
Is it wrong to make multiple simultaneous AJAX requests to different endpoints of a REST API that end up modifying the same resource?
Note: each endpoint will modify different properties.
For example, let's assume that one endpoint modifies some properties of an order, like order_date and amount, and another endpoint sets the link between the same order and a customer by changing the customer_id value in the orders table (I know that maybe this is not the best example; all these updates could be done with one endpoint).
Thanks in advance!
This is totally a requirements-based question. It is generally a bad idea to have a single resource be changed by multiple processes, but this ONLY matters if there is a consistency relationship between the data. Consider some of the following questions:
If one or more of the AJAX calls fails, will that break your application? If it will, then yes, this is a bad idea. Will your application carry on regardless of what data you have at any given time? If so, then no, this doesn't matter.
Take some time to figure out what dependencies you have between your data calls and you will get your answer.
What you are describing is not a shared resource, even if it is stored in the same object, because you are modifying different properties. However, take great care when using the same object if your requests to the server depend on the properties that are modified by the other request.
In general, it's not a good idea to use the same object to store data that is modified by more than one asynchronous function, even if the properties are different. It makes your code confusing and harder to maintain, since you have to manually coordinate your function calls to prevent race conditions.
There are better ways to manage your asynchronous code, using Promises or Observables.
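For instance, a minimal Promise-based sketch that treats the two updates from the question as one unit of work (the endpoints and the patch helper are invented for illustration):

async function saveOrder(orderId: number, fields: object, customerId: number): Promise<void> {
  const patch = (url: string, body: object) =>
    fetch(url, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    }).then((res) => {
      if (!res.ok) throw new Error(`${url} failed: ${res.status}`);
    });

  // both requests run concurrently, but Promise.all rejects if either
  // fails, so partial failure is handled in one place
  await Promise.all([
    patch(`/api/orders/${orderId}`, fields),
    patch(`/api/orders/${orderId}/customer`, { customer_id: customerId }),
  ]);
}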
It's a bad idea in general. But if your code is small and you can manage it, then you can do it, though it's not recommended.
In the long run it will cause you many problems: confusion, harder code maintenance, consistency issues, etc.
And if another developer ever has to take over your code, it will be even more confusing and tough for them.
In programming, always keep things flexible and think about the long run. Your requirements can change in the future; what will you do then, write the whole program again? That is one more thing you want to avoid.
I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other: for example, I hit one endpoint and get a collection of authors, then I need to hit a second endpoint to pull the books each of the authors has written.
Right now I have two separate publish functions on the server side to get the two sets of data; however, the second one relies on the data from the first. (My initial foray in my app was simply to do it all in one publish, but this felt like not the best architecture.)
Is there any way to subscribe to another publication from within a publication, server side? Or some other method of checking that I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages available, but they seem relatively heavy-handed and don't necessarily seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the db and Meteor's reactivity.
E.g., you can start by writing the code that hits the remote REST APIs. I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the authors data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now; there's a sketch of it below.
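A rough sketch of that first piece as a Meteor method (the remote URLs, collection names, and field names are all placeholders I've invented; meteor/fetch is Meteor's isomorphic fetch package):

import { Meteor } from "meteor/meteor";
import { fetch } from "meteor/fetch";
import { Mongo } from "meteor/mongo";

export const Authors = new Mongo.Collection("authors");
export const Books = new Mongo.Collection("books");

Meteor.methods({
  async "remote.sync"() {
    // first call: the authors
    const authors = await fetch("https://api.example.com/authors").then((r) => r.json());

    for (const author of authors) {
      Authors.upsert({ remoteId: author.id }, { $set: author });

      // the second call depends on the first: books per author
      const books = await fetch(`https://api.example.com/authors/${author.id}/books`)
        .then((r) => r.json());

      for (const book of books) {
        // keep the foreign key intact so the join works later
        Books.upsert({ remoteId: book.id }, { $set: { ...book, authorId: author.id } });
      }
    }
  },
});

Tie the method to a button press with Meteor.call("remote.sync") while you build out the rest.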
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the db, your client gets the updated data.
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50x.
I'm just starting to play with Breeze.js because of the obvious gains in coding time, i.e. being able to access model data from the server directly within JavaScript (I am a newbie here, so please bear with me!).
In the past I have used stock AJAX calls to get/post data to the server, and I have used a few different client tools to help with querying local data, such as jLinq.
My question is this: isn't it dangerous to have essentially full model query access in JavaScript? I must be missing something, because it looks like a really well-thought-through tool. In the past I have at least controlled what can be sent to the client via the backend query process, and then used something like jLinq to filter the data locally. I can also understand the trade-off, perhaps, between gaining direct queries and avoiding a duplicated local model, so could anyone provide some insight into this?
Thanks!
EDIT
Obviously I am not the only one wondering; however, I am guessing there is a reasonable answer - maybe limiting the data being requested using DTO methods or something? The other question posted is here
It can be dangerous to expose the full business model. It can be dangerous to allow unrestrained querying of even that part of the model that you want to expose to the client. This is true whether you offer an easy-to-query API or one that is difficult to query.
That's why our teams are careful about how we construct our services.
You should only expose types that your client app needs. If you want to limit access to authorized instances of a type, you can write carefully prescribed non-queryable service methods. Breeze can call them just fine. You don't have to use the Breeze query facilities for every request. You'll still benefit from the caching, related-entity-navigation, change-tracking, validation, save-bundling, cache-querying, offline support.
Repeat: your service methods don't have to return IQueryable. Even when they do return IQueryable, you can easily write the service method to constrain the query results to just those entities the user is authorized to see.
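On the client, calling such a prescribed method looks like any other Breeze query (EntityQuery.from and executeQuery are real Breeze client APIs; the service name, endpoint, and entity are hypothetical):

import { EntityManager, EntityQuery } from "breeze-client";

const manager = new EntityManager("/api/orders-service");

// hit a non-queryable, carefully prescribed service method; the server
// decides what the current user is authorized to see
const query = EntityQuery.from("OrdersForCurrentUser");

manager.executeQuery(query).then((result) => {
  console.log(result.results); // entities are still cached and change-tracked
});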
Fortunately, you can blend the two approaches in the same service or in collaborating services.
Breeze gives you choices. It's up to you to exercise those choices wisely. Go out there and design your services to fit your requirements.
Breeze isn't meant to be your business logic in that sense. Keeping in mind the rule of thumb that if you do something in JavaScript, anyone can do it, you ought to restrict the visibility of your own service data as needed.
In other words, it's useful for you if you meant to make the data publicly visible anyway. But only expose the entities that you're happy exposing and allowing anyone to query; another way to look at it is that your API becomes a public API for your website (but not one you advertise and tell everyone to use).
I am personally not a fan of doing things this way, as it creates a dependency on the schema of the backend implementation. If I want to make changes to my database tables, I now have to take my JavaScript into consideration. It also leaves me lacking in terms of integration and unit testing.
However, it can have its uses if you want to quickly build a website feature on non-sensitive data without having to build the service methods and the various layers implementing them.
What about when you expose the metadata? Isn't that considered dangerous? IMHO it is not safe to expose metadata from the DbContext. I know you can construct the metadata on the client, but the point is to do things as quickly as possible (if safe).