I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other. For example, I hit one endpoint and get a collection of authors; then I need to hit a second endpoint to pull the books each of the authors has written.
Right now I have two separate publish functions on the server side to get the two sets of data; however, the second one relies on the data from the first. (My initial approach was simply to do it all in one publish, but that didn't feel like the best architecture.)
Is there any way to subscribe to one publish from within another publish on the server side? Or is there some other check I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages available, but they seem relatively heavy-handed and don't necessarily seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the DB and Meteor's reactivity.
E.g. you can start by writing the code that hits the remote REST APIs. I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the author data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now.
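For illustration, here is a rough sketch of that one function, with promises (async/await) tying the two calls together. The endpoint URLs, field names, and the Authors/Books collection names are made up, and I'm assuming a fetch implementation is available on the server (meteor/fetch, node-fetch, or Node 18+):

// Sketch only: URLs, fields, and collections are assumptions.
async function syncAuthorsAndBooks() {
  const authors = await fetch('https://api.example.com/authors')
    .then((res) => res.json());

  for (const author of authors) {
    Authors.upsert({ remoteId: author.id }, { $set: { name: author.name } });

    const books = await fetch(`https://api.example.com/authors/${author.id}/books`)
      .then((res) => res.json());

    books.forEach((book) => {
      // keep the foreign key intact so the join works later
      Books.upsert(
        { remoteId: book.id },
        { $set: { title: book.title, authorId: author.id } }
      );
    });
  }
}

// for now, tie it to a button press via a method the client calls
Meteor.methods({
  'sync.remote': function () {
    return syncAuthorsAndBooks();
  },
});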
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the DB, your client gets the updated data.
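A minimal sketch of the publish side, assuming the collections above and a client-side join:

// Sketch only: publish both cursors and join on the client.
Meteor.publish('authorsWithBooks', function () {
  return [Authors.find(), Books.find()];
});

// client side: Meteor.subscribe('authorsWithBooks');
// the subscription stays reactive, so when the sync function above
// writes new documents, the client sees them automatically.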
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50 times.
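If a cron package feels like too much, even a plain interval on the server keeps the REST calls decoupled from the subscription count; the one-hour period below is an arbitrary assumption:

// Sketch only: reuses the sync function from the first snippet.
Meteor.startup(() => {
  Meteor.setInterval(() => {
    syncAuthorsAndBooks().catch((err) => console.error('remote sync failed', err));
  }, 60 * 60 * 1000); // once an hour, no matter how many clients subscribe
});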
I'm working on a Vue app that uses Vuex and gets objects from an API. The tables have paging and fetch batches of objects from the API, sometimes including related entities as nested objects. The UI allows some editing via inputs in a table, and adding via modals.
When the user wants to save all changes, I have a problem: how do I know what to patch via the api?
Idea 1: capture every change on every input and mark the object being edited as dirty
Idea 2: make a deep copy of the data after the fetch, and do a deep comparison to find out what's dirty
Idea 3: this is my question: please tell me that idea 3 exists and it's better than 1 or 2!
If the answer isn't idea 3, I'm really hoping it's not idea 1. There are so many inputs to attach change handlers to, and if the user edits something, then re-edits back to its original value, I'll have marked something dirty that really isn't.
The deep copy / deep compare at least isolates the problem to two places in code, but my sense is that there must be a better way. If this is the answer (also hoping not), do I build the deep copy / deep compare myself, or is there a package for it?
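For reference, the deep copy / deep compare I have in mind would be something like this, assuming the rows are plain JSON-safe objects with an id (lodash's cloneDeep/isEqual would be a more robust drop-in for the JSON round-trip):

// Sketch only: store shape and mutation names are assumptions.
// Right after the fetch, snapshot what the API returned:
commit('setSnapshot', JSON.parse(JSON.stringify(rows)));

// Before saving, compare current rows against the snapshot:
const snapshotById = Object.fromEntries(
  state.snapshot.map((row) => [row.id, row])
);
const dirtyRows = state.rows.filter(
  (row) => JSON.stringify(row) !== JSON.stringify(snapshotById[row.id])
);
// dirtyRows are the only objects that need a PATCH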
It looks like you have the final state on the UI and want to persist it on the server. Instead of sending over the delta, I would just send over the full final state and overwrite whatever was on the server side.
So if you have user settings, instead of sending which settings were toggled, just send "this is what the new set of settings is".
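In code that's about as small as it gets; axios and the URL here are just placeholders, any HTTP client works:

// Sketch only: replace the whole resource instead of computing a patch.
axios.put('/api/user-settings', store.state.settings);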
Heavy work needs to be done on the server rather than the client most of the time, so I'd follow the answer given by Asad. You're not supposed to compute huge object diffs; it's 2022, so we need to think about performance.
Of course, it also depends on your app and what it is all about. Maybe your API guy is opposed to this for a specific reason (not only related to performance). Set up a meeting with your team/PO and check what is feasible.
You can always build something on your side too; looping over all the inputs should be feasible without wiring up each one by hand.
TL;DR: this needs to be a discussion in your company with your very specific constraints/limitations. All the "reasonable solutions" are already listed, and you will probably not get much further here, because these kinds of "opinion-based" questions are not allowed on SO anyway.
So this question is less of a problem I have and more of a question about how I should go about implementing something.
Let's imagine, for example, that I have a User and a Resource, and a User can have multiple Resources but a Resource can have only one User. How should you go about creating API endpoints for interacting with this data?
Should it be something like
// POST /api/users/resource (to create a resource)
or something like
// POST /api/resource
That's just one example, but there are a lot of questions like that that come to mind when I'm thinking about this.
It would be nice if someone who knows the right approach (or just a good approach) could give an example of how you would structure API endpoints with relational data like this.
Any and all help is appreciated, thanks!
I would go with the latter one. The reason is that the endpoint /api/resource does not bind us to creating resources with respect to the user. Down the line, we could create resources for a Supplier (a hypothetical example), thus having better flexibility and not needing to change the endpoint for Supplier.
Part of the point of REST is that the server's implementation of a resource is hidden behind the uniform interface. In a sense, you aren't supposed to be able to tell from the resource identifiers whether or not you are dealing with "relational data".
Which is freeing (because you get to design the best possible resource model for your needs); but also leads to analysis-paralysis, because the lack of constraints means that you have many options to choose from.
POST /api/users/resource
POST /api/resource
Both of these are fine. The machines are perfectly happy to carry either message. If you wanted to implement an API that supported both options, that would also be OK.
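For example (Express here is my assumption, not something from the question), both spellings can even point at the same handler:

// Sketch only: an in-memory store stands in for real persistence.
const express = require('express');
const app = express();
app.use(express.json());

const resources = [];

function createResource(req, res) {
  const resource = { id: resources.length + 1, ...req.body };
  resources.push(resource);
  res.status(201).json(resource);
}

app.post('/api/resource', createResource);        // option 1
app.post('/api/users/resource', createResource);  // option 2, same behaviour

app.listen(3000);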
So how do we choose?
The answer to this really has two parts. The first relates to understanding resources, which are really just generalizations of documents. When we ask for a document on the web, one of the things that can happen is that the document gets cached. If we are sending a message that we expect to modify a document, then we probably want caches to invalidate previously cached versions of that document.
And the primary key used to identify cached documents? The URI.
In the case where we are sending a message to a server to save a new document, and we expect the server to choose its own identifier for its copy of the new document, then one logical choice of request target is the resource that is the index of documents on the server.
This is why you will normally see CreateItem operations implemented as POST handlers on a Collection resource - if the item is successfully added, we want to invalidate previously cached responses to GET /collection.
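A sketch of that shape (the plural /api/resources spelling and Express are my assumptions, continuing the in-memory example above): the create goes through the same URI the collection is read from, so a cache that stored GET /api/resources knows its copy is now stale.

// Sketch only.
app.get('/api/resources', (req, res) => {
  res.json(resources); // cacheable collection representation
});

app.post('/api/resources', (req, res) => {
  const item = { id: resources.length + 1, ...req.body };
  resources.push(item);
  // 201 + Location tells the client where its new item lives
  res.status(201).location(`/api/resources/${item.id}`).json(item);
});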
Do you have to do it that way? No, you do not - it's a "trade off"; you weigh the costs and benefits of the options, and choose one. If you wanted to instead have a separate resource for the CreateItem operation, that's OK too.
The second part of the answer relates to the URI: having decided which document should handle the requests, what spelling should we use for its identifier?
And, once again, the machines don't care very much. It needs to be RFC 3986 compliant, and you'll save yourself a lot of bother if you choose a spelling that works well with URI Templates, but that still leaves you with a lot of freedom.
The usual answer? Think about the people, who they are, and what they are doing when they are looking at a URI. You've got visitors looking at a browser history, and writers trying to document the API, and operators reading through access logs trying to understand the underlying traffic patterns. Pick a spelling that's going to be helpful to the people you care about.
I am trying to implement the Clean Architecture structure in an app that I am developing and I am having a hard time figuring out exactly what is what.
For example, if I am right, the entities of my application are Employee, Department, and EmployeeSkill; the entities also include all of the "validation" logic to ensure that they are valid.
And the use-cases are the various actions that I can do with these entities?
For example, use-cases about the Employee:
add-employee.js
remove-employee-by-id.js
update-employee-department.js
update-employee-phone-number.js
...and-more-employee-updates.js
Are these all actually use-cases?
Now the add and remove I don't think have much to discuss, but what about the updates? Should they be as granular as this?
Also, with such an architecture, doesn't that mean that if I want to update both the employee's department and phone number at the same time, I will have to make two separate calls to the database for something that could be done with one, because the database adapter is being injected into the use-case and every use-case starts with "finding" the entity in the database?
Defer thinking about the entities for now. Often you get stuck trying to abstract the code after your mental model of the world, and that is not as helpful as we are led to believe.
Instead, couple code that changes together for one reason into use-cases. A good start can be one use-case for each CRUD operation in the GUI. Whether something ends up as a new method, a new parameter, a new class, etc. is not part of the CA pattern; that is part of the normal trade-offs you face when you write code.
I can't see any entities in your example. In my code base, ContactCard (I work on a yellow-pages-in-2021 kind of app) and UserContext (security) are the only entities; these two things are used all over the place.
Other things are just data holders and not really entities. I have many duplicates of the data holders so that things that are not coupled stay uncoupled.
Your repository should probably implement the bridge pattern. That means that the business logic defines a bridge that some repository implements. The use-case is not aware of database tables, so it does not have granular requirements (think of it as ordering food at McDonald's: you won't say "from the grill I want xxx, and from the fryer I want yyy").
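A sketch of what that looks like in code. The names (makeUpdateEmployeeContact, the repository methods) are mine, not from your question; the point is that the use-case declares what it needs, and the injected repository is free to satisfy save() with a single UPDATE, which also answers the "two calls to the database" worry above.

// Sketch only: the use-case owns the interface, the repository implements it.
function makeUpdateEmployeeContact({ employeeRepository }) {
  return async function updateEmployeeContact({ employeeId, department, phoneNumber }) {
    const employee = await employeeRepository.findById(employeeId);
    if (!employee) throw new Error('employee not found');

    // entity-level validation would run here before persisting
    return employeeRepository.save({ ...employee, department, phoneNumber });
  };
}

// composition root: inject whatever implementation you like, e.g.
// const updateEmployeeContact = makeUpdateEmployeeContact({ employeeRepository: sqlEmployeeRepository });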
The use-case is very demanding in the bridge definition. So much so that many repositories end up having API layers that import and manage the implementation of the bridge, and then adapt it to the internal logic.
This is the difference between API layers in business apps and most B2C APIs. An enterprise API for a use-case is just what the use-case needs.
If you have already constrained yourself with a made-up model of the world, and decided to split repositories along it instead of splitting them per use-case, then you end up with poor alignment. Having the same SQL query, or parts of it, in more than one repository is not an issue. Over time the queries often end up looking different even if they start out very similar.
I would call your example use-case UpdatePhoneNumberEverywhere. And then the UpdatePhoneNumberEverywhereRepository implementation can do whatever the heck it wants; that is a detail. The use-case does not care.
Another one I might do is UpdatePhoneNumber, where the use-case accepts a strategy: Strategy.CASCADE or Strategy.LEAF, etc.
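A sketch with my own naming, along those lines:

// Sketch only: the repository decides what each strategy means in storage terms.
const Strategy = { CASCADE: 'CASCADE', LEAF: 'LEAF' };

function makeUpdatePhoneNumber({ phoneNumberRepository }) {
  return ({ employeeId, phoneNumber, strategy = Strategy.LEAF }) =>
    phoneNumberRepository.update({ employeeId, phoneNumber, strategy });
}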
In terms of your table design, even though it's a detail, Contacts probably deserves to be broken out.
Not every use-case starts with finding something in the database. Commands and queries (or whatever you call them) are passed in, and the use-case does something useful with them.
The most practical way to write a use-case is to implement exactly what you need for the business requirement and write all the tests against the public API of the use-case. Just pass in data as a start; a dictionary is often fine.
Entities are often found later, when you cannot stand having so many versions of something: you just need that something to be stable and the same all over, and your end users expect as much too. Then, just refactor.
Is it wrong to make multiple simultaneous AJAX requests to different endpoints of a REST API that end up modifying the same resource?
Note: each endpoint will modify different properties.
For example, let's assume that one endpoint modifies some properties of an order, like order_date and amount, and another endpoint sets the link between the same order and a customer by changing the customer_id value in the orders table (I know that maybe this is not the best example; all these updates could be done with one endpoint).
Thanks in advance!
This is totally a requirements-based question. It is generally a bad idea to have a single resource changed by multiple processes, but this ONLY matters if there is a consistency relationship between the data. Consider some of the following questions:
If one or more of the AJAX calls fails, will that break your application? If it will, then yes, this is a bad idea. Will your application carry on regardless of what data you have at any given time? If so, then no, this doesn't matter.
Take some time to figure out what dependencies you have between your data calls and you will get your answer.
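One way to make that dependency explicit in code (the URLs and payloads below are placeholders): fire both requests, then look at which ones actually landed before deciding whether the app can carry on.

// Sketch only.
async function saveBoth() {
  const results = await Promise.allSettled([
    fetch('/api/orders/42', {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ order_date: '2023-01-01', amount: 99 }),
    }),
    fetch('/api/orders/42/customer', {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ customer_id: 7 }),
    }),
  ]);

  const anyFailed = results.some((r) => r.status === 'rejected' || !r.value.ok);
  if (anyFailed) {
    // one write may have landed while the other did not:
    // decide here whether to retry, roll back, or surface the error
  }
}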
What you are describing is not a shared resource, even if it is stored in the same object, because you are modifying different properties. However, take great care when using the same object if your requests to the server depend on the properties that are modified by the other request.
In general it's not a good idea to use the same object to store data that is modified by more than one asynchronous function, even if the properties are different. It makes your code confusing and harder to maintain, since you have to manually coordinate your function calls to prevent race conditions.
There are better ways to manage your asynchronous code using Promises or Observables.
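For example, if the second request depends on what the first one changed, making the ordering explicit with promises avoids the race entirely (the api client and routes below are assumptions):

// Sketch only.
async function saveOrder(order) {
  // first update the order itself...
  await api.patch(`/orders/${order.id}`, {
    order_date: order.date,
    amount: order.amount,
  });
  // ...then link it to the customer, once the first write is done
  await api.patch(`/orders/${order.id}/customer`, {
    customer_id: order.customerId,
  });
}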
It's a bad idea in general. But if your code is small and you can manage it, then you can do it, though it's not recommended.
In the long run, it will cause you many problems: confusion, harder-to-maintain code, consistency issues, etc.
And if another developer ever has to take over your code, it will be more confusing and tougher for them.
In programming, always keep things flexible and think about the long run. Your requirements can change in the future; what will you do then, write the whole program again? That is one more thing you want to avoid.
I'm using jasmine-node to test my API, and it has worked great for my GET routes. Now, however, I need to test some POSTs and I'm not sure how to go about this without changing my database.
One thought I had was to reset whatever value I change at the end of each spec.
Is this reasonable or is there a better way to go about testing POST requests to my API?
Wrap anything that modifies your database in a transaction. You can make your database changes and then roll back after each test.
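A sketch of that pattern, assuming knex (any client with explicit transactions works) and a test runner that supports async hooks (with jasmine-node you may need its done-callback style instead). The one catch is that the code under test has to use the test transaction rather than the global connection.

// Sketch only.
let trx;

beforeEach(async () => {
  trx = await knex.transaction();   // begin a transaction per spec
  app.setDbConnection(trx);         // hypothetical hook to inject it into the app
});

afterEach(async () => {
  await trx.rollback();             // undo whatever the POST handler wrote
});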
Usually you are supposed to have a test database, so modifying that one is not a big issue. Also, a general approach would be to not rely on predefined values in the database (i.e., the GET always requesting the SAME object) but to try different objects each time (using predefined objects may hide problems when the data is slightly different).
To implement the second strategy, you can execute a test that POSTs pseudo-random data to create a new object, then use the returned ID to feed the following GET, UPDATE, and finally DELETE tests.
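A sketch of the first step in jasmine-node style; the request package, the /api/books route, and the { id } response shape are all assumptions:

// Sketch only.
var request = require('request');

var createdId; // shared with the GET/UPDATE/DELETE specs that follow

describe('POST /api/books', function () {
  it('creates a new book from pseudo-random data', function (done) {
    var payload = { title: 'test-book-' + Date.now() };

    request.post(
      { url: 'http://localhost:3000/api/books', json: payload },
      function (err, res, body) {
        expect(err).toBeNull();
        expect(res.statusCode).toBe(201);
        createdId = body.id; // feed this into the later GET/DELETE specs
        done();
      }
    );
  });
});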
Just make a duplicate processing page/function and send the data to that for debugging. Comment out anything that makes changes to the database.
Alternatively, pass a variable in your call such as "debug" and have an if/else section in your original function for debugging, ignoring the rest of the function.
Yet another alternative is to duplicate your database table and name it something like debug_table. It will have the same structure as your original. Send the test data to it instead and it won't change your original database tables.
I'm pretty sure that you've come up with some solution for your problem already.
BUT, if you haven't, the Angular $httpBackend will solve your problem. It is a
Fake HTTP backend implementation suitable for unit testing applications that use the $http service.
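A minimal sketch of how that looks with ngMock; the module name, service, and URL are assumptions:

// Sketch only.
describe('BookService.create', function () {
  var $httpBackend, BookService;

  beforeEach(module('myApp'));
  beforeEach(inject(function (_$httpBackend_, _BookService_) {
    $httpBackend = _$httpBackend_;
    BookService = _BookService_;
  }));

  afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });

  it('POSTs without touching a real database', function () {
    $httpBackend.expectPOST('/api/books', { title: 'x' }).respond(201, { id: 1 });

    BookService.create({ title: 'x' });
    $httpBackend.flush(); // resolve the fake request
  });
});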