What are "use-cases" in the Clean Architecture? - javascript

I am trying to implement the Clean Architecture structure in an app that I am developing and I am having a hard time figuring out exactly what is what.
For example, if I am right, the entities of my application are Employee, Department, and EmployeeSkill. The entities also include all of the "validation" logic, to ensure that these entities are valid.
And the use-cases are the various actions that I can do with these entities?
For example, use-cases about the Employee:
add-employee.js
remove-employee-by-id.js
update-employee-department.js
update-employee-phone-number.js
...and-more-employee-updates.js
Are these all actually use-cases?
Now, the add and remove I don't think have much to discuss, but what about the updates? Should they be granulated like this?
Also, with such an architecture, doesn't that mean that if I want to update both the employee's department and phone number at the same time, I will have to make two separate calls to the database for something that can be done with one, because the database adapter is being injected into the use case, and every use case starts with "finding" the entity in the database?

Defer thinking about the entities for now. Often, you get stuck trying to abstract the code after your mental model of the world, and that is not as helpful as we are led to believe.
Instead, couple code that changes together for one reason into use-cases. A good start can be one for each CRUD operation in the GUI. Whether something becomes a new method, a new parameter, a new class, etc. is not part of the CA pattern; that is part of the normal tradeoffs you face when you write code.
I can't see any entities in your example. In my code base, ContactCard (I work on a "yellow pages in 2021" kind of app) and UserContext (security) are the only entities; these two things are used all over the place.
Other things are just data holders and not really entities. I have many duplicates of the data holders so that things that are not coupled stay uncoupled.
Your repository should probably implement the bridge pattern. That means that the business logic defines a bridge that some repository implements. The use-case is not aware of database tables, so it does not have any granular requirements (think of it as ordering food at McDonald's: you won't say "from the grill I want xxx, and from the fryer I want yyy").
The use case is very demanding in the bridge definition. So much so that many repositories end up having API layers that import and manage the implementation of the bridge, and then adapters to the internal logic.
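In code, the bridge is just a contract the use-case owns and some repository fulfils. A minimal JavaScript sketch, with all names made up for illustration:

    // the use-case owns this contract; any repository that fulfils it will do
    function makeUpdateEmployeeUseCase({ employeeRepository }) {
      return async function updateEmployee({ id, changes }) {
        // no table/grill/fryer knowledge here: one demand, one call
        await employeeRepository.applyChanges(id, changes);
      };
    }

    // one possible implementation of the bridge, living at the edge
    const sqlEmployeeRepository = {
      async applyChanges(id, changes) {
        // free to use one UPDATE, two, or a stored procedure; the use-case does not care
      },
    };

    const updateEmployee = makeUpdateEmployeeUseCase({ employeeRepository: sqlEmployeeRepository });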
This is the difference between API layers in business apps and most B2C APIs. An enterprise API for a use-case is just what the use-case needs.
If you have already constrained yourself by a made-up model of the world, and decided to split repos along that model instead of splitting them per use-case, then you end up with poor alignment. Having the same SQL query, or parts of it, in more than one repository is not an issue. Over time, the queries often end up looking different even if they start out very similar.
I would call your example use-case UpdatePhoneNumberEverywhere. Then the UpdatePhoneNumberEverywhereRepository implementation can do whatever it wants; that is a detail. The use case does not care.
Another one I might do is UpdatePhoneNumber, where the use-case accepts a strategy: Strategy.CASCADE or Strategy.LEAF, etc.
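A sketch of that variant (the Strategy values and all names are made up):

    const Strategy = Object.freeze({ CASCADE: 'cascade', LEAF: 'leaf' });

    function makeUpdatePhoneNumber({ phoneNumberRepository }) {
      return async function updatePhoneNumber({ employeeId, phoneNumber, strategy = Strategy.LEAF }) {
        // how each strategy maps onto tables and queries is the repository's detail
        await phoneNumberRepository.update({ employeeId, phoneNumber, strategy });
      };
    }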
In terms of your table design, even though it's a detail, Contacts probably deserves to be broken out.
Not every use-case starts with finding something in the database. Commands and queries (or whatever you call them) are passed in, and the use-case does something useful.
The most practical way to write a use-case is to just implement exactly what you need for the business requirement and write all the tests against the public API of the use-case. Just pass in data as a start; a dictionary is often fine.
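For example, a first test can drive the whole use-case through its public API with a plain object and an in-memory fake; everything below is an illustrative sketch:

    // use-case under test: plain data in, repository effect out
    function makeAddEmployee({ employeeRepository }) {
      return async function addEmployee(data) {
        if (!data.name) throw new Error('name is required'); // entity-ish validation, inline for now
        await employeeRepository.save(data);
      };
    }

    // test against the public api with a dictionary and an in-memory fake
    async function testAddEmployee() {
      const saved = [];
      const addEmployee = makeAddEmployee({ employeeRepository: { async save(d) { saved.push(d); } } });
      await addEmployee({ name: 'Ada', department: 'R&D' });
      console.assert(saved.length === 1, 'employee was saved');
    }

    testAddEmployee();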
Entities are often found later, when you cannot stand to have so many versions of something: you just need that something to be stable and the same all over, and your end users expect as much too. Then, just refactor.

What is the advantage of using shared module over rewriting code in each component/module in Angular?

In my project I have approximately 30-40 modules. In every module's service file, the same API code is written. As per Angular standard we should use a sharedModule so that code can be reused. I want to update my Angular project; before that, I wanted to understand the advantage of using a shared module over rewriting code. How will it help my Angular project?
As per angular standard we should use sharedModule
This isn't just an Angular standard. It's a standard in any discipline, never mind just in software development.
The phrase exists: "don't reinvent the wheel".
Literally - car needs new tyres? Not going to design whole new ones, you'll grab some more off the shelf and shove them on.
Same applies - 7 places in your app that need to make API requests? Don't design and write 7 whole new ones, use the one you've already made.
Design principle: DRY - Don't Repeat Yourself.
This is especially important with code. You say you have 30-40 modules. Each with their own copy/paste version of some API service.
What happens when authentication is added/removed/modified for that API? Suddenly need to add some token into the header for your requests?
30-40 copy/paste jobs after you've made the change. 30-40... you can't even give us an exact number! How do you know you replaced ALL of them successfully?
Why on Earth would you do that to yourself when you can just keep reusing the one original thing you made?
30-40 modules all use that one API service. One place to make any fixes/changes. One service to test.
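Framework specifics aside, the shape is one module, written once, imported everywhere. A plain-JavaScript sketch (all names and URLs are made up):

    // api-client.js - the ONE place requests are configured
    export async function apiGet(path) {
      const res = await fetch(`https://api.example.com${path}`, {
        // when auth changes, this is the single line to touch
        headers: { Authorization: `Bearer ${getToken()}` },
      });
      if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
      return res.json();
    }

    function getToken() {
      return ''; // read from wherever you actually store it
    }

    // some-feature.service.js - every one of the 30-40 modules leans on the same client:
    //   import { apiGet } from './api-client';
    //   const employees = await apiGet('/employees');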
Oh lawd, the testing - I'm nearly 100% certain you have zero tests, and any you do have are likely ineffectual and definitely don't cover nearly as much as they should.
That's 30-40 test classes that you need to update as well (let me guess - copy paste those too?).
And that's just a single mentioned API service. What do you do if you write yourself some kind of helper methods for something in your app?
"Oh, I got fed up of writing these same 5 lines to do x, so I wrote a method to do it for me, it makes it much faster".
Cool - copy paste that another 30-40 times for me into all our other modules so that we can use it too. Thanks.
Put that shizzle into your shared module. One helper class. One class to write tests around. One class to change for additions/fixes. Zero copying and pasting and wasting time and missing things.
Ignoring alllllll of this, how the bejeesus have you managed to go days/weeks/months repeating yourself over and over, copying/pasting over and over and over, and god knows what else over and over and over... and not once thought "this is a lot of effort, maybe I can save some here by doing something smarter"?!
This isn't even a thought-provoking or discussion-inspiring question. It's a question drawing attention to one's basic common sense and the long-standing human desire to be able to do as much or more with the same or less effort.
Why'd we figure out farming? Because hunting around the whole area for a few berries was more effort.
Why'd we hook animals up to our ploughs? Because it's hard work and we're lazy.
Why'd we replace animals with tractors? Because they can do it better.
Why're we replacing traditional farms with those swanky 'vertical' farm things? Because they're more efficient, can be automated more, etc.
Stop copying and pasting chunks of anything.
The millisecond you do anything for a second time, you refactor that away into a single thing that both can use.
I sincerely hope that you are currently a student and/or just starting out (self taught?). If so, welcome! Keep asking questions, keep hitting Google for your answers (where you'll find better than I can provide), and keep learning. My code was just as bad (worse, likely) back at uni.
If you're not, and are actually a 'seasoned' software developer of some kind, where people are paying you to do this... Please stop, take up farming, and let us all know what you've worked on to date so that we can immediately stop using any of it.

Waiting for one publish to finish before starting another

I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other. For example: I hit one endpoint and get a collection of authors. Then I need to hit a second endpoint to pull the books each of the authors has written.
Right now I have two separate publish functions on the server side to get the sets of data; however, the second one relies on the data from the first. (My initial foray was simply to do it all in one publish, but this didn't feel like the best architecture.)
Is there any way to subscribe to one publish from within another, server side? Or some other method of checking that I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages available, but they seem relatively heavy-handed and don't necessarily seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the db and Meteor's reactivity.
E.g. you can start by writing the code that hits the remote REST APIs. I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the authors data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now.
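A sketch of that function, using the current async collection API (Meteor 2.8+) and Node's built-in fetch; the endpoint URLs and collection names are made up:

    import { Mongo } from 'meteor/mongo';

    const Authors = new Mongo.Collection('authors');
    const Books = new Mongo.Collection('books');

    // server-side: authors first, then each author's books, foreign keys kept intact
    async function syncAuthorsAndBooks() {
      const authors = await fetch('https://api.example.com/authors').then((r) => r.json());
      for (const author of authors) {
        await Authors.upsertAsync({ _id: author.id }, { $set: author });
        const books = await fetch(`https://api.example.com/authors/${author.id}/books`).then((r) => r.json());
        for (const book of books) {
          await Books.upsertAsync({ _id: book.id }, { $set: { ...book, authorId: author.id } });
        }
      }
    }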
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the db, your client gets the updated data.
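The publish side can then be as plain as returning both cursors, reusing the Authors and Books collections from the sketch above:

    import { Meteor } from 'meteor/meteor';

    Meteor.publish('authorsWithBooks', function () {
      // returning an array of cursors publishes both collections reactively;
      // narrow the finds if the client should only see a subset
      return [Authors.find(), Books.find()];
    });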
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50x.

What is the difference between making several simple subscriptions and a single complex one?

Is there any practical difference between keeping several simple (plain) subscriptions and keeping a single complex (multi-level) one? (With publish-composite, for example.)
Seems to me that there shouldn't be any difference, but I wanted to be sure. I prefer sticking to plain subs as it seems to make the code clearer in highly modular projects, but only if that wouldn't bring any performance or scalability issues.
So, can someone help me?
There are two key differences between doing several plain subscriptions and keeping one complex composite subscription:
1) Exposure/Privacy
A composite subscription allows you to perform joins/filters on the server side to ensure that you only send data that the current user has authority to see (see the sketch after point 2). You don't want to expose your entire database to the client. Keep in mind that even if your UI is not showing the data, the user can go into the console and grab all the data that your server publishes.
2) Client performance
Performing joins/filters on the client can be expensive if you have a large dataset. This is of course dependent on your application. Additionally, if the database is constantly being updated and those updates should not be visible to the user, you will constantly be transferring the updates to the client without deriving any benefit from the network expense.
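To illustrate point 1: with the reywood:publish-composite package mentioned in the question, both the join and the authorization filter live on the server. A sketch with hypothetical Posts/Comments collections and an ownerId field:

    import { Meteor } from 'meteor/meteor';
    import { Mongo } from 'meteor/mongo';

    const Posts = new Mongo.Collection('posts');
    const Comments = new Mongo.Collection('comments');

    Meteor.publishComposite('myPostsWithComments', function () {
      return {
        find() {
          // only the logged-in user's posts ever leave the server
          return Posts.find({ ownerId: this.userId });
        },
        children: [{
          find(post) {
            return Comments.find({ postId: post._id });
          },
        }],
      };
    });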
I think this question can't be given a precise answer without more details specific to your application. That being said, I think it's an important question, so I'll outline some things to consider.
To be clear, the focus of this answer will be debating the relative merits of server-side and client-side reactive joins.
decide if you need reactivity
You can produce a simple join of multiple collections without any reactivity in the publisher (see the first example from the article above). Depending on the nature of the problem, it may be that you don't really need a reactive join. Imagine you are joining comments and authors, but your app always has all of the possible authors published already. In that case the fundamental flaw in non-reactive joins (missing child documents after a new parent) won't exist, so a reactive publication is redundant.
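For reference, a plain non-reactive join publisher can look like the sketch below (Comments and its authorId field are hypothetical). The comment in the code marks exactly the flaw described above:

    import { Meteor } from 'meteor/meteor';
    import { Mongo } from 'meteor/mongo';

    const Comments = new Mongo.Collection('comments');

    Meteor.publish('commentsWithAuthors', function () {
      // author ids are computed once, at subscribe time: comments added later
      // will arrive, but their authors will not - the non-reactive flaw
      const authorIds = Comments.find({}, { fields: { authorId: 1 } }).map((c) => c.authorId);
      return [
        Comments.find(),
        Meteor.users.find({ _id: { $in: authorIds } }, { fields: { username: 1 } }),
      ];
    });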
consider your security model
As I mention in my article on template joins, server-side joins have the advantage of bundling all of your data together, whereas client-joins require more granular publishers. Consider the security implications of having a publisher like commentsAndAuthors vs two generic implementations of comments and users. The latter suggests that anyone could request an array of user documents without context.
server joins can be CPU and memory hogs
Look carefully at the implementation of the library you are considering for your server-side joins. Some of them use observe which requires that each complete document in the dependency chain be kept in memory. Others are implemented only on observeChanges which is more efficient but makes packages a bit less flexible in what they can do.
look for observer reuse
One of your goals should be to reuse your observers. In other words, given that you will have S concurrent subscriptions, you will only end up doing ~(S - I) work, where I is the number of identical observers across clients. Depending on the nature of your subscriptions, you may see greater observer reuse with more granular subscriptions, but this is very application-specific.
beware of latency
A big advantage of server-side joins is that they deliver all of the documents effectively at once. Compare that to a client join, which must wait for each set of parent documents to arrive before activating the child subscriptions. An N-level client-join would have N round-trips before the initial set of documents is delivered to the client.
conclusion
You'll need to take all of the above into consideration when deciding which technique to use for each of your publications. The reality is that benchmarking a live app on something like Kadira is the only way to arrive at a conclusive answer.

Isn't it dangerous to have query information in javascript using breezejs?

Just starting to play with breeze.js because of the obvious gains in coding time, i.e. managing to access model data from the server directly within JavaScript (I am a newbie here, so obviously bear with me!).
In the past I have used stock ajax calls to get/post data to the server, and I have used a few different client tools to help with querying local data, such as jLinq.
My question is this: isn't it dangerous to have essentially full model query access in JavaScript? I must be missing something, because it looks like a really well-thought-through tool. In the past I have at least controlled what can be sent to the client via the backend query process, and then used something like jLinq etc. to filter the data. I can also understand the trade-off, perhaps, between gaining direct querying and not duplicating the model locally, so could anyone provide some insight on this?
Thanks!
EDIT
Obviously I am not the only one wondering; however, I am guessing there is a reasonable response - maybe limiting the data being requested using DTO methods or something? The other question posted is here
It can be dangerous to expose the full business model. It can be dangerous to allow unrestrained querying of even that part of the model that you want to expose to the client. This is true whether you offer an easy-to-query API or one that is difficult to query.
That's why our teams are careful about how we construct our services.
You should only expose types that your client app needs. If you want to limit access to authorized instances of a type, you can write carefully prescribed non-queryable service methods. Breeze can call them just fine. You don't have to use the Breeze query facilities for every request. You'll still benefit from the caching, related-entity navigation, change-tracking, validation, save-bundling, cache-querying, and offline support.
Repeat: your service methods don't have to return IQueryable. Even when they do return IQueryable, you can easily write the service method to constrain the query results to just those entities the user is authorized to see.
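On the client, calling such a constrained endpoint looks like any other Breeze query; only the resource name changes. A sketch (the service name and the 'OrdersForCurrentUser' resource are hypothetical):

    // the server method behind this resource decides what the user may see;
    // the client cannot widen it
    var manager = new breeze.EntityManager('breeze/app');
    var query = breeze.EntityQuery.from('OrdersForCurrentUser');

    manager.executeQuery(query).then(
      function (data) { console.log(data.results); },
      function (error) { console.error(error); }
    );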
Fortunately, you can blend the two approaches in the same service or in collaborating services.
Breeze gives you choices. It's up to you to exercise those choices wisely. Go out there and design your services to fit your requirements.
Breeze isn't meant to be your business logic in that sense. Keeping in mind the rule of thumb that if you do something in JavaScript, anyone can do it, you ought to restrict the visibility of your own service data as needed.
In other words, it's useful for you if you meant to make the data publicly visible anyway. But only expose the entities that you're happy exposing and allowing anyone to query; another way to look at it is that your API becomes a public API for your website (but not one you advertise and tell everyone to use).
I am personally not a fan of doing things this way, as it creates a dependency on the schema of the backend implementation. If I want to make changes to my database tables, I now have to take my JavaScript into consideration. It also leaves me lacking in terms of integration and unit testing.
However, it can have its uses if you want to quickly build a website feature on non-sensitive data without having to build the service methods and various layers of implementation of it.
What about when you expose the metadata? Isn't that considered dangerous? IMHO it is not safe to expose metadata from the DbContext. I know you can construct metadata on the client, but the point is to do things as quickly as possible (if safe).

mongoose vs mongodb (nodejs modules/extensions), which better? and why?

I've just arrived at Node.js and see that there are many libs to use with MongoDB; the most popular seem to be these two: mongoose and mongodb. Can I get the pros and cons of these extensions? Are there better alternatives to these two?
Edit: found a new library that also seems interesting, node-mongolian: "Mongolian DeadBeef is an awesome Mongo DB node.js driver that attempts to closely approximate the mongodb shell." (readme.md)
https://github.com/marcello3d/node-mongolian
This is just to add more resources for new people that view this; so basically Mongolian is like an ODM...
Mongoose is higher level and uses the MongoDB driver (it's a dependency, check the package.json), so you'll be using that either way given those options. The question you should be asking yourself is, "Do I want to use the raw driver, or do I need an object-document modeling tool?" If you're looking for an object modeling (ODM, a counterpart to ORMs from the SQL world) tool to skip some lower level work, you want Mongoose.
If you want a driver, because you intend to break a lot of rules that an ODM might enforce, go with MongoDB. If you want a fast driver, and can live with some missing features, give Mongolian DeadBeef a try: https://github.com/marcello3d/node-mongolian
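For a concrete feel of the difference, here is the same insert done both ways. A sketch assuming a local mongod, a 4.x+ mongodb driver, and a recent mongoose; all names are illustrative:

    const { MongoClient } = require('mongodb');
    const mongoose = require('mongoose');

    // raw driver: no schema, you talk to collections directly
    async function rawInsert() {
      const client = new MongoClient('mongodb://localhost:27017');
      await client.connect();
      await client.db('test').collection('employees').insertOne({ name: 'Ada' });
      await client.close();
    }

    // mongoose: a schema/model layer on top of that same driver
    async function modelInsert() {
      await mongoose.connect('mongodb://localhost:27017/test');
      const Employee = mongoose.model('Employee', new mongoose.Schema({ name: String }));
      await Employee.create({ name: 'Ada' });
      await mongoose.disconnect();
    }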
Mongoose is, by far, the most popular. I use it, and have not used others. So I can't speak about the others, but I can tell you my gripes with Mongoose.
Difficult / poor documentation
Models are used, and they define structure for your documents. Yet this seems odd for Mongo, where one of its advantages is that you can throw in a column (err, attribute?) or simply not add one.
Models are case sensitive - other devs I work with and I have had issues where the case of the collection name that the model is defined with can cause it to not save anything, without error. We have found that using all-lowercase names works best. E.g. instead of doing something like mongooseInstance.model('MyCollection', { "_id": Number, "xyz": String }), it's better to do the following (even though the collection name is really MyCollection): mongooseInstance.model('mycollection', { "_id": Number, "xyz": String })
But honestly, it's really useful. The biggest issue is the documentation. It's there, but it's dry and hard to find what you need. It could use better explanations and more examples. But once you get past these things, it works really, really well.
I'm building a new app and designing its structure now. Here are some thoughts about why to use or not to use mongoose:
Mongoose will be slower (for big apps)
Mongoose is harder with more complicated queries
There will be situations when you want more speed and you will choose to go without mongoose; then you will have half your queries with mongoose and half without. That's a crazy situation - I had it once..
Mongoose will make you code faster with simple apps with simple db structure
Mongoose will make you read mongodb docs AND mongoose docs
With mongoose your stack gets one more thing to depend on, and one more possibility to crash and burn to ashes.
The mongodb driver is the raw driver; you communicate directly with MongoDB.
Mongoose is an abstraction layer. You get easier I/O to the db, as long as your db structure is simple enough.
Abstraction brings its own requirements, and you have to follow them. Your app will be slower, eat more RAM, and be more complicated, but if you know how to use it, you can write simple objects and save them to the database faster.
Without mongoose you will have a faster application with a direct connection to MongoDB. No one says you can't write your own models to save stuff to the db. You can, and I think it's easier. You write code that you will use; you know what you need. Your abstraction layer will be way smaller than mongoose's.
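For instance, a hand-rolled "model" over the raw driver can be this small (a sketch; db is a connected Db instance from the mongodb package, and the names are made up):

    function makeEmployeeModel(db) {
      const employees = db.collection('employees');
      return {
        save: (doc) => employees.insertOne(doc),
        findById: (id) => employees.findOne({ _id: id }),
        setPhoneNumber: (id, phoneNumber) =>
          employees.updateOne({ _id: id }, { $set: { phoneNumber } }),
      };
    }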
I'm coming from the PHP world, where we had raw SQL with the deprecated mysql_ functions, and then got PDO - an object-oriented abstraction layer for talking to SQL. Or you can choose some heavy ORM like Doctrine to have something similar to mongoose on MongoDB: objects with setters/getters/save methods and so on. That's fine, but by adding more abstraction you are adding more files, more logic, more documentation, and more dependencies. I like to keep stuff simple and have fewer dependencies in my stack. BTW, that was why I moved from PHP to server- and client-side JavaScript in the first place..
With mongoose, I think it's great to write simple apps that have a simple db structure similar to SQL. When you start having subdocuments and want to make all those crazy queries, I found it really hard with mongoose. You have to look at the mongodb docs, then look at the mongoose docs, to find out how to make the query you want. Sometimes you will find that feature X of mongodb is not in mongoose, so you go down to the raw mongodb driver and write raw mongodb queries in one place or another. Without mongoose, you look at the mongodb docs and do your query.
I have only used mongodb. In my personal opinion, I would recommend starting with something low-level and then moving up. Otherwise you may find yourself using the additional advanced features provided by higher-level drivers like mongoose to no actual benefit.
The problem I have had with mongodb, which is endemic to node.js, is the poor documentation. There is documentation, and a lot of it, but it isn't always the most helpful. From what I have seen so far, there are no good and thorough examples of production usage of the driver. The documentation is filled with the same templated example: open a connection, issue a command, close the connection. You can tell it's copied and pasted from a template, because every example includes requires for everything that might be needed, rather than only what is needed for that example.
To give an example taken entirely at random:
raw {Boolean, default:false}, perform operations using raw bson buffers.
What exactly does "perform operations using raw bson buffers" do? I can't find it explained anywhere, and a Google search for that phrase doesn't help. Perhaps I could Google further, but I shouldn't have to; the information should be there. Are there any performance, stability, integrity, compatibility, portability or functional advantages to enabling/disabling this option? I really have no idea without diving deeply into the code, and if you're in my boat that's a serious problem. I have a daemon where perfect persistence isn't required, but the program needs to be very stable at runtime. I could assume this means that it expects me to deserialize and serialize to JSON, or that it is something low-level, internal, and transparent to the user, but I could be wrong. Although I tend to make good assumptions, I can't rely on assumption and guesswork when building vital systems. So here I can either test my assertion with code or dig much deeper into Google or their code. As a one-off this isn't so bad, but I find myself in this situation many times when reading their documentation. The difference can mean days spent on a task versus hours. I need confirmation, and the documentation barely gives me explanation, let alone confirmation.
The documentation is rushed. It doesn't explain events, gives vague details about when errors are thrown and the nature of those errors, and there are often several ways to accomplish connectivity, which can be unclear. You can get by, and it's not completely useless, but it is very rough around the edges. You'll find some things are left to guesswork and experimentation.
