I'm trying to implement my first domain-driven application after going through Eric Evans's book on Domain-Driven Design, and I'm a bit confused about how to go about it.
In my application, a user can purchase a service that gets them a certain number of views on a video they post on YouTube, which is fulfilled by other users of my app who watch those videos (basically a replica of the many YouTube promoter apps already available, built for learning).
Say the service is represented in the app as an aggregate called WatchTime. The WatchTime aggregate contains some information like the id of the user who purchased the service, the maximum number of views purchased, the number of views already fulfilled, and the points earned by someone who views the video once.
I decided to go with 3 bounded contexts: one for authentication, one for handling the WatchTimes (adding or removing them), and one for managing users and their data. A user has personal info and some points collected while using the application.
At first I was thinking that all the user data and related actions, like adding points to a user or reducing them, would live in the 3rd context. But while making the model, I realized that if the WatchTime purchasing service is going to be in the 2nd context, it will have to call the 3rd context every time a WatchTime is purchased to tell a service there to reduce points for that purchase. It doesn't seem to make sense to keep them in two different contexts.
So instead I'm thinking of having a model of the user in the 2nd bounded context, but with only the points and the WatchTimes that user purchased, so it doesn't have to call anything in the 3rd context.
My question is how to properly separate things into contexts. Should the split be based on the models, or on functionality, with all models related to that functionality living in the same context?
And another thing: how do I ensure that all objects representing the same entity hold the same values and are properly persisted in the database? Should only one object representing a particular entity exist at a time, persisted and disposed of by the end of a function? I was thinking that if two objects representing the same entity exist at the same time, they could end up with different values or be changed to different values.
If I sound like I'm rambling, please let me know where I need to be clearer. Thanks.
Bounded contexts basically define areas of functionality where the ubiquitous language (and thus the model) is the same. In different bounded contexts, "user" can mean different things: in a "user profile" context, you might have their email address, but in the "viewing time" context, you'd just have the points granted and the viewership purchased.
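For instance, here's a tiny sketch (class and field names are purely illustrative, not from your app) of how the same person might be modeled differently in two contexts:

```javascript
// "User profile" context: identity and personal data.
class UserProfile {
  constructor(userId, email, displayName) {
    this.userId = userId;
    this.email = email;
    this.displayName = displayName;
  }
}

// "Viewing time" context: only what this context needs to know about the same person.
class Viewer {
  constructor(userId, points) {
    this.userId = userId; // the shared identity linking the two models
    this.points = points; // earned by watching videos, spent on WatchTime purchases
  }

  deductForPurchase(cost) {
    if (cost > this.points) throw new Error('Not enough points');
    this.points -= cost;
  }
}
```

Both models refer to the same userId, but each context keeps only the data and behaviour it actually cares about.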
Re "another thing", in general you need to keep an aggregate strongly consistent and only allow an update to succeed if the update is cognizant of every prior update which succeeded, including any updates which succeeded after a read from the datastore. This is the single-writer principle.
There are a couple of ways to accomplish this. First, you can use optimistic concurrency control and store a version number with each aggregate. You then update the aggregate in the DB only if the version hasn't changed; otherwise you attempt the operation (performing all the validations etc.) against the new version of the aggregate. This requires some support in the DB for an atomic check of the version and update (e.g. a transaction).
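A minimal sketch of that version check, using MongoDB's Node driver purely as an illustration (collection, field, and function names are made up):

```javascript
// Optimistic concurrency: the write only succeeds if the version we read is still current.
async function deductPoints(db, userId, cost) {
  for (let attempt = 0; attempt < 3; attempt++) {
    const viewer = await db.collection('viewers').findOne({ _id: userId });
    if (viewer.points < cost) throw new Error('Not enough points');

    const result = await db.collection('viewers').updateOne(
      { _id: userId, version: viewer.version },                        // check the version...
      { $set: { points: viewer.points - cost }, $inc: { version: 1 } } // ...and bump it
    );
    if (result.modifiedCount === 1) return; // our write won
    // Someone else updated the aggregate first: re-read and retry with the new state.
  }
  throw new Error('Too many concurrent updates, giving up');
}
```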
An alternative approach (my personal preference) is to recognize that a DDD aggregate has a high level of mechanical sympathy to the actor model of computation (e.g. both are units of strong consistency). There are implementations of the actor model (e.g. Microsoft Orleans, Akka Cluster Sharding) which allow an aggregate to be represented by at most one actor at a given time (even if there is a cluster of many servers).
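The following is not Orleans or Akka, just a tiny in-process sketch of the single-writer idea: commands for the same aggregate id are queued behind one promise chain, so at most one update for that aggregate runs at a time (function names are invented for the example):

```javascript
const queues = new Map(); // aggregate id -> tail of its command queue

function withAggregate(id, command) {
  const tail = queues.get(id) || Promise.resolve();
  const next = tail.then(() => command());
  queues.set(id, next.catch(() => {})); // keep the chain alive even if a command fails
  return next;
}

// Placeholder for the real load -> validate -> save of the aggregate.
async function recordView(watchTimeId) {
  console.log('recording a view for', watchTimeId);
}

// Two concurrent calls for the same WatchTime are serialized, never interleaved.
withAggregate('watchtime-42', () => recordView('watchtime-42'));
withAggregate('watchtime-42', () => recordView('watchtime-42'));
```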
So here's the gist of my question. Imagine you have a service that handles anywhere from 2 to 10 actions, and to communicate with several components, you have a corresponding number of Subjects.
So, is it better to have 1 Subject and pass to next an object identifying which action it relates to, filtering inside your subscription... or to have the lot of them and subscribe to each separately?
How many subjects is too many? They more or less remain active all at once throughout.
I'm curious in as abstract a sense as possible, rather than about my own use case and whether or not it could be done better.
I work on large Angular applications that use hundreds if not thousands of subjects (Subject, BehaviorSubject, ReplaySubject, AsyncSubject).
Is there a performance hit for using many subjects?
To this, I'd say it's not the subjects themselves that matter, since they only take up memory. What matters is the pipeline you attach to them, which is what puts work on the CPU. This depends on the pipelines themselves, not on the subjects. You could have a single subject connected to a long, computationally heavy pipeline which, if done incorrectly, would slow your program down, since JavaScript runs on a single thread of execution (you could use web workers to avoid this problem).
Therefore, the number of subjects is irrelevant here if we are talking about how "performant" your application is. It's the pipelines that determine whether your application is slow, i.e., data moving down a pipe and being manipulated by operators.
StackBlitz single pipe that is computationally heavy to prove my point.
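The same point as a toy inline sketch (not the StackBlitz itself): the cost lives in the pipeline, not in the number of subjects.

```javascript
import { Subject } from 'rxjs';
import { map } from 'rxjs/operators';

// One subject with an expensive operator in its pipeline: this is what blocks the thread.
const heavy$ = new Subject();
heavy$.pipe(
  map(n => {
    let acc = 0;
    for (let i = 0; i < 1e7; i++) acc += Math.sqrt(i + n); // deliberately wasteful work
    return acc;
  })
).subscribe(v => console.log('heavy result', v));

// Thousands of subjects with trivial pipelines: they mostly just sit in memory.
const cheap = Array.from({ length: 10000 }, () => new Subject());
cheap.forEach((s, i) => s.pipe(map(n => n + i)).subscribe(() => {}));

heavy$.next(1);                // slow: the pipeline does the damage
cheap.forEach(s => s.next(1)); // fast: many subjects, trivial work
```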
Is it better to have 1 subject, and pass in on next an object identifying which of the actions it relates to, and filter inside your subscription?
I would say this is more of a design decision: having a bus of information ("a single subject") passing all your data along instead of breaking it up into respective streams. This could be handy if your data is interconnected, meaning your events depend on each other, and if the order in which they appear within the stream matters (like navigation events: started, performing, ended, etc.).
I would be unhappy if a dev used one giant bin for all their data instead of breaking it up into respective streams. I.e., if I have a user object, company information, and notifications, I'd expect these to have separation of concerns and not be delivered through a bus system (a single subject), but instead through different services, each with its own subjects and pipelines.
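A rough sketch of the two shapes being compared (event names and payloads are made up):

```javascript
import { Subject } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// Option A: a single "bus" subject; every consumer filters for what it cares about.
const bus$ = new Subject();
bus$.pipe(
  filter(event => event.type === 'notification'),
  map(event => event.payload)
).subscribe(n => console.log('notification:', n));

bus$.next({ type: 'notification', payload: 'build finished' });
bus$.next({ type: 'userUpdated', payload: { id: 1 } }); // ignored by the subscriber above

// Option B: separate, purpose-named streams, usually easier to reason about.
const notifications$ = new Subject();
const userUpdates$ = new Subject();
notifications$.subscribe(n => console.log('notification:', n));

notifications$.next('build finished');
userUpdates$.next({ id: 1 });
```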
How many subjects is too many? They more or less remain active all at once throughout.
If you're doing trivial maps and filtering, then don't worry about how many subjects you're using. Worry about if your streams of data make logical/logistical sense, and that they are structured/siloed correctly.
StackBlitz program combining 1 million behavior subjects to prove my point.
Currently, I have to fill out 2 of the same forms with the same language & part numbers on the web-based business system "NetSuite" made by Oracle (extremely annoying & a waste of time). I need some software/code to read one form entry and duplicate it to the other automatically; I'm still feeling out the best way to do this and get it to transfer/skim properly.
This is between 2 sister companies. Each value (part) has a different part number linked to it, but internally they cannot be linked, for reporting purposes and because of which company sells what.
One company starts with 100XXX-XX numbers and the other starts with 300XXX-XX numbers for the parts. Again, they are basically the same Parts.
Not sure if Tampermonkey or Java will be able to do this properly, as I don't even know where to start.
Any recommendations or a walkthrough on the best way to do this would be awesome. I know it might be a little hard since it's 2 different item systems.
Maybe just pull the description of the items since they will be almost the same?
You can create a user event script on the first company and a RESTlet on the second one.
The user event script on the first company will create a JSON object for the item that is being created, pre-process the changes to the part number (or any other changes that are required) for the second company, and send it to the second company's RESTlet. The RESTlet will then create the item in the second company's account.
With this approach you don't need to deal with any 3rd-party application.
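A rough sketch of what the user event side could look like in SuiteScript 2.x (the RESTlet URL, field ids, and part-number translation are placeholders, and the token-based authentication headers a real RESTlet call needs are omitted):

```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
define(['N/https'], function (https) {
  function afterSubmit(context) {
    if (context.type !== context.UserEventType.CREATE) return;

    var rec = context.newRecord;
    var payload = {
      // Translate the 100XXX-XX part number into the sister company's 300XXX-XX scheme.
      itemid: String(rec.getValue({ fieldId: 'itemid' })).replace(/^100/, '300'),
      displayname: rec.getValue({ fieldId: 'displayname' })
    };

    https.post({
      url: 'https://<second-company-restlet-url>',        // RESTlet deployment URL goes here
      body: JSON.stringify(payload),
      headers: { 'Content-Type': 'application/json' }     // plus OAuth token auth in practice
    });
  }
  return { afterSubmit: afterSubmit };
});
```

The RESTlet on the second company would then read the JSON body in its post entry point and create the item record with N/record.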
I have a MongoDB collection "users" in which I want to add a new field "wallet_amount" when a user adds money to their wallet. Right now, at the time of user registration, I'm trying to insert a document like this:
db.users.insert( { email: "test#test.com", wallet_amount: 0 } )
Is this the correct way of doing this, or are there chances this will create security exploits, since I'm passing a wallet_amount default value of 0?
Or should wallet_amount be inserted only when the user adds money to their wallet?
In theory there are no security implications to whether you set the initial amount at user creation or at a later stage.
However, what you face as a more general security concern is that every time you have any query against the users table, you need to triple check it to make sure there is no way it can alter the wallet_amount incorrectly. Any developer who is coding against this table is touching potentially very sensitive data.
To mitigate this, if you are dealing with a sensitive field like this one:
Actually store the wallet amount in a separate table or database
Have a very limited set of APIs to adjust the wallet amount, test them extensively and only ever use those APIs when working with the wallet amount
This decouples the sensitive data from your user table and lets you isolate the part of your domain which needs extra care and attention.
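As a sketch of what that separation might look like (collection and function names are just illustrative):

```javascript
// The only two functions allowed to touch the wallet data, kept in its own collection.
async function getBalance(db, userId) {
  const wallet = await db.collection('wallets').findOne({ userId });
  return wallet ? wallet.amount : 0;
}

async function credit(db, userId, amount) {
  if (amount <= 0) throw new Error('amount must be positive');
  await db.collection('wallets').updateOne(
    { userId },
    { $inc: { amount } },
    { upsert: true } // creates the wallet the first time money is added
  );
}
```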
If you want to take this a step further, consider not storing a wallet amount at all. A common approach for very secure financial systems is to actually store a ledger, which is an immutable record of every transaction. In your case it might look like:
Day 1: I add $100 to my wallet
Day 2: I spend $10
Day 3: I spend $13
etc. You can then actually set up your database so you never mutate any data, only ever add more lines to the ledger. A cache can be used to keep track of the current balances, but this can always be recreated by running over the ledger items. This might be overkill for your scenario, but it can provide an extra layer of protection, because you essentially forbid anyone from arbitrarily changing what is in the wallet; they can only add transactions (which makes it easier to spot suspicious behaviour or patterns, and to trace where money moves).
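A minimal sketch of the ledger idea in MongoDB terms (collection and field names are invented):

```javascript
// Transactions are only ever inserted, never updated or deleted.
async function addLedgerEntry(db, userId, amount, reason) {
  await db.collection('wallet_ledger').insertOne({
    userId,
    amount,   // positive = deposit, negative = spend
    reason,
    createdAt: new Date()
  });
}

// The current balance is derived by summing the ledger (and can be cached separately).
async function currentBalance(db, userId) {
  const [row] = await db.collection('wallet_ledger').aggregate([
    { $match: { userId } },
    { $group: { _id: '$userId', balance: { $sum: '$amount' } } }
  ]).toArray();
  return row ? row.balance : 0;
}
```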
Good evening,
my project uses the MEAN Stack and has a few collections and a single database from which the data is retrieved.
Thinking about how the user would interact with the web app I am going to build, I figured that my idea of the application is quite wasteful.
Now, the application is hosted on a private server on the LAN, making requests very fast, and it's running an Express server.
The application is built around employee management, services, and places where the services can take place. Just describing it, so you have an idea.
The "ring to rule them all" is pretty much the first collection, services, which starts the core of the application. There's a page that let's you add rows, one for each service that you intend to do and within that row you choose an employee to "run the service", based on characteristics that this employee has, meaning that if the service is about teaching Piano, the employee must know how to play Piano. The same logic works for the rest of the "columns" that will build up my row into a full service recognized by the app as such.
Now, what I said above is pretty much information retrieval from a database and logic to make the application model the information retrieved and build something with it.
My question, or rather my doubt, comes from how I imagined the querying would work for each field that is part of the service row. Right now I'm thinking about querying the database (MongoDB) each time I have to pick a value for a field, but if you consider that I might want to add 100 rows, each of which would have 10 fields, that would amount to a lot of requests to the database. That doesn't seem elegant, or intelligent, to me, but I can't come up with a better solution or idea.
Any suggestions or rules of thumb for a MEAN newbie?
Thanks in advance!
EDIT: answering a question from the comments, since the clarification was needed.
No, the database is pretty static (unless the user deliberately inserts a new value, say a new employee who can do a service), and that wouldn't happen very often. As for the query that would return all the employees for a given service, those employees would (ideally) live in an associative array, with the possibility of being "popped" from it if chosen for a service, making them unavailable for further services (because one person can't do two services at the same time). Hope that's clear; I'm surely not the best at explaining myself.
It would query the database for who is available when a user looks at that page, and make another query if the user assigns an employee to do a service.
In general 1 query on page load and another when data is submitted is standard.
You would only want to use an in-memory cache for:
frequent queries, but most databases will do this automatically
values that change frequently, like:
how many users are connected
the last query sent
something that happens on almost every query (>95%)
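As a rough Express/Mongoose sketch of the "one query on page load, one on submit" shape (route and model names are invented; Mongoose is just one common choice in a MEAN stack):

```javascript
const express = require('express');
const Employee = require('./models/employee'); // hypothetical Mongoose model
const router = express.Router();

// One query when the services page loads; the client keeps the result in memory
// and simply removes an employee from its local array when they're assigned to a row.
router.get('/api/employees', async (req, res) => {
  const employees = await Employee.find({ available: true }).lean();
  res.json(employees);
});

// One request when the user submits the finished service rows.
router.post('/api/services', async (req, res) => {
  // validate and persist req.body.rows here
  res.sendStatus(201);
});

module.exports = router;
```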
Lately I have been talking with a lot of my mid-tier developers about how to structure the APIs to better accommodate the 2-way binding that AngularJS offers. We have been trying to decide whether the APIs should be very explicit in their definitions, which would work better with Angular but cause a little more work for the mid-tier, or be more implicit and have extra logic in Angular to "massage" the data into a good Angular model.
Let's start with an example. Suppose we are talking about some sort of data backup service. The service allows you to back up data and retain it for X number of years OR indefinitely. The UI has 2 elements to control this logic. There is a <select> that allows the user to choose whether they want to delete the data "Never" or "After" X years. If "Never" is selected we hide the years input, but if "After" is selected we show the years input and allow them to enter a number between 1 and 99.
Doing this, I have introduced 2 different element controls, each bound to a different property on the $scope model.
However, on the API my mid-tier guy wants to control all of this using a single property called "YearsRetention". If YearsRetention == 0 then that "implicitly" means that we want unlimited retention, but if it is set to anything > 0 then retention is set to that value.
So basically he wants to control the retention settings using this single value, which would force me to write some sort of transformation function to set values on the $scope and achieve the same effect in the UI. This transformation would have to happen on both incoming and outgoing data.
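For concreteness, the kind of transformation I'd have to write might look like this (names are illustrative, assuming YearsRetention === 0 means "keep forever"):

```javascript
// API -> view model: split the single magic-valued field into two bindable properties.
function toViewModel(dto) {
  return {
    retentionMode: dto.YearsRetention === 0 ? 'NEVER' : 'AFTER',
    retentionYears: dto.YearsRetention === 0 ? null : dto.YearsRetention
  };
}

// View model -> API: collapse the two properties back into the single field.
function toDto(viewModel) {
  return {
    YearsRetention: viewModel.retentionMode === 'NEVER' ? 0 : viewModel.retentionYears
  };
}
```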
In the end, I want to know whether the API should be defined implicitly (the API sends a single value and Angular then has to transform the data into a usable view model) or explicitly (the API sends all the values needed to bind directly to the UI, reducing the need to transform the JSON).
I think there are 2 bad ideas in the designs you describe.
Defining the data structures based on UI convenience. This is a bad idea because you want your API to be clear, multipurpose (supporting different clients with different UIs potentially), and long-lived (API refactoring is operationally expensive). Instead, try to accurately and concisely represent your data in the purest, most accurate, most generalized form, and leave presentation issues such as formatting, truncation, localization, units of measure, page layout, etc to the UI.
Overloading a single data field to express a concept that it doesn't naturally model by way of a "magic value". Assigning extra semantic meaning to the number zero is an example of this, and it's generally regarded as error prone and confusing and a leaky abstraction. Every client will have to encode the magic semantic that zero means forever. Of course, there's the glaring cognitive dissonance that the true meaning of zero would be "not at all". I'd model this as 2 fields, an enumeration called retentionPeriod allowing exactly 2 values: "PERMANENT" and "YEARS" and a separate field perhaps retentionValue to store the integer representing the years. If you end up losing the argument with your back end developer, I'd at least argue that the magic value should be -1 meaning forever instead of 0. (I also think null matches "not at all" more than "forever" which is why I think -1 is the least-bad of the bad magic options. There is some precedent out there for this, at least)
In your specific case I'd argue one of your UI drop-downs would control retentionPeriod and the other would control retentionValue. But my reasoning for this is not because it happens to pair up with your current UI implementation in a straightforward way (that's more of a happy coincidence), it's because it's a clearer representation of the data.
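For concreteness, the explicit shape suggested above might look like this (illustrative payloads only):

```javascript
const keepForever  = { retentionPeriod: 'PERMANENT', retentionValue: null };
const keepTenYears = { retentionPeriod: 'YEARS',     retentionValue: 10 };
// Either payload binds to the UI controls directly, with no magic value to decode client-side.
```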
That said, in my experience this specific instance is fairly mild in its badness. I'd be much more strongly concerned about an incorrect choice of array vs object, vague or confusing naming, gigantic data structures, overly chatty APIs, etc.