I'm using VueJS and MongoDB to create a virtual pet.
We're saving the user data using localStorage.
I'm wondering what the mechanism would be to make the virtual pet evolve (i.e. the life gauge going down) while the user is not on the web app.
Would I need to save the date when the user leaves the app?
Yes, you should save the time when the user leaves the app.
When they return (so, whenever you fetch data from the database), compare the saved time to the current time and apply whatever operations need to happen based on the difference.
Alternatively, you could have a server always running and deal with scheduled jobs and the like to keep it all updated in real time, but lazy evaluation that only happens when the user requests the data should suffice for this case.
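For illustration, a minimal sketch of that lazy-evaluation approach in plain JavaScript; the decay rule (1 life point per full hour away) is an assumption for the example, not something from the question:

```javascript
// Record the time whenever the user leaves. beforeunload isn't fully
// reliable (especially on mobile), so also saving the timestamp on
// every state write is safer in practice.
window.addEventListener('beforeunload', () => {
  localStorage.setItem('lastSeen', Date.now().toString());
});

// On startup, apply whatever "happened" while the user was away.
function resumePet(pet) {
  const lastSeen = Number(localStorage.getItem('lastSeen'));
  if (!lastSeen) return pet;

  const hoursAway = (Date.now() - lastSeen) / (1000 * 60 * 60);
  // Hypothetical decay rule: life drops 1 point per full hour away.
  pet.life = Math.max(0, pet.life - Math.floor(hoursAway));
  return pet;
}
```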
After logging into an app (React.js), I am caching the member data in localStorage, since a lot of my components use it and, ideally, the request only needs to be made upon log-in.
However, a few properties in this member object may be changed manually in the backend, so the frontend has no way of knowing whether the member object has changed at all. (Again, ideally, any change to the member object would go through some form submission that directly changes the DB, which could trigger an update to localStorage, but this is not an option at this time.)
Example scenario: There's a generic form in the app to request additional credits. Customer service receives an email about the request and manually updates the credits for Customer A (in the DB). If Customer A doesn't re-login (which is when the GET request for the member is done), localStorage will still show the old number of credits.
If this is the situation, what's the best way to go about it?
Don't store member data in localStorage at all so as to keep the data fresh. Just call the endpoint whenever it's needed.
Use sessionStorage instead?
Trigger a refetch when the user refreshes the page / app (although user may not know that they need to do this to update the data).
Suggestions?
Calling the endpoint whenever it's needed is ideal if the data can change based on things outside of the user's control.
sessionStorage is just localStorage that gets wiped when the browsing session ends; you'll still have the exact same issue.
Refetching on refresh doesn't really solve the problem, and it's typically a bad user experience to require the user to perform regular maintenance tasks in order to use your application to the best of its ability.
I'd go with just getting the data fresh.
At a high level, you have two choices:
Poll (periodically call the back end to refresh the data)
Open a persistent connection (like a web socket) to the server, and have the server push updates to clients.
The latter option would require a lot of changes, and it changes the scalability of your app, so the former choice seems like the most reasonable option for you.
It's smart to keep using localStorage so you have an offline copy of the data and aren't blocking rendering during page load; you can have a background periodic refresh process that doesn't disrupt the user in the meantime. If your data is mirrored in something like redux or context, then your UI can seamlessly update if/when the data changes.
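A sketch of that polling approach with a localStorage mirror; the `/api/member` endpoint and `store.setMember` updater are illustrative stand-ins, not names from the question:

```javascript
const REFRESH_INTERVAL_MS = 60 * 1000; // tune to how stale you can tolerate

// Render immediately from the offline copy, then refresh in the background.
const cached = localStorage.getItem('member');
if (cached) store.setMember(JSON.parse(cached));

async function refreshMember() {
  try {
    const res = await fetch('/api/member', { credentials: 'include' });
    if (!res.ok) return; // keep the cached copy on transient failures
    const member = await res.json();
    localStorage.setItem('member', JSON.stringify(member));
    store.setMember(member); // UI updates seamlessly if mirrored in redux/context
  } catch (err) {
    // offline or network error: the cached copy is still shown
  }
}

refreshMember();
setInterval(refreshMember, REFRESH_INTERVAL_MS);
```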
If you do not know when member has been updated, don't store it. Query the back end every time you need member; that is the only way to keep the data in sync with your database.
I have struggled to find many resources on this online. I am developing an application that multiple users will be using at the same time, which means one user may edit the database after another user has loaded the data from it, leaving that second user without an up-to-date view of the current state of the database. What is the best way to subscribe to database changes and deal with them? I am using a MEAN stack.
If you are trying to build a real-time system where changes in the database are reflected instantly, you need to make use of web sockets. Since you are using Node.js as your backend, see Socket.io.
However, if you plan on implementing web sockets, you will have to make significant changes to both your Node.js and Angular code.
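To give a rough idea of the shape of those changes, here is a minimal Socket.io sketch; the `recordUpdated` event name and the `updateLocalRecord` helper are illustrative:

```javascript
// server.js (Node/Express)
const http = require('http');
const server = http.createServer(app); // app = your existing Express app
const io = require('socket.io')(server);

// Wherever your API persists a change, broadcast it to connected clients.
function onRecordSaved(record) {
  io.emit('recordUpdated', record);
}
```

And on the client (an Angular service or plain JS, with the socket.io-client script loaded):

```javascript
const socket = io();
socket.on('recordUpdated', (record) => {
  updateLocalRecord(record); // merge the fresh record into the local view
});
```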
Another method (which I would not recommend) is to make periodic API calls for those views that need to reflect real-time changes. You can use setInterval for this.
Okay, let me start by saying that I know this is weird. I do.
But here goes:
Let's say I have an SQL database which stores my data. And let's say I don't have a choice in this, it has to be SQL. The application I'm building has somewhere in the region of 100,000 records in its database, and once every single record has been processed by the users of the application, they all go off and get sent to a different application entirely. So for a short period of time, this application will be in use, and then stops being used until the same time next year. While the application is in use, no external sources will be touching the database at all.
When the (Node) server starts up, it loads everything from the database, into an object literal on the server.
The client-side of this application, on a very basic level, makes requests (to an API on the server) for data, and sends updated versions of records back to the server once they've been processed.
So here's where it gets weird: Let's say I don't want to have the client-side application have to directly retrieve records from the database, nor do I want it to be able to write to them. So the data from the entire database already exists in memory on the server. There's a module on the server that can handle changing the representation of that data already (again, because the client application only interacts with APIs on the server, the database module exists to facilitate this).
Multiple users access the system at once, but due to the way the system works, it is not possible for two users to be sent the same record, so two users will never be sending an update back for the same record (records are processed individually, and sequentially).
So, let's say that I decided that, since I was already managing all of this data in memory on the server, I would just send an updated version of the current data, in its entirety, back to the database, every time it changed.
The question is, where does this rank on the crazy scale?
Performance would obviously suffer from writing the entire database rather than single records. But in a database that is only read from once (on start-up of the application), is that even a concern? If every operation other than "write all the stuff when any of the stuff changes" happens in memory on the server, does it matter how long those updates actually take? And if a new update comes in while the database is being written, surely SQL will take care of this?
It feels like the correct way to do this of course, is to have each user directly getting their info from the database, and directly making updates to the database too (or at least interacting with API endpoints to make this happen), but, is just...not doing that, utter lunacy?
Like I said, I know it's weird, but other than the fact that "it feels kind of wrong", I'm not sure I'm convinced that it is in fact entirely wrong. So I figured that this place would have an opinion.
The way that I think it currently works is:
[SQL DB] is updated whenever a change happens on {in-memory DB}
{in-memory DB} is updated in various ways based on API calls to the server
The client makes requests for data, and sends updates to data, both of which are processed against the in-memory DB
Multiple requests can happen at the same time from the application, but multiple users cannot see the same record, because records are allocated to a given user before they're sent
Multiple updates can come from multiple users, each of which ultimately ends with the entire SQL database being overwritten with the contents of the in-memory DB.
(Note: I'm not saying "is this the best way to do this". I'm just asking, is there a significant argument for caring about the performance of a database being written to, if it's not going to be read from again unless the server needs to be restarted)
What I think that I would do, in this situation, is to add an attribute to each cached record to indicate that the record is "dirty." In other words, that something has been done to it, by someone, since it was originally read from the database.
(You could also add an attribute indicating that someone "has this particular record 'checked out,'" so that you can be sure that two users are not updating the same record at the same time.)
At some convenient moment, you can then walk through the collection, posting the "dirty" records back to the database. Use an SQL Transaction, not only for efficiency but also to be sure that the final update to the database is atomic.
You will need to be very mindful of the possibility of race-conditions. One possible strategy is to use a Unix timestamp as a "dirty" indicator. A record is selected for posting to the database only if its "dirty-time" is greater-than or equal-to the timestamp when the commit-process was last run.
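A sketch of that dirty-flag-and-flush idea in Node, assuming a generic promise-based SQL client (`db.query`) and a `records` table; both names, and the placeholder syntax, are illustrative:

```javascript
// Mark a record dirty whenever the in-memory copy changes.
function updateRecord(cache, id, changes) {
  const record = cache[id];
  Object.assign(record, changes);
  record.dirty = Date.now(); // Unix-timestamp "dirty" indicator
}

// Periodically flush dirty records inside a single transaction.
async function flushDirty(cache, db) {
  const commitStartedAt = Date.now();
  const dirty = Object.values(cache).filter(
    (r) => r.dirty && r.dirty <= commitStartedAt
  );
  if (dirty.length === 0) return;

  await db.query('BEGIN');
  try {
    for (const r of dirty) {
      await db.query('UPDATE records SET data = $1 WHERE id = $2', [
        JSON.stringify(r.data),
        r.id,
      ]);
    }
    await db.query('COMMIT');
    // Clear only flags set before the commit started; a record edited
    // mid-flush keeps a newer timestamp and stays dirty for next time.
    dirty.forEach((r) => {
      if (r.dirty <= commitStartedAt) delete r.dirty;
    });
  } catch (err) {
    await db.query('ROLLBACK');
    throw err;
  }
}
```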
(And, P.S.: "no, I've seen even 'weirder' things than this, in all my crazy years in this crazy business...")
I have a CQRS application with eventual consistency between the event store and the read model. In it I have a list of items and under the list a "Create new" button. When a user successfully creates a new item he is directed back to the list but since the read model has not been updated yet (eventual consistency) the item is missing in the list.
I want to fake the entry in the list until the read model has been updated.
How do I best do that and how do I remove it when the new item is present in the actual list? I expect delays of about 60 seconds for the read model to catch up.
I do realize that there are simpler ways to achieve this behavior without CQRS but the rest of the application really benefits from CQRS.
If it matters, the application is a C# MVC 4 application. I've been thinking of solutions involving HTML5 Web Storage but want to know what the best practice is for solving this kind of problem.
In this situation, you can present the result in the UI with total confidence. There is no difference in presenting this information directly and reading it from the read model.
Your domain objects are up to date with the UI, and that's what really matters here. Moreover, if you correctly validate your AR (aggregate root) state in every operation and keep track of concurrency with the AR's version, then you're safe and your model is protected against invalid operations.
In the end, what is the probability of your UI going out of sync? That can happen when many users are modifying the information you're displaying at the same time, and it can be avoided by creating a task-based UI and following the rule "one command/operation on the AR per request".
The read model can be unsynced until the denormalizers do their job.
On the other hand, if the command will generate a conversation (a long-running operation) between a saga and ARs, then you cannot do this and must warn the user about it.
It doesn't matter that it's an ASP.NET MVC app. The only solution I see, besides just telling the user to wait a bit, is to have another, but this time synchronous, event handler that generates the same read model (of course the actual model generation should be encapsulated in a service) and sends it to a memory cache.
Everything being in memory makes it very fast, and being synchronous means it's automatically executed before the request ends. I'm assuming the command is executed synchronously too.
Then in your query repository you also consider results from the cache, removing an entry once that result is already returned by the db.
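The question is about C#/MVC 4, but the merge logic is language-agnostic; here is a sketch in JavaScript, with `memoryCache` (a Map keyed by id) and `readDb` as illustrative stand-ins:

```javascript
async function getItems(memoryCache, readDb) {
  const dbItems = await readDb.findItems();
  const dbIds = new Set(dbItems.map((item) => item.id));

  // Keep cached (synchronously generated) entries only until the
  // denormalized read model catches up and returns them itself.
  const pending = [];
  for (const item of memoryCache.values()) {
    if (dbIds.has(item.id)) {
      memoryCache.delete(item.id); // read model caught up; drop the fake
    } else {
      pending.push(item);
    }
  }
  return [...pending, ...dbItems];
}
```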
Personally, for things that I know I want to be available to the user and where the read model generation is trivial, I would use only synchronous event handlers. The user doesn't mind waiting a few seconds when submitting something and if updating a read model takes a few seconds, you know you have a backend problem.
As I see it, eventual consistency is only applicable when the application environment has multiple front-end servers hosting the application, each with its own copy of the read model, while all servers use the same event store.
When something is written to the event store, the read model used to return the result to that user must be updated in sync with the event store. The rest of the servers, and the read models they manage, can be updated with eventual consistency.
This way the result to the user (the list of items) can be read from the local read-model copy, because it has already been updated synchronously. No need for special complex fake updates/rollbacks.
The only case where the user can see an incomplete list is when they hit F5 to refresh the list after an update and load balancing directs the request to a front-end server whose read model is not yet updated (the 60-second delay), but this can be avoided by making sure load balancing does not switch a user's server in the middle of a session.
So, if the application has only one front-end server, eventual consistency is not very usable; it gives no benefit without some special fake updates/rollbacks in the read model...
I'm building a web chat application using CodeIgniter. In this chat app there is only one channel/room. I'd like to know the best practice for storing the data, whether in a database or a file, in order to save bandwidth and page load time.
P.S.: I use JavaScript's setInterval to reload the chat div every x seconds.
If you want to use a file, you could use reverse Ajax/Comet; it is faster and uses less bandwidth because it relies on long polling.
If you use a database with plain Ajax polling, it is slower because the client constantly checks the database for new chat messages, fetches the rows, and displays them, which takes more time and, I think, more bandwidth.
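For comparison, the client side of a long-polling loop can be as small as this; the `/chat/poll` endpoint, `since` parameter, and `appendToChatDiv` helper are illustrative, and the server would hold the request open until new messages exist:

```javascript
let lastMessageId = 0;

async function pollMessages() {
  try {
    // The server holds this request open until there is something newer
    // than lastMessageId (or a timeout), instead of answering instantly.
    const res = await fetch(`/chat/poll?since=${lastMessageId}`);
    const messages = await res.json();
    for (const msg of messages) {
      appendToChatDiv(msg); // hypothetical renderer
      lastMessageId = Math.max(lastMessageId, msg.id);
    }
  } catch (err) {
    await new Promise((r) => setTimeout(r, 5000)); // back off on errors
  }
  pollMessages(); // immediately re-open the connection
}

pollMessages();
```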