What is a Node LRU cache? Can anyone explain how to implement one? Let's say I have three layers, client - mid-layer (handles calls) - backend (MongoDB), and the LRU cache should be implemented in the mid-layer.
It would be nice if there were a simple example just showing how it works! Thanks in advance.
There's an example of how to use it in the source repository: https://github.com/isaacs/node-lru-cache/tree/master/test
I'm assuming you want the LRU to persist to MongoDB? If that's the case, you'll need to extend or rewrite the library, as it looks like a simple in-memory LRU cache module at first glance.
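In essence, an LRU (least-recently-used) cache is a size-bounded map: reads and writes mark an entry as fresh, and once the limit is hit the stalest entry is evicted. A minimal read-through sketch for your mid-layer might look like this; `db` and the `users` collection are placeholders for your own MongoDB handle, and lru-cache's constructor options vary slightly between versions:

    // Minimal read-through cache in the mid-layer, using lru-cache's
    // get/set API. `db` is a placeholder for your MongoDB driver handle.
    var LRU = require('lru-cache');
    var cache = new LRU({ max: 500 }); // past 500 entries, the least
                                       // recently used one is evicted

    function getUser(id, callback) {
      var hit = cache.get(id);             // get() also marks it fresh
      if (hit) return callback(null, hit); // served from memory, no DB call

      db.collection('users').findOne({ _id: id }, function (err, user) {
        if (err) return callback(err);
        cache.set(id, user);               // store for subsequent requests
        callback(null, user);
      });
    }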
You'll also want to consider Redis's sorted sets for this. If you have multiple frontend server instances, keeping a separate in-memory LRU instance on each server will lead to them getting out of sync. Redis's sorted sets are a natural fit for this problem and are extremely fast.
You can use a timestamp as the score to keep members ordered by most recent use, and the set can be read and updated atomically via transactions. It will suit the purposes of a cache well.
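A rough sketch with the classic callback-style node redis client; the key name and the 1000-entry cap are arbitrary choices:

    var redis = require('redis');
    var client = redis.createClient();

    // Record that `key` was just used; score = timestamp keeps the set
    // ordered from least to most recently used.
    function touch(key) {
      client.multi()                            // MULTI/EXEC = atomic update
        .zadd('lru:index', Date.now(), key)
        .zremrangebyrank('lru:index', 0, -1001) // trim to the newest 1000 keys
        .exec(function (err) {
          if (err) console.error(err);
        });
    }

    // The ten least recently used keys, oldest first:
    client.zrange('lru:index', 0, 9, function (err, oldest) {
      console.log(oldest);
    });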
I'm working on a Vue app that uses Vuex and gets objects from an API. The tables have paging and fetch batches of objects from the API, sometimes including related entities as nested objects. The UI allows some editing via inputs in a table, and adding via modals.
When the user wants to save all changes, I have a problem: how do I know what to patch via the api?
Idea 1: capture every change on every input and mark the object being edited as dirty
Idea 2: make a deep copy of the data after the fetch, and do a deep comparison to find out what's dirty
Idea 3: this is my question: please tell me that idea 3 exists and it's better than 1 or 2!
If the answer isn't idea 3, I'm really hoping it's not idea 1. There are so many inputs to attach change handlers to, and if the user edits something, then re-edits back to its original value, I'll have marked something dirty that really isn't.
The deep copy / deep compare at least isolates the problem to two places in code, but my sense is that there must be a better way. If this is the answer (also hoping not), do I build the deep copy / deep compare myself, or is there a package for it?
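For what it's worth, packages exist for both halves (lodash's cloneDeep and isEqual, for example), and a hand-rolled version of idea 2 can be as small as the following sketch. It only works for plain, serializable data with stable key order:

    let snapshot;

    function onFetched(rows) {
      // Deep copy of the pristine data, taken right after the fetch.
      snapshot = JSON.parse(JSON.stringify(rows));
    }

    function dirtyRows(rows) {
      // Deep compare by re-serializing; crude, but it keeps dirt detection
      // in one place and ignores edits that were reverted to the original.
      return rows.filter(
        (row, i) => JSON.stringify(row) !== JSON.stringify(snapshot[i])
      );
    }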
It looks like you have the final state in the UI and want to persist it on the server. Instead of sending over the delta, I would just send over the full final state and overwrite whatever was on the server side.
So if you have user settings, instead of sending which settings were toggled, just send over "this is what the new set of settings is".
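A minimal sketch of that approach as a Vuex action; the endpoint and the axios client are assumptions:

    import axios from 'axios';

    const actions = {
      async saveSettings({ state }) {
        // No diffing and no dirty tracking: the server just overwrites
        // its copy with whatever the UI currently holds.
        await axios.put('/api/settings', state.settings);
      }
    };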
Heavy lifting needs to be done on the server rather than the client most of the time, so I'd follow the answer given by Asad. You're not supposed to compute huge object diffs on the client; it's 2022, so we need to think about performance.
Of course, it also depends on your app and what this is all about. Maybe your API guy is opposed to it for a specific reason (not only related to performance). Set up a meeting with your team/PO and check what is feasible.
You can always build something on your side too; looping over all inputs should be feasible without wiring each one up manually.
TL;DR: this needs to be a discussion within your company, given your very specific constraints/limitations. All the "reasonable solutions" are already listed, and you probably won't be able to go further, because these kinds of opinion-based questions aren't allowed on SO anyway.
Is it wrong to make multiple simultaneous AJAX requests to different endpoints of a REST API that end up modifying the same resource?
Note: each endpoint will modify different properties.
For example, let's assume that one endpoint modifies some properties of an order, like order_date and amount, and another endpoint sets the link between the same order and a customer by changing the customer_id value in the orders table (I know this may not be the best example, since all these updates could be done with one endpoint).
Thanks in advance!
This is totally a requirements-based question. It is generally a bad idea to have a single resource changed by multiple processes, but this ONLY matters if there is a consistency relationship between the data. Consider some of the following questions:
If one or more of the AJAX calls fails, will that break your application? If it will, then yes, this is a bad idea. Will your application carry on regardless of what data you have at any given time? If so, then no, this doesn't matter.
Take some time to figure out what dependencies you have between your data calls and you will get your answer.
What you are describing is not really a shared resource, even if it is stored in the same object, because you are modifying different properties. However, take great care when using the same object if one request to the server depends on properties that are modified by the other request.
In general, it's not a good idea to use the same object to store data that is modified by more than one asynchronous function, even if the properties are different. It makes your code confusing and harder to maintain, since you have to manually coordinate your function calls to prevent race conditions.
There are better ways to manage your asynchronous code, using Promises or Observables, as sketched below.
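For instance, a sketch with Promises and fetch (the URLs and payloads are made up): if the two updates are independent, run them in parallel and handle failure in one place; if one depends on the other, chain them instead.

    const patch = (url, body) =>
      fetch(url, {
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body)
      });

    // Independent properties: run in parallel, fail in one place.
    Promise.all([
      patch('/api/orders/42', { order_date: '2022-01-01', amount: 100 }),
      patch('/api/orders/42/customer', { customer_id: 7 })
    ])
      .then(() => console.log('order fully updated'))
      .catch(err => console.error('one of the updates failed', err));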
It's a bad idea in general, but if your code is small and you can manage it, then you can do it, though it's not recommended.
In the long run, it will cause you many problems: confusion, harder maintenance, consistency issues, etc.
And if another developer ever has to work on your code, it will be even more confusing and difficult for them.
In programming, always keep things flexible and think about the long run. Your requirements can change in the future; what will you do then, rewrite the whole program? That is one more thing you want to avoid.
I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other. For example: I hit one endpoint and get a collection of authors, then I need to hit a second endpoint to pull the books each of the authors has written.
Right now I have two separate publish functions on the server side to get the two sets of data; however, the second one relies on the data from the first. (My initial foray was simply to do it all in one publish, but this didn't feel like the best architecture.)
Is there any way to subscribe to one publish from within another publish on the server side? Or some other kind of check I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages available, but they seem relatively heavy-handed and don't necessarily seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the DB and Meteor's reactivity.
E.g. you can start by writing the code that hits the remote REST APIs. I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the authors data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now.
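A sketch of that function, where fetchJSON is an assumed promise-returning helper around your REST calls, Authors and Books are Mongo collections, and the URLs are invented:

    function syncAuthorsAndBooks() {
      return fetchJSON('https://example.com/api/authors').then(function (authors) {
        // One books call per author, all tied together with promises.
        return Promise.all(authors.map(function (author) {
          return fetchJSON('https://example.com/api/authors/' + author.id + '/books')
            .then(function (books) {
              // On older Meteor, wrap these callbacks in Meteor.bindEnvironment.
              Authors.upsert({ apiId: author.id }, { $set: author });
              books.forEach(function (book) {
                // Keep the foreign key intact so the join works later.
                Books.upsert(
                  { apiId: book.id },
                  { $set: Object.assign({ authorId: author.id }, book) }
                );
              });
            });
        }));
      });
    }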
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the DB, your client gets the updated data.
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50x.
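For the cron-style driver, something as simple as a server-side interval works, reusing the sync function sketched above; the 15-minute period is arbitrary:

    Meteor.startup(function () {
      syncAuthorsAndBooks(); // prime the collections once at boot
      // Refresh on a fixed schedule instead of per-subscription,
      // so 50 subscribers never mean 50 REST calls.
      Meteor.setInterval(syncAuthorsAndBooks, 15 * 60 * 1000);
    });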
I'm using jasmine-node to test my API, and it has worked great for my GET routes. Now, however, I need to test some POSTs and I'm not sure how to go about this without changing my database.
One thought I had was to reset whatever value I change at the end of each spec.
Is this reasonable or is there a better way to go about testing POST requests to my API?
Wrap anything that modifies your database in a transaction. You can apply your database changes and then roll back after each test.
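A minimal sketch of that pattern, assuming your specs talk to a SQL database directly through the pg driver; if the writes go through your API's own connection instead, you'd need to point the server at the same transaction, which is more involved:

    var pg = require('pg');
    var client = new pg.Client(); // connection settings come from env vars
    client.connect();

    beforeEach(function (done) {
      // Open a transaction so everything the spec writes can be undone.
      client.query('BEGIN', function () { done(); });
    });

    afterEach(function (done) {
      // Roll back, leaving the database exactly as it was before the spec.
      client.query('ROLLBACK', function () { done(); });
    });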
Usually you are supposed to have a test database, so modifying that one is not a big issue. Also, a general approach is not to rely on predefined values in the database (i.e., the GET always requesting the SAME object) but to try different objects each time (using predefined objects may hide problems when the data is slightly different).
To implement the second strategy, you can execute a test that POSTs pseudo-random data to create a new object, then use the returned ID to feed the following GET, UPDATE, and finally DELETE tests.
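A sketch of that flow with jasmine-node and the request library; the /api/books endpoint and status codes are assumptions:

    var request = require('request');
    var base = 'http://localhost:3000/api/books'; // assumed test server

    describe('books API', function () {
      var payload = { title: 'Book ' + Date.now() }; // pseudo-random data
      var id;

      it('creates a book', function (done) {
        request.post({ url: base, json: payload }, function (err, res, body) {
          expect(res.statusCode).toBe(201);
          id = body.id; // feed the ID to the following specs
          done();
        });
      });

      it('reads it back', function (done) {
        request.get({ url: base + '/' + id, json: true }, function (err, res, body) {
          expect(body.title).toBe(payload.title);
          done();
        });
      });

      it('deletes it, leaving no stale test data behind', function (done) {
        request.del(base + '/' + id, function (err, res) {
          expect(res.statusCode).toBe(204);
          done();
        });
      });
    });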
Just make a duplicate processing page/function and send the data to that for debugging. Comment out anything that makes changes to the database.
Alternatively, pass a variable in your call such as "debug" and have an if/else section in your original function for debugging, ignoring the rest of the function.
Yet another alternative is to duplicate your database table and name it something like debug_table. It will have the same structure as the original. Send the test data to it instead, and it won't change your original database tables.
I'm pretty sure that you've come up with some solution for your problem already.
BUT, if you haven't, Angular's $httpBackend will solve your problem. It is a "fake HTTP backend implementation suitable for unit testing applications that use the $http service."
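A sketch of a spec using it, so the POST never reaches a real server or database; the module and URL names are examples:

    describe('order service', function () {
      var $httpBackend, $http;

      beforeEach(module('myApp')); // your application module
      beforeEach(inject(function (_$httpBackend_, _$http_) {
        $httpBackend = _$httpBackend_;
        $http = _$http_;
      }));

      it('POSTs a new order without touching the database', function () {
        $httpBackend.expectPOST('/api/orders', { amount: 5 })
                    .respond(201, { id: 42, amount: 5 });

        var created;
        $http.post('/api/orders', { amount: 5 }).then(function (res) {
          created = res.data;
        });

        $httpBackend.flush(); // deliver the fake response synchronously
        expect(created.id).toBe(42);
      });

      afterEach(function () {
        $httpBackend.verifyNoOutstandingExpectation();
      });
    });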
Database access is often the slowest part of an application, so to accommodate that, are there any techniques to respond to a request by:
sending a static HTML structure
running a query on the data store
once the data returns from the query, push the data to the client (perhaps as JSON)
use JavaScript to update the HTML by adding text or changing value attributes
First, is this a bad idea? Having not found anything resembling this in my research over the last couple days I assume it is a bad one. However, if it is not, is it possible? And are there established techniques for doing this?
As has already been said, this is basically what an "ajax application" is. They are very easy to write nowadays, mostly because of the number of frameworks out there.
Check out http://sproutcore.com, http://javascriptmvc.com/ and http://cappuccino.org/ Those are "heavyweight" solutions, but depending on what you are building, that may suit your needs perfectly.
If those don't look like the sort of thing you want, I would take a look at http://dojotoolkit.org It is a javascript framework that pretty much handles everything you could imagine wanting to do in an integrated sort of way.
If you are already using jQuery, the best bet may be something like http://documentcloud.github.com/backbone/, http://knockoutjs.com/, or http://sammyjs.org/.
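Whatever the framework, the core of the pattern in the question is tiny; with jQuery it's roughly this (the /orders.json endpoint and markup are made up):

    // Static HTML served first: <ul id="orders"></ul>
    $(function () {
      $.getJSON('/orders.json', function (orders) {
        // Once the query returns, patch the placeholder markup.
        $.each(orders, function (i, order) {
          $('#orders').append($('<li>').text(order.id + ': ' + order.amount));
        });
      });
    });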
http://api.jquery.com/category/plugins/templates/
http://stanlemon.net/projects/jquery-templates.html