Is it wrong to make multiple simultaneous ajax requests to different endpoints of a REST API that end up modifying the same resource?
Note: each endpoint will modify different properties.
For example, let's assume that one endpoint modifies some properties of an order, like order_date and amount, and another endpoint sets the link between the same order and a customer by changing the customer_id value in the orders table (I know this may not be the best example, since all these updates could be done through a single endpoint).
Thanks in advance!
This is entirely a requirements-based question. It is generally a bad idea to have a single resource changed by multiple processes, but this ONLY matters if there is a consistency relationship between the data. Consider some of the following questions:
If one or more of the AJAX calls fails, will that break your application? If it will, then yes, this is a bad idea. Will your application carry on regardless of what data you have at any given time? If so, then no, this doesn't matter.
Take some time to figure out what dependencies you have between your data calls and you will get your answer.
What you are describing is not a shared resource, even if it is stored in the same object, because you are modifying different properties. However, take great care when using the same object if one request to the server depends on properties that are modified by the other request.
In general it's not a good idea to use the same object to store data that is modified by more than one asynchronous function, even if the properties are different. It makes your code confusing and harder to maintain, since you have to manually coordinate your function calls to prevent race conditions.
There are better ways to manage your asynchronous code, such as Promises or Observables.
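For example, a minimal sketch of coordinating two independent updates with Promises (the endpoint URLs and field values are made up for illustration, and fetch stands in for whatever ajax helper you use):

// Both requests run concurrently; the result is only used once both have settled.
var updateOrder = fetch('/api/orders/42', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ order_date: '2024-05-01', amount: 99.5 })
});

var linkCustomer = fetch('/api/orders/42/customer', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ customer_id: 7 })
});

Promise.all([updateOrder, linkCustomer])
  .then(function (responses) {
    // Both requests completed; check responses[i].ok before treating the updates as successful.
  })
  .catch(function (err) {
    // At least one request failed; decide whether a partial update is acceptable for your app.
  });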
It's a bad idea in general. But if your code is small and you can manage it, you can do it, though it's not recommended.
In the long run it will cause you many problems: confusion, harder maintenance, consistency issues, etc.
And if another developer ever has to take over your code, it will be even more confusing and difficult for them.
In programming, always keep things flexible and think about the long run. Your requirements can change in the future; what will you do then, rewrite the whole program? That is another thing you want to avoid.
Related
I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other. For example, I hit one endpoint and get a collection of authors, then I need to hit a second endpoint to pull the books each of those authors has written.
Right now I have two separate publish functions on the server side to get the sets of data; however, the second one relies on the data from the first. (My initial foray in my app was simply to do it all in one publish, but this didn't feel like the best architecture.)
Is there any way to subscribe to another publish from within a publish, server side? Or is there some other method of checking that I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages available, but they seem relatively heavy-handed and don't necessarily seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the db and Meteor's reactivity.
E.g. you can start by writing the code that hits the remote REST APIs. I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the authors data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now.
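As a rough sketch of that function (the URLs, collection names, and response shapes are assumptions, and fetch stands in for whatever HTTP client you use on the server):

// Fetch the authors, then each author's books, and write both to their
// collections with the foreign key intact.
function importAuthorsAndBooks() {
  return fetch('https://api.example.com/authors')
    .then(function (res) { return res.json(); })
    .then(function (authors) {
      return Promise.all(authors.map(function (author) {
        Authors.upsert({ externalId: author.id }, { $set: author });
        return fetch('https://api.example.com/authors/' + author.id + '/books')
          .then(function (res) { return res.json(); })
          .then(function (books) {
            books.forEach(function (book) {
              book.authorId = author.id; // keep the foreign key intact
              Books.upsert({ externalId: book.id }, { $set: book });
            });
          });
      }));
    });
}

Depending on your Meteor version you may need to wrap the callbacks that write to collections in Meteor.bindEnvironment, and for now you can expose the whole thing as a Meteor method tied to the button press.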
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the db, your client gets the updated data.
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50 times.
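If a simple timer turns out to be enough to drive it, something like this on the server would do (the interval is arbitrary, and importAuthorsAndBooks is the illustrative function from the sketch above):

// Refresh from the REST API every 10 minutes instead of on every subscribe.
Meteor.startup(function () {
  Meteor.setInterval(importAuthorsAndBooks, 10 * 60 * 1000);
});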
Is there any practical difference between keeping several simple (plain) subscriptions and keeping a single complex (many levels) one? (with publish-composite, for example)
Seems to me that there shouldn't be any difference, but I wanted to be sure. I prefer sticking to plain subs as it seems to make the code clearer in highly modular projects, but only if that wouldn't bring any performance or scalability issues.
So, can someone help me?
There are two key differences between keeping several plain subscriptions and keeping a single complex composite subscription:
1) Exposure/Privacy
A composite subscription allows you to perform joins/filters on the server side to ensure that you only send data that the current user has authority to see. You don't want to expose your entire database to the client. Keep in mind that even if your UI is not showing the data, the user can go into the console and grab all the data that your server publishes.
2) Client performance
Performing joins/filters on the client can be expensive if you have a large dataset. This is of course dependent on your application. Additionally, if the database is constantly being updated but those updates should not be visible to the user, you will constantly be transferring updates to the client without deriving any benefit from the network expense.
I think this question can't be given a precise answer without more details specific to your application. That being said, I think it's an important question, so I'll outline some things to consider.
To be clear, the focus of this answer will be debating the relative merits of server-side and client-side reactive joins.
decide if you need reactivity
You can produce a simple join of multiple collections without any reactivity in the publisher (see the first example from the article above). Depending on the nature of the problem, it may be that you don't really need a reactive join. Imagine you are joining comments and authors, but your app always has all of the possible authors published already. In that case the fundamental flaw in non-reactive joins (missing child documents after a new parent) won't exist, so a reactive publication is redundant.
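For example, a minimal non-reactive join in a publisher could look something like this (collection and field names are assumptions):

// Publishes 50 comments plus the authors they reference at the moment the
// subscription starts. Authors referenced by comments created later will not
// be pushed down; that is the non-reactive limitation described above.
Meteor.publish('commentsWithAuthors', function () {
  var authorIds = Comments.find({}, { limit: 50 }).map(function (c) {
    return c.authorId;
  });
  return [
    Comments.find({}, { limit: 50 }),
    Meteor.users.find({ _id: { $in: authorIds } })
  ];
});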
consider your security model
As I mention in my article on template joins, server-side joins have the advantage of bundling all of your data together, whereas client-joins require more granular publishers. Consider the security implications of having a publisher like commentsAndAuthors vs two generic implementations of comments and users. The latter suggests that anyone could request an array of user documents without context.
server joins can be CPU and memory hogs
Look carefully at the implementation of the library you are considering for your server-side joins. Some of them use observe which requires that each complete document in the dependency chain be kept in memory. Others are implemented only on observeChanges which is more efficient but makes packages a bit less flexible in what they can do.
look for observer reuse
One of your goals should be to reuse your observers. In other words, given that you will have S concurrent subscriptions, you will only end up doing ~(S-I) units of work, where I is the number of identical observers across clients. Depending on the nature of your subscriptions, you may see greater observer reuse with more granular subscriptions, but this is very application specific.
beware of latency
A big advantage of server-side joins is that they deliver all of the documents effectively at once. Compare that to a client join, which must wait for each set of parent documents to arrive before activating the child subscriptions. An N-level client join would need N round-trips before the initial set of documents is delivered to the client.
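To make those round-trips concrete, a two-level client join typically looks something like this (publication and field names are assumptions):

// First round-trip: the comments. Only after they have arrived can the author
// ids be collected and the second subscription started.
Tracker.autorun(function () {
  var commentsSub = Meteor.subscribe('comments');
  if (commentsSub.ready()) {
    var authorIds = Comments.find().map(function (c) { return c.authorId; });
    Meteor.subscribe('authorsById', authorIds); // second round-trip
  }
});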
conclusion
You'll need to take all of the above into consideration when deciding which technique to use for each of your publications. The reality is that benchmarking a live app on something like Kadira is the only way to arrive at a conclusive answer.
I am currently implementing a graph visualisation tool using Lift on the server side and d3 (a JavaScript visualisation framework) for all the visualisation. The problem I have is that in the script I want to get session-dependent data from the server.
So basically, my objective is to write lift-valid ajax callbacks in a static js script.
What I have tried so far
If you feel that the best solution is one that I already tried feel free to post a detailed answer telling me how to use it exactly and how it completely solves my problem.
Write the ajax callback in another script using lift and call it from the main script
This solution, which is similar to a hidden text input, is probably the most likely to work. However, it is not elegant, and it would mean that I would have to load a lot of scripts on page load, which is not really convenient.
This seems to be one of the preferred solutions in the Lift community, as explained in this discussion on the mailing list.
REST interface
Usually what one would do to get data from a javascript function in lift is to create a REST interface. However this interface will not be linked to any session. This is the solution I got from my previous question: Get json data in d3 from lift snippet
Give function as argument of script
Another solution would be to pass the ajax callback as an argument of the main script called to generate my graph. However, I expect to have a lot of callbacks and I don't want to have to mess with the arguments of my script.
Write the whole script in lift and then serve it to the client
This solution can be elegant; however, my script is very long and I would really prefer that it remain static.
What I want
On client side
While reviewing the source code of my webpage I found that the callback for an ajaxSelect is:
<select onchange="liftAjax.lift_ajaxHandler('F966066257023LYKF4=' + encodeURIComponent(this.value), null, null, null)" name="F96606625703QXTSWU" id="node_delete" class="input">
Moreover, there is a variable containing the state of the page at the end of the webpage:
var lift_page = "F96606625700QRXLDO";
So, I am wondering if it is possible to simulate a valid ajax call using this liftAjax.lift_ajaxHandler function. However, I don't know the exact syntax to use.
On server side
Since I "forged" a request on the client side, I would now like to receive the request on the server side and dispatch it to the correct function. This is where the LiftRules.dispatch object seems the best solution: when it is called, all the session management has been done (the request is authenticated and linked to a session). However, I don't know how to write the correct piece of code in the append function.
Remark
In Lift, all variable names are changed to random strings in order to increase security. I would like to have the same behavior in my application, even if that will probably mean that I will have to "give" the JavaScript these values. However, an array of 15 string values is still a better tradeoff than 15 functions as arguments of a JavaScript function.
Edit
While continuing my research I found this page: Mapping server functions to client actions, which somehow explains the goal of named functions, even if it still didn't lead me to a working solution.
Quick Answer
Rest in Lift does not have to be stateless. If you register your RestHelper with LiftRules.dispatch.append, then it will be handled statefully and Session information will be available through the S object as usual.
Long Answer
Since you seem interested, and it's come up on SO before, here's a more detailed explanation of how server-side functions are registered and called in Lift. If you haven't worked with Lift for some time, look away. What follows should not in any way be used to evaluate Lift or its complexity. This is purely library developer level stuff and a majority of Lift users go about their development blissfully unaware of it.
How it works
When you create stateful callbacks, typically by using the methods within the SHtml object, what you are really doing is registering objects of type S.AFuncHolder within the context of the users session, each with a unique ID. The unique ID that was generated during this process is what you're seeing when you come across a pattern like F96606625700QRXLDO. When data is submitted, via form post, ajax, or whatever, Lift will check the request for these function ids and execute the associated function if they exist. There are several helpers that provide more specific types of AFuncHolder, like S.SFuncHolder (accepts a single string query parameter) and S.BinFuncHolder (parameter is multipart form data) but they all return Any and behind the scenes Lift will collect those return values to create the proper type of response. A JsCmd, for instance, will result in a JavaScriptResponse that executes the command. You can also return a LiftResponse directly.
How to use it
AFuncHolders are registered using the S.fmapFunc method. You'd call it like this
S.fmapFunc(SFuncHolder({ (str: String) =>
  doSomethingAwesomeWithAString(str)
}))(id => <input type="text" name={id} value=""/>)
The first parameter is your function, wrapped in the proper *FuncHolder type and the second parameter is a function that takes the generated id and outputs something. The something that gets output is what you will include on the page. It should somehow result in the id being sent to the server as a query parameter so that your function is executed.
Putting it all together
You could use the above to make your own Ajax calls, but when Lift makes an ajax call there are a few other considerations:
1) Most browsers only allow so many simultaneous connections to a given domain. Three seems to be the magic number.
2) AFuncHolders will often close over the scope of the snippet they are contained within and if multiple ajax requests are handled at once, each in its own thread, bad things can happen.
To combat these issues, the liftAjax.lift_ajaxHandler function queues each ajax request, ensuring that only one at a time is sent to the server.
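The idea behind that queue, in miniature (this is an illustration of the strategy, not Lift's actual implementation):

var queue = [];
var inFlight = false;

function enqueue(sendRequest) {
  queue.push(sendRequest);
  drain();
}

function drain() {
  if (inFlight || queue.length === 0) return;
  inFlight = true;
  var sendRequest = queue.shift();
  // Each queued job calls done() when its ajax request completes,
  // which releases the next one.
  sendRequest(function done() {
    inFlight = false;
    drain();
  });
}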
The drawback to this approach is that it can make it difficult to make an Ajax call where the result needs to be passed to a callback. jQuery autocomplete, for instance, provides a callback function when input changes that accepts a list of matches. If you are manually calling liftAjax.lift_ajaxHandler, though, you can provide your own callback functions for success and error, and I would recommend that you look at the source of those functions in your browser for more information on how they work.
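A manual call might look roughly like this; the function id is whatever Lift generated for your registered callback, and you should verify the exact parameter list against the liftAjax source served with your page:

// Assumption: funcId is copied from the markup Lift rendered for your
// callback, and someValue is the payload you want to send to it.
var funcId = "F96606625703QXTSWU";
var someValue = document.getElementById("node_delete").value;
liftAjax.lift_ajaxHandler(
  funcId + "=" + encodeURIComponent(someValue),
  function (data) { /* success callback */ },
  function () { /* error callback */ },
  "json" // expected response type; check the generated liftAjax code
);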
There's actually more to it, like how Lift restores RequestVars on ajax callbacks (which is where the lift_page comes in), but that's about all I'm prepared to explain over coffee on a Saturday morning :)
Good luck with your app!
So, I'm trying to improve my javascript skills and get into using objects more (and correctly), so please bear with me, here.
So, take this example: http://jsfiddle.net/rootyb/mhYbw/
Here, I have a separate method for each of the following:
Loading the ajax data
Using the loaded ajax data
Obviously, I have to wait until the load is completed before I use the data, so I'm accessing it as a callback.
As I have it now, it works. I don't like adding the initData callback directly into the loadData method, though. What if I want to load data and do something to it before I use it? What if I have more methods to run when processing the data? Chaining this way would get unreadable pretty quickly, IMO.
What's a better, more modular way of doing this?
I'd prefer something that doesn't rely on jQuery (if there even is a magical jQuery way), for the sake of learning.
(Also, I'm sure I'm doing some other things horribly in this example. Please feel free to point out other mistakes I'm making, too. I'm going through Douglas Crockford's JavaScript: The Good Parts, and even for a rank amateur it's made a lot of sense, but I still haven't wrapped my head around it all.)
Thanks!
I don't see a lot that should be different. I made an updated version of the fiddle here.
A few points I have changed though:
Use the var keyword for local variables, e.g. self.
Don't store temporary data, such as ajaxData, as the object's state, since you are likely to use it only once.
Encapsulate as much as possible: instead of calling loadData with the object's ajaxURL, let the object decide which URL it should load its data from (see the sketch after this list).
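A minimal sketch along those lines (the URL, property names, and render function are assumptions, not the ones from the fiddle):

// The object knows its own URL; the caller decides what to do with the data
// by passing a callback, so loading and using the data stay separate.
function DataSource(url) {
  this.url = url;
}

DataSource.prototype.load = function (onReady) {
  var self = this;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", self.url);
  xhr.onload = function () {
    onReady(JSON.parse(xhr.responseText));
  };
  xhr.send();
};

// Usage: any intermediate processing happens in the callback, without
// touching load() itself.
var source = new DataSource("/data.json");
source.load(function (data) {
  var prepared = data.items.slice(0, 10);
  render(prepared); // render() is a stand-in for however you display the data
});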
One last remark: don't try to meet requirements you don't have yet, even if they might come up in the future (I'm referring to your "What if...?" questions). If you try, you will most likely find out that you either don't need that functionality, or that the requirements are slightly different from what you expected. If you get a new requirement, you can always refactor your model to meet it. So, design for change, but not for potential change.
I'm building a web app with a lot of ajax calls to be made.
Should I try to keep a small number of methods and just pass in information about what type of request it is, then switch based on that type inside the method,
or
many smaller methods, so I don't have to pass in a type, but with more code to write setting up each method?
Currently I'm deriving the type from the id of the element being interacted with in the HTML, and this tells me what I'm trying to do:
row-action-data-id (I then split this in the functions to work out what needs doing.)
Are there any best practices for patterns like this?
It's a judgement call. You always want to refactor out duplicate code as much as possible, but it's also important that your code stays readable and maintainable.
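One common middle ground is a single low-level helper that owns the duplicated plumbing, plus a small named wrapper per action, so callers never pass a type string around. A sketch, with illustrative names and URLs:

// Shared plumbing lives in one place...
function apiRequest(method, url, body, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open(method, url);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onload = function () { onDone(null, xhr.responseText ? JSON.parse(xhr.responseText) : null); };
  xhr.onerror = function () { onDone(new Error("request failed")); };
  xhr.send(body ? JSON.stringify(body) : null);
}

// ...and each action gets a small, readable wrapper instead of a type switch.
function deleteRow(id, onDone) { apiRequest("DELETE", "/rows/" + id, null, onDone); }
function updateRow(id, data, onDone) { apiRequest("PUT", "/rows/" + id, data, onDone); }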