FIRST: I realize this question has been asked here: in ExtJS, is it better to call Model.save() or Store.Sync()? However, I wish to examine this further, specifically regarding minimizing XHRs and unnecessary overhead on both the client and the server. I do not feel either of these points was addressed in the linked question.
I have a somewhat large application designed for enterprise resource management, consisting of many models, views and controllers. I handle all responses from my server with a single listener for the Ext.Ajax requestComplete and requestException events. I took this approach rather than writing duplicate handlers for every model proxy's afterRequest event. It lets all of my back-end controllers (built on the Zend Framework) respond with three parameters: success, message and data.
After a successful request (i.e., HTTP 200), the handler for requestComplete inspects the JSON response for the aforementioned parameters. If success is false, an error message is expected in message, which is then displayed to the user (e.g. 'There was a problem saving that product. Invalid product name'). If success is true, action is taken depending on the type of request, i.e., Create, Read, Update or Destroy. After a successful create, the new record is added to the appropriate data store; after a delete, the record is destroyed; and so forth.
I chose this approach, rather than adding records to a store and calling the store's sync method, in order to minimize XHRs and unnecessary round trips. My current means of saving/updating data is to send the request to the back end and react to the result on the Ext front end. I do this by populating a model with data and calling model.save() for create/update requests, or model.destroy() to remove the data.
I found that when adding/updating/removing records from a store and then calling store.sync(), I had to react to the server's response in a way that felt awkward. Take, for example, deleting a record:
First, remove the record from the store via store.remove()
Invoke store.sync(), since the store's autoSync is set to false.
This fires the AJAX destroy request from the store's model proxy.
Here's where it gets weird: if there is an error on the server while dropping the row from the database, the response will return success: false, but the record will already have been removed from the ExtJS data store.
At this point, I can either call store.sync() or store.load() (both requiring a round trip), or get the record from the request and add it back to the store, followed by commitChanges(), thereby avoiding an additional sync/load and the unnecessary round trip it entails.
The same goes for adding records: if the server fails somewhere while inserting the data into the database, the record is still in the ExtJS store and must be removed manually to avoid a round trip through store.sync() or store.load().
In order to avoid this whole issue, as I previously explained, I instantiate one of my model objects (e.g. a Product model), populate it with data, and call myModel.save(). This, in turn, invokes the proxy's create or update depending on the ID of the model, and fires the appropriate AJAX request. In the event that the back-end fails, the front-end store is still unchanged. On successful requests (read: success: true, not HTTP 200), I manually add the record to the store and invoke store.commitChanges(true), effectively syncing the store with the database without an additional round trip and avoiding unnecessary overhead. For all requests, the server will respond with the new/modified data as well as a success parameter, and conditionally a message to display on the client.
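For illustration, a minimal sketch of that flow; the Product model, products store and field names here are hypothetical, not my actual code:

var product = Ext.create('MyApp.model.Product', {
    name: 'Widget',
    price: 9.99
});

product.save({
    success: function (record) {
        // Only touch the store once the server has confirmed the write;
        // the application-level success flag is checked by the global
        // requestComplete listener.
        var store = Ext.getStore('products');
        store.add(record);
        store.commitChanges(); // mark the store clean, no extra sync/load
    },
    failure: function () {
        // Nothing was added to the store, so there is nothing to roll back.
        Ext.Msg.alert('Error', 'The server could not save the product.');
    }
});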
Am I missing something here, or is this approach a good way to minimize XHRs and server/client overhead? I feel this is a rather general concept with fundamental code, but I am happy to provide more example code on request.
I think you have argued your position eloquently, and I don't see anything wrong with it. My only caveat is that the autoSync setting on a store backing an editable grid is a far less verbose way of accomplishing the task, albeit with less control.
To add to that: the overhead you point out typically comes from the unexpected, what I would call edge cases, which may need special handling or an extra refresh of data. You could add listeners for those specific cases and leave the rest functioning with terse defaults.
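For reference, the terser autoSync variant looks roughly like this (a sketch only: the model name and proxy URLs are made up, and the reader's root config key differs between ExtJS versions):

var store = Ext.create('Ext.data.Store', {
    model: 'MyApp.model.Product', // hypothetical model
    autoSync: true,               // every add/update/remove syncs immediately
    proxy: {
        type: 'ajax',
        api: {
            create: '/products/create',
            update: '/products/update',
            destroy: '/products/destroy'
        },
        reader: { type: 'json', root: 'data' }
    }
});

// Each mutation fires its own request right away: convenient behind an
// editable grid, but with less control over failure handling.
store.remove(store.first());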
I have two models to query. In the first call I include some data like this:
this.store.query('comment', {
  include: 'person,address'
});
And in the second call I include the same details that are already stored in the store:
this.store.query('post', {
  include: 'person,address'
});
So the API calls take a lot of time to resolve. Is there any way I can reuse the data included by the first API call in the second one, to create the relationship between those two models (person, address)?
This would save a lot of time for me.
Note: the examples are for testing purposes only.
You are using the query() method of Ember Data's store. It expects two arguments: the model name first and the query object second. The latter is passed directly to your backend as part of the request. The responsible code is quite simple: https://github.com/emberjs/data/blob/v3.10.0/addon/adapters/rest.js#L535-L560
If you are using the default JSONAPIAdapter, the requests executed by your method calls look like this:
this.store.query('comment', { include: 'person,address' });
=> GET /comments?include=person,address
this.store.query('post', { include: 'person,address' });
=> GET /posts?include=person,address
The API does not know from that request that the client already has some of the person and address records cached locally; Ember Data does not include that information by default. You could customize your adapter to do so, but I wouldn't recommend it, especially because it may blow up the request size and reduce the cache hit rate by a fair amount. Also, you may want to reload the locally cached records at some point.
If you expect to have most of the related records already cached locally, you may simply not want to ask the server to include them. In that case it might be cheaper to load the missing ones afterwards in a coalesced request.
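If you go that route, a sketch of the coalescing setup, assuming id-based rather than link-based relationships (the adapter shown uses the classic Ember Data 3.x style):

// app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  // Lets Ember Data batch individual findRecord calls for one type into
  // a single request, e.g. GET /people?filter[id]=1,2,3
  coalesceFindRequests: true
});

// Elsewhere: query without include, then touch the async relationships.
// Records already in the store resolve locally; missing ones are fetched
// in coalesced batches instead of one request per record.
this.store.query('post', {}).then((posts) => {
  posts.forEach((post) => post.get('person'));
});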
I am very new to Angular and this question has been nagging at me a lot. The scenario: suppose Angular's $http returns a model containing an array of objects like:
[{name:"Ankur",lastName:"aggarwal",updation_date:"23-08-2014"},{name:"xyz",lastName:"abc",updation_date:"29-08-2013"}]
Of these, updation_date is not required but is returned anyway. So is it right to extend the array with a third object that lacks updation_date, like { name: "def", lastName: "jbc" }? Is that good practice, or should the array's object model be consistent?
Also, what should the approach be: update the model array first so the binding takes effect instantly and then send it to the server, or send it to the server and get the updated object back? This might be a basic question, but I am very new to Angular and JMVC.
Is it a good practice, or should the array object model be consistent?
It depends. If the backend expects all array entries to contain updation_date, then you have no choice and are forced to add some sensible default value. However, if possible, avoid sending unnecessary data from the backend, since it impacts application performance (data transfer, extra logic to generate sensible default values, etc.).
Update the model array first so binding takes place instantly, then send it to the server, or send it to the server and get the updated object?
If the nature of your application permits reverting the model value when a save is unsuccessful, then just go ahead with the following (a rough code sketch follows these lists):
0. Perform data validation and make sure valid data is supplied to the backend.
1. Update the model.
2. Send the data to the backend.
3. If something bad happens, execute error handling depending on app needs.
However, if presenting consistent values in the GUI is of utmost importance (e.g. finance applications), then:
0. Perform data validation and make sure valid data is supplied to the backend.
1. Show some message to the user, like "saving".
2. Perform the AJAX request.
3. If successful, update the model; otherwise execute error handling depending on app needs.
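A rough sketch of the first (optimistic) flow with a revert on failure; $scope.people, addPerson and the /api/people endpoint are illustrative names:

// Optimistic update: mutate the model first, roll back if the save fails.
$scope.addPerson = function (person) {
  // 0. basic client-side validation
  if (!person.name || !person.lastName) { return; }

  // 1. update the model so the binding refreshes instantly
  $scope.people.push(person);

  // 2. send the data to the backend
  $http.post('/api/people', person)
    .then(function (response) {
      // merge server-generated fields (id, updation_date, ...)
      angular.extend(person, response.data);
    })
    .catch(function () {
      // 3. error handling: revert the optimistic change
      var i = $scope.people.indexOf(person);
      if (i !== -1) { $scope.people.splice(i, 1); }
      $scope.saveError = 'Saving failed, please try again.';
    });
};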
It depends on your error handling. Since saving on the server side might not be successful, you should take that into consideration.
My approach is to:
1. Update the Angular object immediately.
2. Send the AJAX request to the server.
3. Wait for the response. If an error happens during the server-side save, you should:
revert the values,
repeat the AJAX request, or
show information to the user.
I have a question in terms of code, NOT user experience. I have this JS:
$(document).on("click", "input:radio, input:checkbox", function() {
    getContent($(this).parent(), 0);
});
The above JS gets the content for radios and checkboxes, and it refreshes the page to show dependencies. For example, if I check 'yes' and there is a dependency on 'yes' that shows a text box, the above works!
What I want to know is whether there is a better, more friendly way to do the same thing, as this at times makes the pages slow. Especially if I do a lot of ticks/checks in one go, I miss a few because the parent refreshes!
If you have to hit your server in getContent(), then it will inevitably be slow.
However, you can save a lot if you send all the elements once instead of hitting the server each time a change is made.
If creating one super-large page is not an option, then you need to keep your getContent() function, but there is one possible improvement, in case you have not already implemented it: cache all the data you queried earlier.
So you could have an object (a map) whose keys identify the data you're interested in. If the key is defined, the data is already available and you return and use it directly from the cache. Otherwise, you have to hit the server.
One more thing to do, since you mention slowness as you 'tick' things back and forth, is to send no more than one request at a time to the server (with a timeout in case the server never replies). The process, sketched in code after this list, is:
Need data 'xyz'
Is that data already cached? If yes, skip steps 3 and 4.
Is a request already being worked on? If yes, push the request onto the queue and return.
Send a request to the server, which blocks any further requests until the answer for 'xyz' is received.
Receive the answer, cache the data in an object (a map), and release the request queue.
Make use of the data as required.
Check the request queue; if it is not empty, pop the next request and start processing from step 2.
The request process is expected to run on a timer because (1) it can time out and (2) it needs to run in the background (not preempting the GUI).
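Condensed into code, that cache-plus-queue idea might look like this; the /getContent endpoint and the key scheme are placeholders:

var cache = {};       // key -> data fetched earlier
var queue = [];       // requests waiting for the server
var inFlight = false; // at most one request at a time

function requestContent(key, onData) {
  if (cache.hasOwnProperty(key)) { // cache hit: no server round trip
    onData(cache[key]);
    return;
  }
  queue.push({ key: key, onData: onData }); // enqueue and return
  processQueue();
}

function processQueue() {
  if (inFlight || queue.length === 0) { return; }
  inFlight = true;
  var item = queue.shift();

  $.ajax({
    url: '/getContent',      // placeholder endpoint
    data: { key: item.key },
    timeout: 10000           // don't wait forever for a dead server
  }).done(function (data) {
    cache[item.key] = data;  // cache the answer
    item.onData(data);       // make use of it
  }).always(function () {
    inFlight = false;        // release the queue
    processQueue();          // pop the next pending request
  });
}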
I have two REST-ful resources on my server:
/someEntry/{id}
Response:
{
    someInfoAboutEntry: ...,
    entryTypeUrl: "/entryType/12345"
}
and
/entryType/{id}
Response:
{
    someInfoAboutEntryType: ...
}
The entryTypeUrl is used to fetch additional data about the type of this entry from a different URL. It will be bound to a "Detailed information" button near each entry. There can be many (let's say 100) entries, while there are only 5 types, so most entries point to the same entryTypeUrl.
I'm building a JavaScript client to access those resources. Should I cache entryType results in my JavaScript code, or should I rely on the browser to cache the data for me and dispatch XHR requests every time the user clicks the "Detailed information" button?
As far as I can see, both approaches should work just fine, and the second one (always dispatching requests) results in clearer code. Should I stick with it, or are there some points I'm not aware of?
Thanks in advance.
I would definitely let the browser manage the caching, rather than writing a custom caching layer yourself.
This way you have less code to write and maintain, and you allow the server to dictate (via its HTTP headers) whether the response should be cached or not. If you write your own caching code, you lose the refetching of stale data that the browser gives you for free.
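If you control the server, that mostly comes down to sending the right headers. A sketch, assuming an Express-style Node backend (the route, max-age and loadEntryType helper are illustrative):

// Let the browser cache each entry type for an hour; repeated clicks on
// "Detailed information" are then served from the browser cache.
app.get('/entryType/:id', function (req, res) {
  res.set('Cache-Control', 'public, max-age=3600');
  res.json(loadEntryType(req.params.id)); // loadEntryType is a placeholder
});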
Frequently when I work on AJAX applications, I'll pass around parameters via POST. Certain parts of the application might send the same number of parameters or the same set of data, but depending on a custom parameter I pass, it may do something completely different (such as delete instead of insert or update). When sending data, I'll usually do something like this:
$.post("somepage.php", {action: "complete", somedata: data, moredata: anotherdata}, function(data, status) {
if(status == "success") {
//do something
}
});
In another part of the application, I might have similar code but with the action property set to deny or something application-specific that instead triggers code to delete or move data on the server side.
I've heard about tools that let you modify POST requests and the data associated with them, but I've only used one such tool, Tamper Data for Firefox. I know the chances of someone modifying the data of a POST request are slim, and slimmer still that they change a key property to make the application do something different on the back end (such as changing action: "complete" to action: "deny"), but I'm sure it happens in day-to-day attacks on web applications. Can anyone suggest some good ways to avoid this kind of tampering? I've thought of a few that consist of checking whether the action is wrong for the event being triggered and validating that along with everything else, but I can see that becoming an extra 100 lines of code for each part of the application that needs these kinds of requests protected.
You need to authorize clients making the AJAX call just like you would with normal requests. As long as the user has the rights to do what he is trying to do, there should be no problem.
You should also pass along an authentication token that you store in the user's session, to protect against CSRF.
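A sketch of the token part, reusing the $.post call from the question; the meta tag and field name are illustrative, and many frameworks generate and verify such tokens for you:

// Client: the server renders a per-session token into the page, and every
// state-changing POST sends it back for comparison against the session copy.
var csrfToken = $('meta[name="csrf-token"]').attr('content');

$.post("somepage.php", {
    action: "complete",
    somedata: data,
    moredata: anotherdata,
    csrf_token: csrfToken // server rejects the request if this doesn't match
}, function (data, status) {
    if (status === "success") {
        // do something
    }
});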
Your server can't trust anything it receives from the client. You can start establishing trust using sessions and authentication (make sure the user is who she says she is), SSL/TLS (prevent tampering on the network) and XSRF protection (make sure the action was carried out from HTML that you generated), as well as care to prevent XSS injection (make sure you control the way your HTML is generated). All of these can be handled by a server-side framework of good quality, but there are still many ways to mess up. So you should probably take steps to make sure the user can't do anything overly destructive for either party.