Angular ngResource $save Method Clears $resource Object - javascript

Using Angular 1.5.5 here:
Is there any way to tell Angular to ignore the response body for particular requests (such as $save)? It drives me crazy that after I call $save, Angular updates the model with the object returned by the server, which was originally only meant to distinguish between different resolutions of the request. This results in an unwanted form clear. Interestingly enough, this behaviour persists even if I send a 400 or 500 HTTP status code.
In case you need more info, the relevant code is below.
Controller:
'use strict';
angular
    .module('app.operators')
    .controller('OperatorNewController', OperatorNewController);
OperatorNewController.$inject = ['$state', 'operatorsService', 'notify'];
function OperatorNewController($state, operatorsService, notify) {
    var vm = this;
    vm.done = done;
    activate();
    function activate() {
        vm.operator = new operatorsService();
    }
    function done(form) {
        if (form.$invalid) {
            // do stuff
            return false;
        }
        vm.operator.$save(function(response) {
            if (response.success && response._id) {
                $state.go('app.operators.details', {id: response._id}, { reload: true });
            } else if (response.inactive) {
                // do stuff
            } else {
                // do other stuff
            }
        }, function (error) {
            // do other stuff
        });
    }
}
Service:
'use strict';
angular
    .module('app.operators')
    .service('operatorsService', operatorsService);
operatorsService.$inject = ['$resource'];
function operatorsService($resource) {
    return $resource('/operators/:id/', {id: '@_id'}, {
        'update': { method: 'PUT' }
    });
}
Server request handler is also fairly simple:
.post('/', function (req, res) {
    if (!req.operator.active) {
        return res.status(500).json({ inactive: true, success: false });
    }
    // do stuff
    return res.json({ success: true });
});
Either way, I don't like the idea of having to send the entire object back from the server (particularly when it's a failed request), and even if I have to, I still need a way to send some extra data that will be ignored by Angular.
Your help is very much appreciated!

The $save method of the resource object empties and replaces the object with the results of the XHR POST. To avoid this, use the .save method of the operatorsService:
//vm.operator.$save(function(response) {
vm.newOperator = operatorsService.save(vm.operator, function(response) {
    if (response.success && response._id) {
        $state.go('app.operators.details', {id: response._id}, { reload: true });
    } else if (response.inactive) {
        // do stuff
    } else {
        // do other stuff
    }
}, function (error) {
    // do other stuff
});
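If you also want control over what the callback (and any resource instance) sees, rather than taking the server body verbatim, a custom action can add a transformResponse function, which $resource actions accept as part of their $http configuration. A rough sketch; the 'create' action name and the fields kept are only an illustration:
// in operatorsService: a custom 'create' action that keeps only the flow-control flags
return $resource('/operators/:id/', {id: '@_id'}, {
    'update': { method: 'PUT' },
    'create': {
        method: 'POST',
        transformResponse: function (data) {
            var body = angular.fromJson(data || '{}');
            // hand back only the fields the controller inspects; anything else is dropped
            return { success: body.success, inactive: body.inactive, _id: body._id };
        }
    }
});
Calling the class-level operatorsService.create(vm.operator, ...) then leaves vm.operator untouched, exactly like the .save call above.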
UPDATE
It results in unwanted form clear. Interestingly enough, this behaviour remains even if I send a 400 or 500 http status code.
This behavior is NOT VERIFIED.
I created a PLNKR to attempt to verify this behavior and found that the $save method does not replace the resource object if the server returns a status of 400 or 500. However it does empty and replace the resource object if the XHR status code is 200 (OK).
The DEMO on PLNKR
It drives me crazy that after I call $save, angular updates the model with the object returned by a server
It helps to understand how browsers handle traditional submits from forms.
The default operation for a submit button uses method=get. The browser appends the form inputs to the URL as query parameters and executes an HTTP GET with that URL. The browser then clears the window or frame and loads the results from the server.
The default operation for method=post is to serialize the inputs and place them in the body of an HTTP POST. The browser then clears the window or frame and loads the results from the server.
In AngularJS the form directive cancels the browser default operation and executes the Angular expression set by either the ng-submit or ng-click directive. All $resource instance methods, including $get and $save, empty and replace the resource object with the XHR results from the server if the XHR is successful. This is consistent with the way browsers traditionally handle forms.
In RESTful APIs, HTTP GET operations return the state of a server resource without changing it. HTTP POST operations add a new resource state to the server. APIs usually return the new resource state, with additional information such as ID, location, timestamps, etc. Some RESTful APIs return a redirect (status 302 or 303), in which case browsers transparently do an HTTP GET using the new location. (This helps to solve the double-submission problem.)
When designing RESTful APIs, it is important to understand how traditional browsers behave and the expectations of RESTful clients such as AngularJS ngResource.

Related

Refresh page after load on cache-first Service Worker

I'm currently considering adding service workers to a Web app I'm building.
This app is, essentially, a collection manager. You can CRUD items of various types and they are usually tightly linked together (e.g. A hasMany B hasMany C).
sw-toolbox offers a toolbox.fastest handler which goes to the cache and then to the network (in 99% of the cases, the cache will be faster), updating the cache in the background. What I'm wondering is how you can be notified that there's a new version of the page available. My intent is to show the cached version and then, if the network fetch got a newer version, suggest to the user to refresh the page in order to see the latest edits. I saw something in a YouTube video a while ago but the presenter gave no clue about how to deal with this.
Is that possible? Is there some event handler or promise that I could bind to the request so that I know when the newer version is retrieved? I would then post a message to the page to show a notification.
If not, I know I can use toolbox.networkFirst along with a reasonable timeout to make the pages available even on Lie-Fi, but it's not as good.
I just stumbled across the Mozilla Service Worker Cookbook, which includes more or less what I wanted: https://serviceworke.rs/strategy-cache-update-and-refresh.html
Here are the relevant parts (not my code: copied here for convenience).
Fetch methods for the worker
// On fetch, use cache but update the entry with the latest contents from the server.
self.addEventListener('fetch', function(evt) {
console.log('The service worker is serving the asset.');
// You can use respondWith() to answer ASAP…
evt.respondWith(fromCache(evt.request));
// ...and waitUntil() to prevent the worker from being killed until the cache is updated.
evt.waitUntil(
update(evt.request)
// Finally, send a message to the client to inform it that the resource is up to date.
.then(refresh)
);
});
// Open the cache where the assets were stored and search for the requested resource. Notice that if there is no match, the promise still resolves, but with undefined as its value.
function fromCache(request) {
return caches.open(CACHE).then(function (cache) {
return cache.match(request);
});
}
// Update consists of opening the cache, performing a network request and storing the new response data.
function update(request) {
return caches.open(CACHE).then(function (cache) {
return fetch(request).then(function (response) {
return cache.put(request, response.clone()).then(function () {
return response;
});
});
});
}
// Sends a message to the clients.
function refresh(response) {
return self.clients.matchAll().then(function (clients) {
clients.forEach(function (client) {
// Encode which resource has been updated. By including the ETag the client can check if the content has changed.
var message = {
type: 'refresh',
url: response.url,
// Notice not all servers return the ETag header. If this is not provided you should use other cache headers or rely on your own means to check if the content has changed.
eTag: response.headers.get('ETag')
};
// Tell the client about the update.
client.postMessage(JSON.stringify(message));
});
});
}
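Note that the excerpt assumes a CACHE constant naming the cache that was populated during install; a placeholder definition at the top of the worker would be:
var CACHE = 'cache-update-refresh-v1'; // placeholder name, must match the cache used in the install step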
Handling of the "resource was updated" message
navigator.serviceWorker.onmessage = function (evt) {
var message = JSON.parse(evt.data);
var isRefresh = message.type === 'refresh';
var isAsset = message.url.includes('asset');
var lastETag = localStorage.currentETag;
// The ETag header usually contains the hash of the resource, so it is a very effective way of checking for fresh content.
var isNew = lastETag !== message.eTag;
if (isRefresh && isAsset && isNew) {
// Skip the first time (when there is no ETag yet)
if (lastETag) {
// Inform the user about the update.
notice.hidden = false;
}
//For teaching purposes, although this information is in the offline cache and it could be retrieved from the service worker, keeping track of the header in the localStorage keeps the implementation simple.
localStorage.currentETag = message.eTag;
}
};
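For completeness, the page also has to register the worker before any of these messages can arrive. A minimal registration sketch; the '/sw.js' path is an assumption about where the worker script is served:
// Register the service worker (its scope defaults to the script's directory).
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js').then(function (registration) {
        console.log('Service worker registered with scope:', registration.scope);
    }).catch(function (err) {
        console.error('Service worker registration failed:', err);
    });
}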

Error handling over websockets: a design decision

I'm currently building a web app that has two clear use cases.
Traditional: the client requests data from the server.
The client requests a stream from the server, after which the server starts pushing data to the client.
Currently I'm implementing both 1 and 2 using JSON message passing over a websocket. However, this has proven hard, since I need to hand-code lots of error handling because the client is not waiting for the response; it just sends the message hoping it will get a reply sometime.
I'm using JS and React on the frontend and Clojure on the backend.
I have two questions regarding this.
Given the current design, what alternatives are there for error handling over a websocket?
Would it be smarter to split the two use cases, using REST for UC1 and websockets for UC2? Then I could use something like fetch on the frontend for the REST calls.
Update.
The current problem is that I don't know how to build an async send function over websockets that can match sent messages with their response messages.
Here's a scheme for doing request/response over socket.io. You could do this over a plain WebSocket, but you'd have to build a little more of the infrastructure yourself. The same library can be used on the client and the server:
function initRequestResponseSocket(socket, requestHandler) {
var cntr = 0;
var openResponses = {};
// send a request
socket.sendRequestResponse = function(data, fn) {
// put this data in a wrapper object that contains the request id
// save the callback function for this id
var id = cntr++;
openResponses[id] = fn;
socket.emit('requestMsg', {id: id, data: data});
}
// process a response message that comes back from a request
socket.on('responseMsg', function(wrapper) {
var id = wrapper.id, fn;
if (typeof id === "number" && typeof openResponses[id] === "function") {
fn = openResponses[id];
delete openResponses[id];
fn(null, wrapper.data); // err-first, to match the usage example below
}
});
// process a requestMsg
socket.on('requestMsg', function(wrapper) {
if (requestHandler && wrapper.id) {
requestHandler(wrapper.data, function(responseToSend) {
socket.emit('responseMsg', {id: wrapper.id, data: responseToSend});
});
}
});
}
This works by wrapping every message sent in a wrapper object that contains a unique id value. Then, when the other end sends its response, it includes that same id value. That id value can then be matched up with a particular callback response handler for that specific message. It works both ways, from client to server or server to client.
You use this by calling initRequestResponseSocket(socket, requestHandler) once on a socket.io socket connection on each end. If you wish to receive requests, then you pass a requestHandler function which gets called each time there is a request. If you are only sending requests and receiving responses, then you don't have to pass in a requestHandler on that end of the connection.
To send a message and wait for a response, you do this:
socket.sendRequestResponse(data, function(err, response) {
if (!err) {
// response is here
}
});
If you're receiving requests and sending back responses, then you do this:
initRequestResponseSocket(socket, function(data, respondCallback) {
// process the data here
// send response
respondCallback(yourResponseData); // the response payload to send back
});
As for error handling, you can monitor for a loss of connection and you could build a timeout into this code so that if a response doesn't arrive in a certain amount of time, then you'd get an error back.
Here's an expanded version of the above code that implements a timeout for a response that does not come within some time period:
function initRequestResponseSocket(socket, requestHandler, timeout) {
var cntr = 0;
var openResponses = {};
// send a request
socket.sendRequestResponse = function(data, fn) {
// put this data in a wrapper object that contains the request id
// save the callback function for this id
var id = cntr++;
openResponses[id] = {fn: fn};
socket.emit('requestMsg', {id: id, data: data});
if (timeout) {
openResponses[id].timer = setTimeout(function() {
delete openResponses[id];
if (fn) {
fn("timeout");
}
}, timeout);
}
}
// process a response message that comes back from a request
socket.on('responseMsg', function(wrapper) {
var id = wrapper.id, requestInfo;
if (typeof id === "number" && typeof openResponses[id] === "object") {
requestInfo = openResponses[id];
delete openResponses[id];
if (requestInfo) {
if (requestInfo.timer) {
clearTimeout(requestInfo.timer);
}
if (requestInfo.fn) {
requestInfo.fn(null, wrapper.data);
}
}
}
});
// process a requestMsg
socket.on('requestMsg', function(wrapper) {
if (requestHandler && wrapper.id) {
requestHandler(wrapper.data, function(responseToSend) {
socket.emit('responseMsg', {id: wrapper.id, data: responseToSend});
});
}
});
}
There are a couple of interesting things in your question and your design; I prefer to ignore the implementation details and look at the high-level architecture.
You state that you are looking at a client that requests data and a server that responds with a stream of data. Two things to note here:
HTTP 1.1 has options to send streaming responses (chunked transfer encoding). If your use case is only the sending of streaming responses, this might be a better fit for you. This does not hold when you e.g. want to push messages to the client that are not responses to some sort of request (sometimes referred to as server-sent events).
Websockets, contrary to HTTP, do not natively implement some sort of request-response cycle. You can use the protocol as such by implementing your own mechanism, something that e.g. the subprotocol WAMP is doing.
As you have found out, implementing your own mechanism comes with its pitfalls; that is where HTTP has the clear advantage. Given the requirements stated in your question I would opt for the HTTP streaming method instead of implementing your own request/response mechanism.
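To illustrate the streaming alternative, here is a minimal sketch of consuming a chunked HTTP response in the browser with the Fetch API's stream reader; the '/stream' endpoint and the newline-delimited JSON format are assumptions:
// Read a chunked/streaming HTTP response incrementally.
fetch('/stream').then(function (response) {
    var reader = response.body.getReader();
    var decoder = new TextDecoder();
    var buffered = '';
    function pump() {
        return reader.read().then(function (result) {
            if (result.done) return; // stream finished
            buffered += decoder.decode(result.value, { stream: true });
            var lines = buffered.split('\n');
            buffered = lines.pop(); // keep the trailing partial line
            lines.forEach(function (line) {
                if (line) console.log('message:', JSON.parse(line));
            });
            return pump(); // keep reading
        });
    }
    return pump();
});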

Keep reading from Angular JS HTTP call

I am working on a web system, but am fairly new to AngularJS.
The feature I am working on currently needs to do a POST call using Angular. That's not really difficult to do in Angular. But I want to keep reading from the socket.
The call I am doing takes a very long time (as intended), but I want the status of the call to be visible in my Angular app. So I'd like to send data back with the status, and have it visible live in my app.
What's the best approach for this? I found the WebSocket demonstrations for Angular, but in that case I'd have to create my own HTTP implementation on top of WebSockets to be able to send POST requests.....
HTTP requests using the $http service take place in the background, so you can show anything you want while the request is actually being made. Getting a sense of the status of the request depends on how you'd like to measure that status.
Progress events on the $http object aren't slated to be added to Angular until 1.6, so if the delay is related to a large file upload you might need to create a service that wraps a raw XMLHttpRequest object instead of using $http. Here's an example from the MDN docs:
var oReq = new XMLHttpRequest();
oReq.addEventListener("progress", updateProgress);
oReq.addEventListener("load", transferComplete);
oReq.addEventListener("error", transferFailed);
oReq.addEventListener("abort", transferCanceled);
// Listeners must be attached before open()/send(); the URL here is a placeholder,
// and transferComplete/transferFailed/transferCanceled are handlers defined like updateProgress.
oReq.open("GET", "/some/large/resource");
oReq.send();
// progress on transfers from the server to the client (downloads)
function updateProgress (oEvent) {
    if (oEvent.lengthComputable) {
        var percentComplete = oEvent.loaded / oEvent.total;
        // ...
    } else {
        // Unable to compute progress information since the total size is unknown
    }
}
If the delay is server-side (waiting for a long-running database query, for example), you might want to just fake a progress meter based on the average runtime of the query, or show an indeterminate bar.
It sounds like you somehow have a way to monitor the progress on the serverside. In that case, you can make another request while the first one is in progress to get the progress information. You will probably need to send some state (like a query ID or request ID) to correlate the two requests.
XHRCallToTheRestService()
    .then(function (result) {
        // Hide the status bar, the data is back
    });
function pollStatus() {
    readMyStatus()
        .then(function (status) {
            if (status !== "finished") {
                // not finished yet: poll again after a short delay
                setTimeout(pollStatus, 1000);
            }
        });
}
pollStatus();
You can use an interceptor. Whenever you make an HTTP call, it will intercept the request before it is sent and the response after it returns from the server.
$httpProvider.interceptors.push(['$q', '$location', 'localStorageService', function ($q, $location, localStorageService) {
return {
request: function (config) {
// You can do some stuff here
return config;
},
response: function (result) {
return result;
},
responseError: function (rejection) {
console.log('Failed with', rejection.status, 'status');
return $q.reject(rejection);
}
}
}]);
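Note that $httpProvider is only available at configuration time, so the push above has to happen inside a module config block. A minimal sketch; the module name is a placeholder:
// Interceptors are registered at config time, before the app runs.
angular.module('app', []).config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push(['$q', function ($q) {
        return {
            responseError: function (rejection) {
                console.log('Failed with', rejection.status, 'status');
                return $q.reject(rejection);
            }
        };
    }]);
}]);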
Or you can use https://github.com/McNull/angular-block-ui
You could create a custom service using $http to do a POST request like this:
site.factory('customService', ['$http', function ($http) {
    // shared default handlers that callers can pass to .then()
    var defaultHandlers = {
        success: function (response) {
            return response;
        },
        error: function (error) {
            return error;
        }
    };
    return {
        defaultHandlers: defaultHandlers,
        thePostRequest: function (dataRequest) {
            // $http.post returns a promise; chain .then(success, error) on the result
            return $http.post('http://74.125.224.72/', dataRequest);
        }
    };
}]);
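Usage from a controller could then look something like this (the controller name and request payload are placeholders):
site.controller('SomeController', ['customService', function (customService) {
    customService.thePostRequest({ name: 'example' }).then(
        function (response) {
            // success: response.data holds the server payload
        },
        function (error) {
            // failure: inspect error.status / error.data
        }
    );
}]);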
For web sockets, a friend pointed me to socket.io, and I have used it successfully with Angular in several SPAs.
Good luck.

A design pattern for async requests to handle success, failure, retry ? (javascript)

I'm writing a mobile app with Appcelerator Titanium that makes a lot of different xhr requests. This is not really an Appcelerator Titanium specific question. But if you do write some code, I hope it's javascript.
The app needs to authenticate itself, the user must be logged in for some interactions, etc.
I've come to a point where any request might get any kind of response such as:
not authenticated
not logged in
bad params
successful
...
The requests are wrapped in different model methods or helpers.
The thing is, I'm not familiar with this kind of app. I was wondering what are the best practices.
Some real questions for example would be:
If the app is not authenticated (token expired, first launch), should the app try to authenticate itself and then resend the request that was denied? (transparent to the user)
Should I send an authentication request each time the app launches and then "forget" about it?
The problem I'm facing is that the code quickly becomes big if I try to handle this for each request: full of nested callbacks, retry conditions, various event listeners to manage, etc. It just does not feel very "nice", and it's not DRY at all. What I really need is, for any request, to check what went wrong, try to fix it (authenticate if needed, log in automatically if possible or show the login UI, etc.), then if that works retry the original request a couple of times, and abort if needed.
I've been looking at the promise pattern but only know theory and don't know if it could be what I need.
So I welcome any advice regarding this particular problem. I wonder how apps like "Facebook" handle this.
Thank you for your help
This question is not easily answered, but let me try to give you some ideas:
The most important thing, before coding anything in your app, is the API itself. It has to be reliable and adhere to standards. I will not go into too much detail here, but a well-written RESTful API can reduce the complexity of your HTTP client significantly. It has to respond with standard HTTP status codes and respond to methods like POST, GET, PUT, DELETE...
A pretty good read is The REST API Design Handbook by George Reese.
My approach to httpClients with Titanium is a single module, which is loaded via require() wherever needed. I stick to one single client at a time, as I had massive problems with multiple parallel calls. Whenever a call is made, the client checks if there is already a call in progress and sends it to a queue if necessary.
Let me show you an example. I have left out lots of stuff for the sake of brevity:
// lib/customClient.js
var xhrRequest; // This will be our HTTPClient
var callQueue = []; // This will be our queue
// Register the request
// params are:
// method (e.g. 'GET')
// url (e.g. 'http://test.com/api/v1/user/1')
// done (callback function)
function registerRequest(params) {
if(!xhrRequest) {
sendRequest(params);
} else {
queueRequest(params);
}
}
// This simply sends the request
// to the callQueue
function queueRequest(params) {
callQueue.push(params);
}
// Send the request with the params from register
// Please note that normally I would not hardcode error messages,
// I just do it here so it is easier to read
function sendRequest(params) {
// Set callback if available and valid
var callback = params.done && typeof(params.done) === "function" ? params.done : null;
// Set method
var method = params.method || 'GET';
// Create the HTTP Client
xhrRequest = Ti.Network.createHTTPClient({
// Success
onload: function() {
// You can check for status codes in detail here
// For brevity, I will just check if it is valid
if (this.status >= 200 && this.status < 300) {
if(this.responseText) {
// You might want to check if it can be parsed as JSON here
try {
var jsonData = JSON.parse(this.responseText);
if(callback) callback({ success: true, response: jsonData });
} catch(e) {
if(callback) callback({ success: false, errormessage: 'Could not parse JSON data' });
}
processQueue();
} else {
if(callback) callback({ success: false, errormessage: 'No valid response received' });
processQueue();
}
} else {
if(callback) callback({ success: false, errormessage: 'Call response is success but status is ' + this.status });
processQueue();
}
},
// Error
onerror: function(e) {
if(this.responseText) {
try {
var jsonData = JSON.parse(this.responseText);
if(callback) callback({ success: false, response: jsonData });
} catch(e) {};
}
processQueue();
},
});
// Prepare and send request
// A lot more can (and should) be configured here, check documentation!
xhrRequest.setTimeout(10000);
xhrRequest.open(method, params.url);
xhrRequest.send();
}
// Checks if there is anything else in the queue
// and sends it
function processQueue() {
xhrRequest = null;
var nextInQueue = callQueue.shift();
if(nextInQueue) sendRequest(nextInQueue);
}
// Our public API
var publicAPI = {
sendRequest: function(params) {
registerRequest(params);
}
};
module.exports = publicAPI;
I can then send a call from any other controller/view
var customClient = require('lib/customClient'); // omit 'lib' if you use alloy
// Send the request
customClient.sendRequest({
method : 'GET',
url : 'http://test.com/api/v1/user/1',
done : function(response) {
Ti.API.debug(JSON.stringify(response));
}
});
Note that this is not complete: it does not check for connectivity, has no real error handling, etc., but it might help you get an idea.
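As a small illustration of the missing connectivity check, registerRequest could bail out early when the device is offline. Ti.Network.online is the standard Titanium flag for this; the error shape below simply mirrors the callbacks above:
// Sketch: guard against sending requests while offline (not part of the original module)
function registerRequest(params) {
    if (!Ti.Network.online) {
        if (params.done && typeof(params.done) === "function") {
            params.done({ success: false, errormessage: 'No network connection' });
        }
        return;
    }
    if (!xhrRequest) {
        sendRequest(params);
    } else {
        queueRequest(params);
    }
}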
I think there is loads of stuff to talk about here, but I will stop here for now...
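As a follow-up on the promise pattern mentioned in the question: one way to keep the "fix and retry" logic DRY is to centralise it in a single promise-based helper. This is only an outline; authenticate() and sendRequest() are hypothetical promise-returning wrappers around a client like the one above:
// Hypothetical sketch: retry a request after re-authenticating, up to maxRetries times.
function requestWithRetry(params, maxRetries) {
    maxRetries = maxRetries === undefined ? 2 : maxRetries;
    return sendRequest(params).catch(function (err) {
        if (err && err.notAuthenticated && maxRetries > 0) {
            // try to fix the problem, then retry the original request
            return authenticate().then(function () {
                return requestWithRetry(params, maxRetries - 1);
            });
        }
        // give up: propagate the error to the caller
        throw err;
    });
}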

Send user details (session token) within every AJAX request (Sencha Touch 2)

I am building a Sencha Touch 2 application with user-specific datasets.
Architecture of the App:
Sencha Touch App <=====> Java Server backend with REST Services
( many AJAX requests =) )
What I currently have is:
Login the user with username/password
The app gets initialized and the login form comes into play. After submitting the form as an AJAX request, the server backend checks the user data and calls the client callback function.
And what I want to do is:
The callback function should
create a cookie with the sessiontoken or
store the sessiontoken within the localstorage (http://docs.sencha.com/touch/2-0/#!/api/Ext.data.proxy.LocalStorage) or
store the sessiontoken within a JS variable
Okay, that shouldn't be a problem.
But how can I achieve the following:
Most of the data is specific to one user and should be returned by the REST service when needed (by clicking on the navigation, ...). How can I send the sessiontoken (see above) with every AJAX request, so the server can provide the suitable datasets (assuming the token is valid)?
Send cookies within AJAX requests
I have already read that cookies get automatically added to the request if the URL is on the same domain, right? The Java server is on the same domain (localhost:8080), but the cookies aren't available, except for requests to URLs like 'app.json'. I thought that cross-domain requests are really domain-specific?
Send parameters within AJAX requests
Because the cookies aren't available, I thought about the possibility of 'manually' adding parameters to the AJAX requests. The app will contain many AJAX requests, and that's why I don't want to add the token manually. I tried to override the request function of Ext.Ajax but I failed ;-( :
(function() {
var originalRequest = Ext.data.Connection.prototype.request;
Ext.override(Ext.data.Connection, {
request : function(options) {
alert("Do sth... like add params");
return originalRequest.apply(this, options);
}
});
})();
ERROR:
Uncaught Error: [ERROR][Ext.data.Connection#request] No URL specified
I also tried to add a listener
Ext.Ajax.add({
listeners : {
beforerequest : function( conn, options, eOpts ){
alert("Do sth... like add params");
}
}
});
ERROR:
Uncaught TypeError: Object [object Object] has no method 'add'
Any idea how I can add the token?
Or a better way of handling this case?
Thanks!
Finally, I successfully used:
function addAjaxInterceptor(context)
{
Ext.Ajax.on('beforerequest', function(conn, options, eOptions)
{
// add the param to options...
}, context);
}
Executed from the app (=> addAjaxInterceptor(this)).
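For illustration, the listener body could merge the token into the request parameters. The sessionToken variable and the parameter name below are assumptions; Ext.apply is just used to merge the objects:
function addAjaxInterceptor(context)
{
    Ext.Ajax.on('beforerequest', function(conn, options, eOptions)
    {
        // merge the stored token into every request's params (names are placeholders)
        options.params = Ext.apply(options.params || {}, {
            sessionToken: sessionToken
        });
    }, context);
}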
But the following solution is more suitable for my situation, I think:
Ext.Ajax._defaultHeaders = {
// params as json
};
(Because Ext.Ajax is a singleton object and I don't change the params for every request.)
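A concrete assignment for that header-based approach could look like this; the header name and token variable are placeholders, and the server has to read the same header:
// Applied by Ext.Ajax (a singleton) to every subsequent request
Ext.Ajax._defaultHeaders = {
    'X-Session-Token': sessionToken
};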
