Right now I send my requests via ajax to the backend server, which does some operations, and returns a response:
function getData() {
  new Ajax().getResponse()
    .then(function (response) {
      // handle response
    })
    .catch(function (error) {
      // handle error
    });
}
The thing is that each time a user refreshes the website, every request is sent again. I've been thinking about caching them inside the local storage:
function getData() {
  if (Cache.get('getResponse')) {
    let response = Cache.get('getResponse');
    // handle response
    return true;
  }
  new Ajax().getResponse()
    .then(function (response) {
      // handle response
    })
    .catch(function (error) {
      // handle error
    });
}
This way, if a user has already made a request and the response is cached in localStorage, I don't have to fetch the data from the server again. If a user changes the values behind getResponse, I would just clear the cache.
Is this a good approach? If it is, is there a better way to do this? Also, should I cache backend responses the same way? What's the difference between frontend and backend caching?
Is this a good approach? It depends on what kind of data you are storing.
Be aware that everything stored on the frontend can be changed by the user, so caching data there is a potential security vulnerability.
That is the main difference between backend and frontend caching: a backend cache cannot be edited by the user.
If you decide to do frontend caching, here is how to store a response in localStorage:
localStorage.setItem('getResponse', JSON.stringify(response));
And for retrieving the stored data from local storage (note the matching JSON.parse):
var retrievedObject = JSON.parse(localStorage.getItem('getResponse'));
NOTE:
I assume you are storing an object, not a string or a number. If you are storing a string, integer, float, etc., just drop the JSON.stringify / JSON.parse calls.
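Putting it together, a minimal sketch of the getData() function from the question backed by localStorage might look like this (handleResponse and invalidateCache are hypothetical helpers of mine, not part of the question):
function getData() {
  // serve from the localStorage cache when possible
  var cached = localStorage.getItem('getResponse');
  if (cached) {
    handleResponse(JSON.parse(cached));
    return;
  }
  new Ajax().getResponse()
    .then(function (response) {
      // cache the response for subsequent page loads
      localStorage.setItem('getResponse', JSON.stringify(response));
      handleResponse(response);
    })
    .catch(function (error) {
      // handle error
    });
}

// call this whenever the user changes the data behind getResponse
function invalidateCache() {
  localStorage.removeItem('getResponse');
}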
The best practice is to use the Cache API, "a system for storing and retrieving network requests and their corresponding responses".
The Cache API is available in all modern browsers. It is exposed via the global caches property, so you can test for the presence of the API with a simple feature detection:
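For example, a minimal sketch (the cache name 'api-cache' and the URL are placeholders of mine):
if ('caches' in self) {
  // the Cache API is available
  caches.open('api-cache').then(function (cache) {
    // fetch '/api/data' and store the request/response pair in the cache
    return cache.add('/api/data').then(function () {
      // read it back later; match() resolves with undefined when nothing is cached
      return cache.match('/api/data');
    });
  }).then(function (response) {
    if (response) {
      // use the cached response
    }
  });
}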
I know it is possible to process each request with a JS script right inside the NGINX server.
I know there is the Lua Nginx module and the Lua Redis driver, and it's possible to write a script in Lua and use Redis right from the NGINX server.
However, I want to use the standard functionality of NGINX, and I prefer to code in JS. Is it possible to use some session storage with njs, and how would I do it? In particular, I would like to use Redis as the session storage.
If one wants to avoid compiling and installing third-party modules for Nginx, I suppose the best way to build session storage with njs and Redis is to use the built-in ngx_http_upstream_module module and set up something like this:
http {
    [...]
    upstream redis {
        server unix:/var/run/redis/nginx.sock;
    }
    [...]
    js_path conf.d/js/;
    js_import redismiddleware.js;
    [...]
    server {
        [...]
        location /redisadapter {
            internal;
            try_files @dummy @redis;
        }
        location /request-we-are-tracking-no-1/ {
            js_content redismiddleware.processRequestConditional;
        }
        [...]
        location /request-we-are-tracking-no-2/ {
            js_content redismiddleware.processRequestUnconditional;
        }
        [...]
    }
}
and the corresponding script:
var queryString = require('querystring')

function processRequestConditional(request) {
    request.subrequest('/redisadapter', {
        method : 'POST',
        body   : queryString.stringify({
            /* data to transfer to Redis */
        })
    }).then(function (response) {
        var reply = {}
        /**
         * parsing and checking the feedback from Redis,
         * saving some data from the feedback to the reply
         * object, doing anything else as planned,
         * running any additional routines et cetera
         */
        if (/* Redis reply is OK */) {
            return reply;
        } else {
            throw new Error('Because we don\'t like you!')
        }
    }).then(function (data) {
        /**
         * Making one more subrequest to obtain the content
         * the client is waiting for
         */
        request.subrequest('/secret-url-with-content').then(
            (response) => request.return(response.httpStatus, response.body)
        )
    }).catch(function (error) {
        /**
         * Replying to the client with "There will be
         * no response"
         */
        request.return(403, error.message)
    })
}
function processRequestUnconditional(request) {
    request.subrequest('/redisadapter', {
        method : 'POST',
        body   : queryString.stringify({
            /* data to transfer to Redis */
        })
    }).then(function (response) {
        /**
         * parsing and checking the feedback from Redis,
         * doing some things, running some or other
         * additional routines depending on the reply
         */
    })
    request.subrequest('/secret-url-with-content').then(
        (response) => request.return(response.httpStatus, response.body)
    )
}
export default { processRequestConditional, processRequestUnconditional }
Short summary
Redis is listening and replying on the socket /var/run/redis/nginx.sock
The virtual internal location /redisadapter receives requests from the njs script, forwards them to Redis, and returns the replies to the njs method that started the request sequence
To exchange data with Redis for a given location, we take over Nginx's standard flow and serve these locations with custom njs methods
Thus, in addition to the Redis-related exchange routines, the code in those methods has to implement the complete subroutine of retrieving the requested content and delivering it back to the client, since we took that job away from Nginx
This is achieved by sets of local server-to-server subrequests, which are transparent to clients; the njs method completes the set and itself delivers the requested content back to the client
The example above contains two methods, processRequestConditional and processRequestUnconditional, to show the two most common ways to structure this kind of task
The first one, processRequestConditional, demonstrates a chain of subrequests: the content requested by the client is not fetched until the njs method has finished its primary task of pushing the next piece of data into the Redis session storage. Moreover, if the script is not satisfied with the Redis feedback, the content request is skipped altogether and the client gets a refusal message instead
The second method, processRequestUnconditional, transfers data to the Redis storage in the same way, but this time the fate of the client's request does not depend on the Redis feedback, so the secondary request for content is issued at the same time as the primary one and runs in parallel while the script continues its exchange with the session storage
Of course, my brief explanation leaves a lot of details behind the scenes, but I hope the basic concept is now clear
Feel free to ask additional questions
I'm currently considering adding service workers to a Web app I'm building.
This app is, essentially, a collection manager. You can CRUD items of various types and they are usually tightly linked together (e.g. A hasMany B hasMany C).
sw-toolbox offers a toolbox.fastest handler which goes to the cache and then to the network (in 99% of cases the cache will be faster), updating the cache in the background. What I'm wondering is how you can be notified that there's a new version of the page available. My intent is to show the cached version and then, if the network fetch returned a newer version, to suggest that the user refresh the page in order to see the latest edits. I saw something about this in a YouTube video a while ago, but the presenter gave no clue about how to deal with it.
Is that possible? Is there some event handler or promise that I could bind to the request so that I know when the newer version is retrieved? I would then post a message to the page to show a notification.
If not, I know I can use toolbox.networkFirst along with a reasonable timeout to make the pages available even on Lie-Fi, but it's not as good.
I just stumbled across the Mozilla Service Worker Cookbook, which includes more or less what I wanted: https://serviceworke.rs/strategy-cache-update-and-refresh.html
Here are the relevant parts (not my code: copied here for convenience).
Fetch methods for the worker
// Name of the cache (defined at the top of the worker in the original recipe).
var CACHE = 'cache-update-and-refresh';

// On fetch, use cache but update the entry with the latest contents from the server.
self.addEventListener('fetch', function (evt) {
  console.log('The service worker is serving the asset.');
  // You can use respondWith() to answer ASAP…
  evt.respondWith(fromCache(evt.request));
  // ...and waitUntil() to prevent the worker from being killed until the cache is updated.
  evt.waitUntil(
    update(evt.request)
      // Finally, send a message to the client to inform it that the resource is up to date.
      .then(refresh)
  );
});
// Open the cache where the assets were stored and search for the requested resource.
// Notice that in case of no matching, the promise still resolves, but with undefined as value.
function fromCache(request) {
  return caches.open(CACHE).then(function (cache) {
    return cache.match(request);
  });
}

// Update consists in opening the cache, performing a network request and storing the new response data.
function update(request) {
  return caches.open(CACHE).then(function (cache) {
    return fetch(request).then(function (response) {
      return cache.put(request, response.clone()).then(function () {
        return response;
      });
    });
  });
}
// Sends a message to the clients.
function refresh(response) {
  return self.clients.matchAll().then(function (clients) {
    clients.forEach(function (client) {
      // Encode which resource has been updated. By including the ETag the client can check if the content has changed.
      var message = {
        type: 'refresh',
        url: response.url,
        // Notice not all servers return the ETag header. If it is not provided you should use other cache headers or rely on your own means to check if the content has changed.
        eTag: response.headers.get('ETag')
      };
      // Tell the client about the update.
      client.postMessage(JSON.stringify(message));
    });
  });
}
Handling of the "resource was updated" message
navigator.serviceWorker.onmessage = function (evt) {
  var message = JSON.parse(evt.data);
  var isRefresh = message.type === 'refresh';
  var isAsset = message.url.includes('asset');
  var lastETag = localStorage.currentETag;
  // The ETag header usually contains the hash of the resource, so it is a very effective way to check for fresh content.
  var isNew = lastETag !== message.eTag;
  if (isRefresh && isAsset && isNew) {
    // Skip the first time (when there is no ETag yet).
    if (lastETag) {
      // Inform the user about the update.
      notice.hidden = false;
    }
    // For teaching purposes: although this information is in the offline cache and could be retrieved from the service worker,
    // keeping track of the header in localStorage keeps the implementation simple.
    localStorage.currentETag = message.eTag;
  }
};
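For completeness: the notice referenced above is just an element in the page. The recipe does not show it, so the id and the reload behaviour below are my own assumptions:
// Assumed markup somewhere in the page:
// <div id="notice" hidden>A newer version is available. <button id="reload">Refresh</button></div>
var notice = document.getElementById('notice');

document.getElementById('reload').addEventListener('click', function () {
  // The service worker has already put the fresh response in the cache,
  // so a plain reload will serve the updated content.
  window.location.reload();
});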
I want to completely dissociate my client app from the Parse server, to ease a future switch to another BaaS or a custom backend. To that end, all client requests will point to a Node.js server, which will make the requests to Parse on behalf of the user.
Client <--> Node.js Server <--> Parse Server
As such, I need the Node.js server to be able to switch between users so I can keep the context of their authentication.
I know how to authenticate and then keep the user's sessionToken, and I've seen during my research that the "accepted" solution to this problem is to call Parse.User.disableUnsafeCurrentUser, then use Parse.User.become() to switch the current user to the one making the request.
But that feels hackish, and I'm pretty sure it will, sooner or later, lead to a race condition where the current user is switched before the request is made to Parse.
Another solution I found is to not care about Parse.User and use the masterKey to save everything from the server, but that would make the server responsible for the ACLs.
Is there a way to make requests as different users other than those two?
Any request to the backend (query.find(), object.save(), etc) takes an optional options parameter as the final argument. This lets you specify extra permissions levels, such as forcing the master key or using a specific session token.
If you have the session token, your server code can make a request on behalf of that user, preserving ACL permissions.
Let's assume you have a table of Item objects, where we rely on ACLs to ensure that a user can only retrieve his own Items. The following code would use an explicit session token and only return the Items the user can see:
// fetch items visible to the user associated with `token`
function fetchItems(token) {
  return new Parse.Query('Item')
    .find({ sessionToken: token })
    .then((results) => {
      // do something with the items
    });
}
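The same options object works for writes and for the master key as well; for example (a sketch, assuming item is a Parse.Object the user may modify and token is their session token):
// save on behalf of the user, so ACLs are still enforced
item.save({ status: 'done' }, { sessionToken: token });

// or bypass ACLs entirely with the master key (server-side code only)
new Parse.Query('Item').find({ useMasterKey: true });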
become() was really designed for the Parse Cloud Code environment, where each request lives in a sandbox, and you can rely on a global current user for each request. It doesn't really make sense in a Node.js app, and we'll probably deprecate it.
I recently wrote a NodeJS application and had the same problem. I found that the combination of Parse.User.disableUnsafeCurrentUser and Parse.User.become() was not only hackish, but also caused several other problems I wasn't able to anticipate.
So here's what I did: I used Parse.Cloud.useMasterKey(); and then loaded the current user by session ID as if it were a regular user object. It looked something like this:
module.exports = function (req, res, next) {
  var Parse = req.app.locals.parse, query;
  res.locals.parse = Parse;
  if (req.session.userid === undefined) {
    res.locals.user = undefined;
    return next();
  }
  Parse.Cloud.useMasterKey();
  query = new Parse.Query(Parse.User);
  query.equalTo("objectId", req.session.userid);
  query.first().then(function (result) {
    res.locals.user = result;
    return next();
  }, function (err) {
    res.locals.user = undefined;
    console.error("error recovering user " + req.session.userid);
    return next();
  });
};
This code can obviously be optimized, but you can see the general idea. Upside: it works! Downside: no more use of Parse.User.current(), and you need to take special care in the backend that no situation arises where someone overwrites data without permission.
Here is the problem I am working on.
The client needs to poll the node server for some data using an API. For the node server to respond, it needs a data set (to be read from DB). I want to avoid reading the database in every poll. How do I maintain the data set across multiple polls?
And if this is possible, would it have any impact on server performance?
Reducing DB queries is always a complicated question. My solution is to make the DB query a promise and cache it until it expires.
For example:
// "db" is assumed to be an existing database client (e.g. a mysql connection)
var cache = {};

var cachedQuery = function (id) {
  // reuse the in-flight promise if one already exists for this id
  if (id in cache) return cache[id];
  return cache[id] = new Promise(function (resolve, reject) {
    db.query('select * from test where id=?', id, function (err, rows) {
      // drop the entry once the query completes
      delete cache[id];
      if (err) reject(err);
      else resolve(rows);
    });
  });
};
Assume you have 100+ queries arriving at the same time that share the same request id; those queries will share the same DB request. The request is made when the first query arrives, and all of them resolve with the same result when it completes.
For more generic usage, you can use an lru-cache to store the created promises and give them an expiration time, as sketched below.
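A rough sketch with lru-cache (this assumes the classic max / maxAge options of older lru-cache releases, and the same db client as above; newer versions spell the expiration option ttl):
var LRU = require('lru-cache');

// keep at most 500 cached promises, each for at most one minute
var cache = new LRU({ max: 500, maxAge: 60 * 1000 });

function cachedQuery(id) {
  var cached = cache.get(id);
  if (cached) return cached;

  var promise = new Promise(function (resolve, reject) {
    db.query('select * from test where id=?', id, function (err, rows) {
      if (err) reject(err);
      else resolve(rows);
    });
  });

  cache.set(id, promise);
  return promise;
}
Unlike the first snippet, the resolved promise stays cached until it expires, so repeated polls within the TTL never touch the database.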
You can use an in-memory cache. The idea is that before making a trip to the database, you check whether the requested value exists in the cache. If it does, serve it. If not, fetch it from the database, save it to the cache, and return it.
Serving from memory is the fastest you can get. There are existing solutions for node out there, like node-cache.
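A minimal sketch with node-cache (the 60-second TTL and the fetchFromDb helper are placeholders of mine):
var NodeCache = require('node-cache');

// entries expire after 60 seconds by default
var cache = new NodeCache({ stdTTL: 60 });

function getData(key) {
  var cached = cache.get(key);
  if (cached !== undefined) {
    // cache hit: serve straight from memory
    return Promise.resolve(cached);
  }
  // cache miss: hit the database, then remember the result
  return fetchFromDb(key).then(function (rows) {
    cache.set(key, rows);
    return rows;
  });
}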
I have an AngularJS application and I want to cache the REST service responses. I found some libraries like angular-cached-resource which can do this by storing the data into the local storage of the web browser.
But sometimes I make POST / PUT / DELETE REST calls, after which some of the previously cached service responses become stale and need to be fetched again. It seems possible to simply delete those cached responses, so that the call is sent to the server again next time.
But what if the server sends me values like expires or etag in the HTTP headers? Do I have to read the headers and react myself, or is there an AngularJS library which can handle this as well?
So whether I should hit the server or read from the local storage cache depends both on the HTTP cache headers and on whether any PUT / POST / DELETE call returned a response saying, for example, that a "reload of every user settings element" is needed. I would have to take that response and build a map which tells me that, for example, REST services A, C and F (the user-settings-related ones) need to hit the server again the next time they are executed, or when the cache expires according to the HTTP headers.
Is this possible with an AngularJS library, or do you have any other recommendations? I think this is similar to the Observer or PubSub pattern, isn't it?
One more thing: is it also possible to have something like PubSub without using a cache / local storage (so also no HTTP header cache controls)? In that case I cannot simply call the REST service, because it would hit the server, which I don't want in some circumstances (for example when a previous REST call already returned the event "reload of every user settings element").
You can try something like this.
app.factory('requestService', ['$http', function ($http) {
  // simple in-memory cache, keyed by request URL
  var cache = {};

  var service = {
    getCall: function (requestUrl, successCallback, failureCallback, getFromCache) {
      if (!getFromCache) {
        $http.get(requestUrl)
          .success(function (response) {
            successCallback(response);
            cache[requestUrl] = response;
          })
          .error(function (response) {
            failureCallback(response);
          });
      } else {
        successCallback(cache[requestUrl]);
      }
    },
    postCall: function (requestUrl, paramToPass, successCallback, failureCallback, getFromCache) {
      if (!getFromCache) {
        $http.post(requestUrl, paramToPass)
          .success(function (response) {
            successCallback(response);
            cache[requestUrl] = response;
          })
          .error(function (response) {
            failureCallback(response);
          });
      } else {
        successCallback(cache[requestUrl]);
      }
    }
  };

  return service;
}]);
This is just simple code I wrote to illustrate the concept. I haven't tested it, but it's all yours.
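For what it's worth, usage from a controller might look something like this (the controller name and endpoint are made up for illustration):
app.controller('UserSettingsCtrl', ['$scope', 'requestService', function ($scope, requestService) {
  // first call: hit the server and populate the cache
  requestService.getCall('/api/user-settings',
    function (settings) { $scope.settings = settings; },
    function (error) { console.error(error); },
    false);

  // later calls can pass true as the last argument to serve the cached copy instead
}]);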