NGINX JavaScript module with a session storage (like Redis) - javascript

I know there is a possibility to process each request via a JS script right inside the NGINX server.
I know there is the Lua Nginx module and the Lua Redis driver, and it's possible to write a script in Lua and use Redis right from the NGINX server.
However, I want to use the standard functionality of NGINX, and I prefer to code in JS. I wonder whether it is possible to use some session storage with njs, and how to do it. In particular, I would like to use Redis as the session storage.

If you want to avoid compiling and installing third-party modules for Nginx yourself, I suppose the best way to build session storage with njs and Redis is to use the built-in ngx_http_upstream_module module and set up something like this:
http {
    [...]
    upstream redis {
        server unix:/var/run/redis/nginx.sock;
    }
    [...]
    js_path conf.d/js/;
    js_import redismiddleware.js;
    [...]
    server {
        [...]
        location /redisadapter {
            internal;
            try_files #dummy #redis;
        }
        location /request-we-are-tracking-no-1/ {
            js_content redismiddleware.processRequestConditional;
        }
        [...]
        location /request-we-are-tracking-no-2/ {
            js_content redismiddleware.processRequestUnconditional;
        }
        [...]
    }
}
and the corresponding script
var queryString = require('querystring');

function processRequestConditional(request) {
    request.subrequest('/redisadapter', {
        method: 'POST',
        body: queryString.stringify({
            /* data to transfer to Redis */
        })
    }).then(function (response) {
        var reply = {};
        /**
         * Parse and check the feedback from Redis,
         * save some data from the feedback to the
         * reply object, do whatever else is planned,
         * run any additional routines, et cetera.
         */
        var redisReplyIsOk = true; // placeholder: derive this from the parsed Redis feedback
        if (redisReplyIsOk) {
            return reply;
        } else {
            throw new Error('Because we don\'t like you!');
        }
    }).then(function (data) {
        /**
         * Make one more subrequest to obtain the content
         * the client is waiting for.
         */
        request.subrequest('/secret-url-with-content').then(
            (response) => request.return(response.httpStatus, response.body)
        );
    }).catch(function (error) {
        /**
         * Reply to the client with "There will be no response".
         */
        request.return(403, error.message);
    });
}

function processRequestUnconditional(request) {
    request.subrequest('/redisadapter', {
        method: 'POST',
        body: queryString.stringify({
            /* data to transfer to Redis */
        })
    }).then(function (response) {
        /**
         * Parse and check the feedback from Redis,
         * do some things, run one or another additional
         * routine depending on the reply.
         */
    });
    request.subrequest('/secret-url-with-content').then(
        (response) => request.return(response.httpStatus, response.body)
    );
}

export default { processRequestConditional, processRequestUnconditional };
Short summary
Redis is listening and replying on the socket /var/run/redis/nginx.sock
The virtual internal location /redisadapter receives the requests from the njs script, forwards them to Redis and returns the replies to the njs method that started the request sequence
To exchange data with Redis for a given location, we take control over Nginx's standard flow and serve those locations with custom njs methods
Thus, in addition to the Redis-related exchange routines, the code in those methods must itself implement the complete subroutine of retrieving the requested content and delivering it back to the client, since we have taken this job away from Nginx
This is achieved by sets of local subrequests from the server to itself, which are transparent to clients; the njs method completes the set and delivers the requested content back to the client itself
The example above contains two methods, processRequestConditional and processRequestUnconditional, to show the two most common ways to handle this kind of task
The first one, processRequestConditional, demonstrates a subrequest chain: the content requested by the client is not fetched until the njs method has finished its primary task of transferring the next piece of data into the Redis session storage. Moreover, if the script is not satisfied with the Redis feedback, the content request is skipped altogether and the client gets a refusal message instead
The second method, processRequestUnconditional, transfers the data to the Redis storage in the same way as the first one, but this time the fate of the client's request does not depend on the Redis feedback, so the secondary request for content is issued at the same time as the primary one and runs in parallel while the script continues its exchange round with the session storage
Of course, my brief explanation leaves a lot of details behind the scenes, but I hope the basic concept is now clear
Feel free to ask additional questions

Related

Send data on configuration

I want to send asynchronous data to the node on configuration. I want to perform a SQL request to list some data in a .
On node creation, a server-side function is performed.
When it's done, a callback sends the data to the node configuration.
On node configuration, when the data is received, the list is created.
Alternatively, the binary could query the database every x minutes and build a cache that each node will use on creation; this would remove the asynchronous part of the code, even if it is no longer "live updated".
In fact, I'm stuck because I created the query and added it as below:
module.exports = function(RED) {
    "use strict";
    var db = require("../bin/database")(RED);
    function testNode(n) {
        // Create a RED node
        RED.nodes.createNode(this, n);
        // Store local copies of the node configuration (as defined in the .html)
        var node = this;
        var context = this.context();
        this.on('input', function (msg) {
            node.send({payload: true});
        });
    }
    RED.nodes.registerType("SQLTEST", testNode);
}
But I don't know how to pass data to the configuration node. I thought of Socket.IO, but is this a good idea, and is it even available? Do you know of any solution?
The standard model used in Node-RED is for the node to register its own admin http endpoint that can be used to query the information it needs. You can see this in action with the Serial node.
The Serial node edit dialog lists the currently connected serial devices for you to pick from.
The node registers the admin endpoint here: https://github.com/node-red/node-red-nodes/blob/83ea35d0ddd70803d97ccf488d675d6837beeceb/io/serialport/25-serial.js#L283
RED.httpAdmin.get("/serialports", RED.auth.needsPermission('serial.read'), function(req, res) {
    serialp.list(function (err, ports) {
        res.json(ports);
    });
});
Key points:
pick a url that is namespaced to your node type - this avoids clashes
the needsPermission middleware is there to ensure only authenticated users can access the endpoint. The permission should be of the form <node-type>.read.
Its edit dialog then queries that endpoint from here: https://github.com/node-red/node-red-nodes/blob/83ea35d0ddd70803d97ccf488d675d6837beeceb/io/serialport/25-serial.html#L240
$.getJSON('serialports', function(data) {
    //... does stuff with data
});
Key points:
here the url must not begin with a /. That ensures the request is made relative to wherever the editor is being served from - you cannot assume it is being served from /.
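Applying the same pattern to the SQLTEST node above, a minimal sketch could look like this (the /sqltest/options path, the sqltest.read permission and the db.listOptions helper are assumptions for illustration, not part of the original node):
// in the node's .js file: an admin endpoint namespaced to the node type
RED.httpAdmin.get("/sqltest/options", RED.auth.needsPermission('sqltest.read'), function(req, res) {
    // db.listOptions is a hypothetical helper that runs the SQL query
    db.listOptions(function (err, rows) {
        if (err) {
            res.status(500).json({ error: String(err) });
        } else {
            res.json(rows);
        }
    });
});
and in the node's .html edit dialog (note the relative URL, with no leading /):
// e.g. inside oneditprepare
$.getJSON('sqltest/options', function (data) {
    // populate the configuration list with the returned rows
});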

Caching on frontend or on backend

Right now I send my requests via ajax to the backend server, which does some operations, and returns a response:
function getData() {
    new Ajax().getResponse()
        .then(function (response) {
            // handle response
        })
        .catch(function (error) {
            // handle error
        });
}
The thing is that each time a user refreshes the website, every request is sent again. I've been thinking about caching them inside the local storage:
function getData() {
    if (Cache.get('getResponse')) {
        let response = Cache.get('getResponse');
        // handle response
        return true;
    }
    new Ajax().getResponse()
        .then(function (response) {
            // handle response
        })
        .catch(function (error) {
            // handle error
        });
}
This way if a user already made a request, and the response is cached inside the localStorage, I don't have to fetch data from the server. If a user changes values from the getResponse, I would just clear the cache.
Is this a good approach? If it is, is there a better way to do this? Also, should I cache backend responses the same way? What's the difference between frontend and backend caching?
Is this a good approach? It depends on what kind of data you are storing.
Be aware that everything stored on the frontend can be changed by the user, so this is a potential security vulnerability.
This is the main difference between backend and frontend caching: a backend cache can't be edited by the user.
If you decide to do frontend caching, here is how to do it:
localStorage.setItem('getResponse', JSON.stringify(response));
To retrieve the stored data from local storage:
var retrievedObject = JSON.parse(localStorage.getItem('getResponse'));
NOTE:
I assume that you are storing an object, not a string or a number. If you are storing a string, integer, float, etc., just remove the JSON.stringify / JSON.parse calls.
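To invalidate the cached entry after a POST/PUT/DELETE (as described in the question), removing the item is enough:
localStorage.removeItem('getResponse');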
The best practice is to use the Cache API, "a system for storing and retrieving network requests and their corresponding responses".
The Cache API is available in all modern browsers. It is exposed via the global caches property, so you can test for the presence of the API with a simple feature detection.
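For example, a minimal sketch (the 'rest-cache' name is just an illustration):
if ('caches' in window) {
    // The Cache API is supported; responses can be cached here instead of localStorage
    caches.open('rest-cache').then(function (cache) {
        // use cache.put(request, response) to store and cache.match(request) to read
    });
} else {
    // fall back to localStorage or plain requests
}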

Parse Server Node.js SDK: Alternative to Parse.User.become?

I want to completely dissociate my client app from Parse Server, to ease the switch to another BaaS or a custom backend in the future. As such, all client requests will point to a Node.js server which will make the requests to Parse on behalf of the user.
Client <--> Node.js Server <--> Parse Server
As such, I need the Node.js server to be able to switch between users so I can keep the context of their authentication.
I know how to authenticate and then keep the sessionToken of the user, and I've seen during my research that the "accepted" solution to this problem was to call Parse.User.disableUnsafeCurrentUser, then use Parse.User.become() to switch the current user to the one making the request.
But that feels hackish, and I'm pretty sure it will, sooner or later, lead to a race condition where the current user is switched before the request is made to Parse.
Another solution I found was to not care about Parse.User and use the masterKey to save everything from the server, but that would make the server responsible for the ACL.
Is there a way to make requests from different users other than those two?
Any request to the backend (query.find(), object.save(), etc) takes an optional options parameter as the final argument. This lets you specify extra permissions levels, such as forcing the master key or using a specific session token.
If you have the session token, your server code can make a request on behalf of that user, preserving ACL permissions.
Let's assume you have a table of Item objects, where we rely on ACLs to ensure that a user can only retrieve his own Items. The following code would use an explicit session token and only return the Items the user can see:
// fetch items visible to the user associated with `token`
function fetchItems(token) {
    return new Parse.Query('Item')
        .find({ sessionToken: token })
        .then((results) => {
            // do something with the items
        });
}
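The same options argument works for writes as well; here is a minimal sketch (the Item field is just illustrative):
// save a new Item on behalf of the user associated with `token`
function createItem(token, name) {
    var item = new Parse.Object('Item');
    item.set('name', name); // illustrative field
    return item.save(null, { sessionToken: token });
}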
become() was really designed for the Parse Cloud Code environment, where each request lives in a sandbox, and you can rely on a global current user for each request. It doesn't really make sense in a Node.js app, and we'll probably deprecate it.
I recently wrote a NodeJS application and had the same problem. I found that the combination of Parse.User.disableUnsafeCurrentUser and Parse.User.become() was not only hackish, but also caused several other problems I wasn't able to anticipate.
So here's what I did: I used Parse.Cloud.useMasterKey(); and then loaded the current user by session ID as if it were a regular user object. It looked something like this:
module.exports = function(req, res, next) {
    var Parse = req.app.locals.parse, query;
    res.locals.parse = Parse;
    if (req.session.userid === undefined) {
        res.locals.user = undefined;
        return next();
    }
    Parse.Cloud.useMasterKey();
    query = new Parse.Query(Parse.User);
    query.equalTo("objectId", req.session.userid);
    query.first().then(function(result) {
        res.locals.user = result;
        return next();
    }, function(err) {
        res.locals.user = undefined;
        console.error("error recovering user " + req.session.userid);
        return next();
    });
};
This code can obviously be optimized, but you can see the general idea. Upside: it works! Downside: no more use of Parse.User.current(), and you need to take special care in the backend that no situation arises where someone overwrites data without permission.

meteor - does having publication rules in lib folder present a security risk?

I am following a Meteor tutorial on EventedMind. We put the todos collection information in lib/collections/todos.js. The app was generated with Iron.
When I load the app in the browser I can plainly see the folder under Sources. It looks like:
Todos = new Mongo.Collection('todos');

// if server, define security rules
// server code and code inside methods are not affected by allow and deny
// these rules only apply when insert, update, and remove are called from untrusted client code
if (Meteor.isServer) {
    // first argument is the id of the logged-in user (null if not logged in)
    Todos.allow({
        // can do anything if you own the document
        insert: function (userId, doc) {
            return userId === doc.userId;
        },
        update: function (userId, doc, fieldNames, modifier) {
            return userId === doc.userId;
        },
        remove: function (userId, doc) {
            return userId === doc.userId;
        }
    });

    // The deny method lets you selectively override your allow rules
    // every deny callback must return false for the database change to happen
    Todos.deny({
        insert: function (userId, doc) {
            return false;
        },
        update: function (userId, doc, fieldNames, modifier) {
            return false;
        },
        remove: function (userId, doc) {
            return false;
        }
    });
}
My question is: does this pose a security threat? If a JavaScript file is stored in the lib directory, can it be hijacked by the client?
Never ever ever ever use Meteor.isServer. Instead put your server methods under /server. Code there is not served up to the client.
I disagree with redress that moving inserts to the server is more secure than doing inserts on the client. After all, a logged-in user can simply open the console and type Meteor.call('addPost', new_post_fields) (after having inspected a document in postsCollection to reverse-engineer the schema) and the server will happily execute that. The cost of doing the insert via a method call is that you lose latency compensation, one of the major benefits of Meteor. Your application will feel laggy because your inserts all require a server round-trip before they show up in the UI.
Let me be more specific in response to redress' comments:
If your method code is in /lib it will be visible to both the client and the server and will run once in each place, with latency compensation. It can be modified on the client but not on the server. But it can be seen.
If your method code is in /server it will only be visible to the server and will run without latency compensation.
Meteor.call() can be invoked from the console in the client with any arguments. Your method code needs to protect against such attacks.
I recommend using a combination of allow/deny rules on the server along with aldeed:simple-schema at a minimum to control what goes into your collections.
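A minimal sketch of that setup (the file names and schema fields are illustrative, and attachSchema assumes the companion aldeed:collection2 package):
// server/todos-rules.js -- lives under /server, so it is never shipped to the client
Todos.allow({
    insert: function (userId, doc) {
        return userId && userId === doc.userId;
    },
    update: function (userId, doc, fieldNames, modifier) {
        return userId === doc.userId;
    },
    remove: function (userId, doc) {
        return userId === doc.userId;
    }
});

// lib/collections/todos.js -- schema validation applied to every insert/update
Todos.attachSchema(new SimpleSchema({
    text: { type: String, max: 200 },
    userId: { type: String },
    createdAt: { type: Date }
}));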

REST service cache strategy with AngularJS

I have an AngularJS application and I want to cache the REST service responses. I found some libraries like angular-cached-resource which can do this by storing the data in the local storage of the web browser.
But sometimes I make POST / PUT / DELETE REST calls, after which some of the previously cached REST service responses need to be fetched again. It seems possible to simply delete the affected cached responses then, so that those calls hit the server the next time.
But what about when the server sends values such as expires or etag in the HTTP headers? Do I have to read the HTTP headers and react myself, or is there an AngularJS library which can handle this as well?
So whether I should hit the server or read from the local storage cache depends on the HTTP cache headers and on whether there was any PUT / POST / DELETE call whose response says that, for example, a "reload of every user settings element" is needed. I would then have to take this response and build a map which tells me that, for example, REST services A, C and F (the user-settings-related ones) need to hit the server again the next time they are executed, or when the cache expires according to the HTTP headers.
Is this possible with an AngularJS library, or do you have any other recommendations? I think this is similar to the Observer or PubSub pattern, isn't it?
One more thing: is it also possible to have something like PubSub without using a cache / local storage (so also no HTTP header cache controls)? In that case I cannot simply call the REST service, because it would hit the server, which I do not want in some circumstances (for example after a previous REST call whose response was the event "reload of every user settings element").
You can try something like this.
app.factory('requestService', ['$http', function ($http) {
    var cache = {};
    var service = {
        getCall: function (requestUrl, successCallback, failureCallback, getFromCache) {
            if (!getFromCache) {
                $http.get(requestUrl)
                    .success(function (response) {
                        successCallback(response);
                        cache[requestUrl] = response;
                    })
                    .error(function (response) {
                        failureCallback(response);
                    });
            } else {
                successCallback(cache[requestUrl]);
            }
        },
        postCall: function (requestUrl, paramToPass, successCallback, failureCallback, getFromCache) {
            if (!getFromCache) {
                $http.post(requestUrl, paramToPass)
                    .success(function (response) {
                        successCallback(response);
                        cache[requestUrl] = response;
                    })
                    .error(function (response) {
                        failureCallback(response);
                    });
            } else {
                successCallback(cache[requestUrl]);
            }
        }
    };
    return service;
}]);
This is just simple code I wrote to illustrate the concept. I haven't tested it; it's all yours.
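A minimal usage sketch, assuming the factory above (the controller name and URL are just illustrations):
app.controller('UserSettingsCtrl', ['requestService', function (requestService) {
    // first load: hit the server and cache the response
    requestService.getCall('/api/user-settings', function (settings) {
        // handle settings
    }, function (error) {
        // handle error
    }, false);
    // later calls can pass true as the last argument to read the cached copy instead of hitting the server
}]);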
