I have a question regarding Express (Connect) middleware.
What I'm trying to do is download DoubleClick Bid Manager reports, then parse and process them into my own MongoDB database.
My route looks as follows:
app.route('/v1/spends/')
  .get(dbmPolicy.isAllowed, buckets.read, buckets.check, reports.create, buckets.process, reports.update);
Here buckets.read reads files from Google Cloud Storage, buckets.check checks whether the report has already been processed into MongoDB, and reports.create creates the report that holds the metadata of the CSV. buckets.process then processes the data inside the CSV, and reports.update updates the previously created report if everything went successfully.
As I find it very difficult to test the above process, I'm starting to doubt whether this is the correct way to implement the chain of processes. If it is, how do I test each middleware function individually on its behaviour?
You may want to look into the Async package, especially its waterfall method. That way you can run something like:
const async = require('async');

app.get('/v1/spends', function (req, res) {
  async.waterfall([
    dbmPolicy.isAllowed,
    buckets.read,
    buckets.check,
    reports.create,
    buckets.process,
    reports.update
  ], function (err, result) {
    // return early on error, otherwise two responses would be sent
    if (err) return res.status(500).send(err);
    res.status(200).send(result);
  });
});
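One caveat with this approach: waterfall tasks receive a plain callback, whereas Express middleware expects (req, res, next), so the functions above cannot be dropped into waterfall unchanged. A minimal, untested adapter (the wrap name is just for illustration) might look like this:

// Hypothetical adapter: bind req/res so each middleware can be driven
// by an async task callback; intermediate results stay attached to req.
function wrap(middleware, req, res) {
  return function (callback) {
    middleware(req, res, callback); // next(err) doubles as the task callback
  };
}

app.get('/v1/spends', function (req, res) {
  async.series(
    [dbmPolicy.isAllowed, buckets.read, buckets.check,
     reports.create, buckets.process, reports.update]
      .map(function (mw) { return wrap(mw, req, res); }),
    function (err) {
      if (err) return res.status(500).send(err);
      res.status(200).send(req.report); // assumes the chain stores its result on req
    }
  );
});

async.series is used in this sketch rather than waterfall because the middleware passes data via req rather than via return values. This shape also answers the testing half of the question: each middleware can be unit-tested in isolation by calling it with a stubbed req, res, and next and asserting on what it attaches to req.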
Say I have this endpoint on an express server:
app.get('/', async (req, res) => {
  var foo = await databaseGetFoo();
  if (foo == true) {
    foo = false;
    somethingThatShouldOnlyBeDoneOnce();
    await databaseSetFoo(foo);
  }
})
I think this creates a race condition if the endpoint is called twice simultaneously?
If so, how can I prevent this race condition from happening?
OK, so based on the comments, I've got a little better understanding of what you want here.
Assuming that somethingThatShouldOnlyBeDoneOnce is doing something asynchronous (like writing to a database), you are correct: a user (or users) making multiple calls to that endpoint will potentially cause that operation to happen repeatedly.
Using your comment about allowing a single comment per user, and assuming you've got middleware earlier in the middleware stack that can uniquely identify a user by session or something similar, you could naively implement something like this that should keep you out of trouble (usual disclaimer that this is untested, etc.):
let processingMap = {};

app.get('/', async (req, res, next) => {
  if (!processingMap[req.user.userId]) {
    // add the user to the processing map
    processingMap = {
      ...processingMap,
      [req.user.userId]: true
    };
    const hasUserAlreadySubmittedComment = await queryDBForCommentByUser(req.user.userId);
    if (!hasUserAlreadySubmittedComment) {
      // we now know we're the only comment in process
      // and the user hasn't previously submitted a comment,
      // so submit it now:
      await writeCommentToDB();
      delete processingMap[req.user.userId];
      res.send('Nice, comment submitted');
    } else {
      delete processingMap[req.user.userId];
      const err = new Error('Sorry, only one comment per user');
      err.statusCode = 400;
      next(err);
    }
  } else {
    // note: don't remove the map entry here -- the first request still
    // holds the lock and will clean it up itself when it finishes
    const err = new Error('Request already in process for this user');
    err.statusCode = 400;
    next(err);
  }
})
Since insertion into the processingMap is all synchronous, and Node can only be doing one thing at a time, the first request for a user to hit this route handler will essentially lock for that user until the lock is removed when we're finished handling the request.
BUT... this is a naive solution, and it breaks the rules for a 12-factor app. Specifically factor 6, which says your application should run as stateless processes. We've now introduced state into your application.
If you're sure you'll only ever run this as a single process, you're fine. However, the second you scale horizontally by deploying multiple nodes (via whatever method: PM2, Node's cluster module, Docker, K8s, etc.), you're hosed with the above solution. Node Server 1 has no idea about the local state of Node Server 2, so multiple requests hitting different instances of your multi-node application can't co-manage the state of the processing map.
The more robust solution would be to implement some kind of queue system, likely leveraging a separate piece of infrastructure like Redis. That way all of your nodes could use the same Redis instance to share state and now you can scale up to many, many instances of your application and all of them can share info.
I don't really have all the details on exactly how to go about building that out and it seems out of scope for this question anyway, but hopefully I've given you at least one solution and some idea of what to think about at a broader level.
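Still, for a flavor of the shared-state idea, here is a rough, untested sketch of a per-user lock held in Redis (assuming the ioredis client; the key name is made up for illustration):

const Redis = require('ioredis');
const redis = new Redis(); // shared by every instance of the app

app.get('/', async (req, res, next) => {
  const lockKey = `comment-lock:${req.user.userId}`;
  // NX: only set if the key doesn't already exist (i.e. take the lock);
  // EX 30: auto-expire so a crashed process can't hold the lock forever
  const acquired = await redis.set(lockKey, '1', 'EX', 30, 'NX');
  if (!acquired) {
    const err = new Error('Request already in process for this user');
    err.statusCode = 400;
    return next(err);
  }
  try {
    // ...same single-comment logic as in the in-memory version above...
  } finally {
    await redis.del(lockKey); // release the lock
  }
});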
I have a Node.js server which queries a MySQL database. It serves as an API endpoint that returns JSON, and also as the backend server for my Express application, where it returns the retrieved list as an object to the view.
I am looking into implementing flat-cache to improve the response time. Below is the code snippet.
const flatCache = require('flat-cache');
var cache = flatCache.load('productsCache');

// get all products for the given customer id
router.get('/all/:customer_id', flatCacheMiddleware, function (req, res) {
  var customerId = req.params.customer_id;
  // custom handler implemented for querying
  queryHandler.queryRecordsWithParam('select * from products where idCustomers = ? order by CreatedDateTime DESC', customerId, function (err, rows) {
    if (err) {
      res.status(500).send(err.message);
      return;
    }
    res.status(200).send(rows);
  });
});

// caching middleware
function flatCacheMiddleware(req, res, next) {
  // note the parentheses: without them, '+' binds tighter than '||'
  // and the req.url fallback can never be reached
  var key = '__express__' + (req.originalUrl || req.url);
  var cacheContent = cache.getKey(key);
  if (cacheContent) {
    res.send(cacheContent);
  } else {
    res.sendResponse = res.send;
    res.send = (body) => {
      cache.setKey(key, body);
      cache.save();
      res.sendResponse(body);
    };
    next();
  }
}
I ran the Node.js server locally, and the caching has indeed greatly reduced the response time.
However there are two issues I am facing that I need your help with.
Before adding the flatCacheMiddleware middleware, I received the response in JSON; now when I test, it sends me HTML. I am not too well versed in JS strict mode (planning to learn it soon), but I am sure the answer lies in the flatCacheMiddleware function.
So what do I modify in the flatCacheMiddleware function so it sends me JSON?
I manually added a new row to the products table for that customer, and when I called the endpoint, it still showed me the old rows. So at what point do I clear the cache?
In a web app it would ideally be when the user logs out, but if I am using this as an API endpoint (and even in a web app there is no guarantee that the user will log out the traditional way), how do I determine that new records have been added and the cache needs to be cleared?
Appreciate the help. If there are any other node.js caching related suggestions you all can give, it would be truly helpful.
I found a solution to the issue by parsing the cached content back into JSON.
Change line:
res.send(cacheContent);
To:
res.send(JSON.parse(cacheContent));
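Alternatively (an untested sketch), the middleware could replay the cached string with the JSON content type restored, which avoids re-parsing on every cache hit:

if (cacheContent) {
  // the cached body is a string, so set the content type explicitly
  // instead of letting Express default to text/html
  res.type('application/json').send(cacheContent);
}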
I created a 'brute force' cache invalidation method. Calling the clear method will clear both the cache file and the data stored in memory. You have to call it after a DB change. You can also try deleting a specific key using cache.removeKey('key');.
function clear(req, res, next) {
  try {
    // removes the cache file from disk and clears the in-memory data
    cache.destroy();
    // respond here rather than in a finally block, which would try to
    // send a second response after the error response below
    res.status(200).json({'message' : 'cache invalidated'});
  } catch (err) {
    logger.error(`cache invalidation error ${JSON.stringify(err)}`);
    res.status(500).json({
      'message' : 'cache invalidation error',
      'error' : JSON.stringify(err)
    });
  }
}
Note that calling the cache.save() function will remove other cached API responses. Changing it to cache.save(true) will 'prevent the removal of non visited keys' (as mentioned in the flat-cache documentation).
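To make that concrete, here is an illustrative (untested) way to wire it up; the route path is made up:

// expose the invalidation handler on its own route and call it
// whenever the products table changes
router.delete('/cache', clear);

// inside flatCacheMiddleware, replace cache.save() with:
cache.save(true); // true = keep keys that weren't visited during this run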
I have this Express application with MongoDB as the database and Handlebars as my server-side templating engine. I am not using AngularJS or Ajax in my application.
In one of the routes, I have to render the page as well as send over a JSON payload from the database. However, I am not able to achieve this.
Here is the code snippet of my route:
router.get('/disks', function(req, res, next) {
  places.find({"category": "disks"}, function(err, disks) {
    if (err) {
      throw err;
    }
    res.render('disks', {
      'risime': JSON.stringify(disks)
    });
    console.log(disks); // PROPERLY LOGS TO THE CONSOLE
  });
});
In the hbs template, I am trying to capture it, but I don't even think that it is valid JSON.
Here is how it gets logged on the client side:
[{"_id":"5704630a7d4cd367f8dsdce7","name":"Seagate",:"This awesome Hard disk",","categories":["SDD","256GB"]}]
What is the issue and how do I resolve it?
It's Handlebars that "HTML-escapes" your string (which is what you normally want).
If you don't want that, you can use the "triple-stash" notation, like this:
{{{risime}}}
You can read about this here: http://handlebarsjs.com/#html-escaping
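For example (a sketch; the script-tag placement and variable name are just for illustration), the unescaped output can be handed straight to client-side code:

<script>
  // triple-stash leaves the serialized JSON unescaped
  var disks = {{{risime}}};
  console.log(disks[0].name);
</script>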
I think you need to add this before rendering:
res.type('application/json');
That way the client will know this is JSON, not HTML or plain text, and it will be shown correctly.
I hope my answer will help you.
I've got some data from a JSON file, which I use in my HTML after fetching it with AngularJS like this:
$http.get('js/data.json').success(function(data) {
  $scope.data = data;
});
And I want to update this JSON file after clicking a button in the HTML:
<button ng-click="postData(id)">Post</button>
You cannot write to files via JavaScript alone (AngularJS).
You have to go via the server side and point your "post" request to a server-side script (e.g. PHP) and make that script do the job.
This sort of thing won't work. The file you are trying to write to would be on a server, and as it is right now, it would be a static resource. I'd suggest reading up on Angular resources, here. You can set up your server-side code to perform CRUD operations on the JSON file, but an actual database would be best. If you prefer a JSON format, MongoDB is your best choice; here is a link to MongoDB University, which offers free courses. I've done it in the past, and it's been great.
Now, for some actual help in your situation:
You can perform a GET request on your JSON file because it's seen as a static resource. The POST request, however, needs server-side scripting to do anything.
$http.get('api/YOUR_RESOURCE').success(function(data) {
  $scope.database = data;
});

$http.post('api/YOUR_RESOURCE', {
  data_key: data_value,
  data_key2: data_value2
}).success(function(data) {
  data[id].available = false;
});
This may be further ahead on your path to learning Angular, but here is a snippet of Node.js server code, with a Mongo database and Mongoose to handle the 'Schema', to help you get an idea of how this works:
var mongoose = require('mongoose'),
    YOUR_RESOURCE = mongoose.model('YOUR_RESOURCE');

app.route('/api/YOUR_RESOURCE')
  // This should be your GET request
  .get(function (req, res) {
    // Get all docs in resource
    YOUR_RESOURCE.find().exec(function (err, data) {
      if (err) {
        return res.status(400).send({
          message: SOME_ERROR_HANDLER
        });
      } else {
        res.json(data); // return list of all docs found
      }
    });
  })
  // Add new doc to database
  .post(function (req, res) {
    // The keys of the object sent from your Angular app should match
    // those of the model
    var your_resource = new YOUR_RESOURCE(req.body);
    your_resource.save(function (err) {
      if (err) {
        return res.status(400).send({
          message: SOME_ERROR_HANDLER
        });
      } else {
        // returns newly created doc to Angular after successful save
        res.json(your_resource);
      }
    });
  });
Here is an SO page with a list of resources on getting started with Node; I recommend Node because of its ease of use and the fact that it is written in JS. The MongoDB University lessons also walk through setting up your server for use with the database; you can choose between several flavors, such as Java, .NET, Python or Node.
There is a bit left out of the examples above, such as the Mongoose model and Node setup, but those are covered in the resources I've linked to on the page, if you choose to read them. Hope this helps :)
I am using the Inbox module for node to process incoming mail with the following function call:
client.listMessages(-1, function(err, messages) {
  messages.forEach(function(message) {
    client.createMessageStream(message.UID)
      .pipe(process.stdout, { end: false });
  });
});
This logs the mail to the console via process.stdout; however, I want to save the result to Mongo, or do other JavaScript things with it. How can I do that?
It would appear that createMessageStream returns a stream interface object. The methods to access the data are shown in that link.
As for saving the data into MongoDB, there is the basic driver, or modules such as Mongoose, which can provide you with methods to do this.
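As a rough, untested sketch of that idea (the connection string and collection name are made up for illustration; the stream handling relies on the standard 'data'/'end' events):

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mail', function (err, db) {
  if (err) throw err;
  client.listMessages(-1, function (err, messages) {
    messages.forEach(function (message) {
      var chunks = [];
      client.createMessageStream(message.UID)
        .on('data', function (chunk) { chunks.push(chunk); }) // buffer the raw body
        .on('end', function () {
          // save the complete message instead of piping it to stdout
          db.collection('messages').insertOne({
            uid: message.UID,
            raw: Buffer.concat(chunks).toString('utf8')
          }, function (err) {
            if (err) console.error(err);
          });
        });
    });
  });
});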