I have the following code in my Node/Express project:
exports.getProductById = function(req, res) {
  db.get('product', req.params.id).then(function(res2) {
    res.render('product/get', { title: 'Product', product: res2.body, categories: db.categories });
  })
  .fail(function(err) {
    res.redirect('/');
  });
};
This works, but it seems like it could be a lot better; my lack of JavaScript experience seems to be the obstacle. I would envision something like this...
var callback = function(res2) {
  res.render('product/get', { title: 'Product', product: res2.body, categories: db.categories });
};

var errorCallback = function(err) {
  res.redirect('/');
};

exports.getProductById = function(req, res) {
  db.get('product', req.params.id)
    .then(callback)
    .fail(errorCallback);
};
Of course, the problem here is that I have no idea how to pass the res object to the callback. What is the best pattern for handling this type of scenario?
Hey, I wrote the orchestrate.io client library for JavaScript, and your first run-through is actually good. I think you'll find it easier to keep req and res within the getProductById function. If you wish to avoid clutter, you can write additional functions that handle the data, format it, etc., and return those results back to getProductById. That will make it easier to keep track of what is going on.
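For example, a minimal sketch of that idea (formatProduct is a hypothetical helper, not part of the orchestrate client; req and res stay in the handler's closure):

function formatProduct(body) {
  // Reusable, testable formatting logic lives outside the handler
  return { title: 'Product', product: body, categories: db.categories };
}

exports.getProductById = function(req, res) {
  db.get('product', req.params.id)
    .then(function(result) {
      // res is still in scope here because the callback is defined
      // inside getProductById
      res.render('product/get', formatProduct(result.body));
    })
    .fail(function(err) {
      res.redirect('/');
    });
};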
I am really new to JS, and even newer to Node.js. Using "traditional" programming paradigms, my file looks like this:
var d = require('babyparse');
var fs = require('fs');
var file = fs.readFile('SkuDetail.txt');
d.parse(file);
So this has many problems:
It's not asynchronous
My file is bigger than the default max file size (this one is about 60 MB), so it currently breaks (not 100% sure if that's the reason).
My question: how do I load a big file (and it will be significantly bigger than 60 MB for future uses) asynchronously, parsing as I get the information? And as a follow-up: how do I know when everything is completed?
You should create a ReadStream. A common pattern looks like this; you can parse data as it becomes available on the data event.
function readFile(filePath, done) {
  var
    stream = fs.createReadStream(filePath),
    out = '';

  // Make done optional
  done = done || function(err) { if (err) throw err; };

  stream.on('data', function(data) {
    // Parse data
    out += data;
  });

  stream.on('end', function() {
    done(null, out); // All data is read
  });

  stream.on('error', function(err) {
    done(err);
  });
}
You can use the method like this:
readFile('SkuDetail.txt', function(err, out) {
  // Handle error
  if (err) throw err;
  // File has been read and parsed
});
If you add the parsed data to the out variable, the entire parsed file will be sent to the done callback.
It already is asynchronous; JavaScript in Node is asynchronous, so no extra effort is needed on your part. Does your code even work, though? Your parse should be inside a callback passed to readFile; otherwise readFile returns before the read completes and file never gets the data.
In normal situations, any I/O code you write will be "skipped" over, and the code after it, which may look more direct, will be executed first.
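A minimal correction of the original snippet (a sketch: passing 'utf8' so the callback receives a string, which babyparse can parse):

var d = require('babyparse');
var fs = require('fs');

fs.readFile('SkuDetail.txt', 'utf8', function(err, file) {
  if (err) throw err;
  // The file contents only exist here, inside the callback
  var parsed = d.parse(file);
});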
For the first question: since you want to process chunks, Streams might be what you are looking for. #pstenstrm has an example in his answer.
Also, you can check this Node.js documentation link for Streams: https://nodejs.org/api/fs.html#fs_fs_createreadstream_path_options
If you want a brief description and example of Streams, check this link: http://www.sitepoint.com/basics-node-js-streams/
You can pass a callback to the fs.readFile function to process the content once the file read is complete. This would answer your second question.
fs.readFile('SkuDetail.txt', function(err, data) {
  if (err) {
    throw err;
  }
  processFile(data);
});
You can see Get data from fs.readFile for more details.
Also, you could use Promises for cleaner code with other added benefits. Check this link: http://promise-nuggets.github.io/articles/03-power-of-then-sync-processing.html
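For example, a minimal sketch of a promise-based version, assuming a native (or polyfilled) Promise constructor; readFileAsync is just an illustrative name, and processFile is the same placeholder as above:

var fs = require('fs');

// Wrap fs.readFile so it returns a promise instead of taking a callback
function readFileAsync(path) {
  return new Promise(function(resolve, reject) {
    fs.readFile(path, 'utf8', function(err, data) {
      if (err) { reject(err); } else { resolve(data); }
    });
  });
}

readFileAsync('SkuDetail.txt')
  .then(function(data) {
    return processFile(data);
  })
  .catch(function(err) {
    console.error(err);
  });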
I see several recommendations for adhering to an 80-character max line length when writing JavaScript, e.g. Google, npm, Node.js, Crockford. In certain cases, however, I don't see how best to do it. Take the example below:
MongoClient.connect('mongodb://localhost:27017/sampleDatabase', function(err, database) {
  if (err) {
    throw err;
  }
  db = database;
});
That would throw a jshint warning since it exceeds 80 characters. Now, would you choose to ignore the warning in this instance, or instead opt for a solution such as
MongoClient.connect('mongodb://localhost:27017/sampleDatabase',
  function(err, database) {
    if (err) {
      throw err;
    }
    db = database;
  }
);
If you can reuse the url variable, Andy's is a great option. If it's a one-shot, as calls like this often are, I'd probably do something like this...
/*jslint sloppy:true, white:true, browser:true, maxlen:80 */
/*global MongoClient */
var dbErrHand, db;

dbErrHand = function(err, database) {
  if (err) {
    throw err;
  }
  db = database; // Killing me with the global spaghetti! ;^)
};

MongoClient.connect(
  'mongodb://localhost:27017/sampleDatabase',
  dbErrHand
);
That way, your code is more expressive and you know what db you're connecting to, though Andy just needs to change var url to var mongoSampleDb or similar to get the same advantage.
I like to pull the functions out so you can see visually that they're reasonably discrete pieces of logic, even though I realize this one isn't over 80 chars if you put it on its own lines in the connect call. I'd also think that code is a candidate for reuse elsewhere in your app.
It's also a good general habit to pull out functions so you don't accidentally create a function inside of a loop,[1] as in the sketch below.
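A quick, hedged illustration of that pitfall (urls is a hypothetical array of connection strings): every pass through the loop allocates a new function, and because i is var-scoped, all of the callbacks share its final value.

var dbs = [], i;
for (i = 0; i < urls.length; i += 1) {
  // JSLint/JSHint flag this: a function created inside a loop
  MongoClient.connect(urls[i], function(err, database) {
    if (err) { throw err; }
    dbs[i] = database; // by the time this runs, i === urls.length
  });
}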
And, of course, there's a chance that you still end up with insanely long strings, and have to do something like...
MongoClient.connect(
  'mongodb://whoLetFredNameThisServerBecauseItsTooLong.FredsCompany.com:27017'
    + '/sampleDatabase',
  dbErrHand
);
Good whitespace in nested code exacerbates the problem, which might be another +1 for Andy's idea of setting up variables like this outside of any loops/ifs/nested code. At some point, it might be worth turning maxlen off.
But bracket handling is one of the most subjective decisions there is, especially in JavaScript, where there's no sniff of a great, a priori answer. Some bristle like crazy at my parameter-per-line code, or would prefer the ( on its own line, like this...
MongoClient.connect
(
  'mongodb://localhost:27017/sampleDatabase',
  dbErrHand
);
Surprisingly, JSLint still allows you plenty of room for self-expression! ;^)
[1] Nepotistic question link alert, though it was the first one I googled up. Probably an example of Google biasing my results for, um, me.
I would separate out the url into a new variable.
var url = 'mongodb://localhost:27017/sampleDatabase';

MongoClient.connect(url, function(err, database) {
  if (err) {
    throw err;
  }
  db = database;
});
I'm having trouble looping through a results set and then running another query for each result in order to attach some additional data.
I looked around on the Github issues for Sequelize.js and found that it's possible to use QueryChainer to achieve this. However, since I'm still a novice at this, I haven't been able to figure out how to do so. Execution of my for loop happens asynchronously, resulting in a response being sent without the additional data.
Here's how I'm doing it right now:
// in blogs.js
var db = require('../models');

exports.findPosts = function(req, res) {
  if (req.User) {
    req.User.getPosts({
      include: [
        { model: db.User, attributes: ['displayName', 'profilePicture'] },
        { model: db.Game, attributes: ['name'] }
      ]
    }).success(function(posts) {
      console.log('Starting the loop...');
      posts.forEach(function(post) {
        db.PostLikes.count({ where: { PostId: post.id } }).success(function(likes) {
          post.Likes = likes;
          console.log('Fetched Likes for post #' + post.id);
        });
      });
      console.log('This should appear last!');
      res.json(posts);
    }).error(function(err) {
      console.log(err);
      res.send(404);
    });
  } else {
    res.send(404);
  }
};
The above code results in the response being sent without the Likes attribute appended to each post item. The console.logs appear out of order due to the asynchronous nature of the db.PostLikes.count() call.
It would be immensely helpful if someone could show me a way to use QueryChainer to achieve this.
To whoever is looking for a solution to this issue:
I resolved it with help from one of the maintainers of SequelizeJS over on Github. Part of my solution was to use the 'bluebird' library for Promises. It might be possible to use a different library, but I haven't tested my code with any other yet.
Here's a Gist of my code. I hope this helps anyone running into this issue :)
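In outline, the approach looked like this (a hedged sketch, not the exact Gist code): wrap each count in a bluebird promise, run them all with Promise.all, and only send the response once everything has resolved.

var Promise = require('bluebird');

// Wrap the old-style .success()/.error() API in a promise, per post
function countLikes(post) {
  return new Promise(function(resolve, reject) {
    db.PostLikes.count({ where: { PostId: post.id } })
      .success(function(likes) {
        post.Likes = likes;
        resolve(post);
      })
      .error(reject);
  });
}

// Inside the getPosts success handler:
Promise.all(posts.map(countLikes))
  .then(function(postsWithLikes) {
    res.json(postsWithLikes); // runs only after every count has finished
  })
  .catch(function(err) {
    console.log(err);
    res.send(404);
  });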
I'm new at JavaScript and I've hit a wall hard here. I don't even think this is a Sequelize question; it's probably more about JavaScript behavior.
I have this code:
sequelize.query(query).success(function(row) {
  console.log(row);
});
The row variable contains the value(s) that I want, but I have no idea how to access them other than printing to the console. I've tried returning the value, but it isn't returned to where I expect it, and I'm not sure where it goes. I want my row, but I don't know how to obtain it :(
Using JavaScript on the server side like that requires that you use callbacks. You cannot "return" the value like you want; you can, however, write a function to perform actions on the results.
sequelize.query(query).success(function(row) {
  // Here is where you do your stuff on row

  // End the process
  process.exit();
});
A more practical example, in an express route handler:
// Create a session
app.post("/login", function(req, res) {
  var username = req.body.username,
      password = req.body.password;

  // Obviously, do not inject this directly into the query in the real
  // world ---- VERY BAD.
  return sequelize
    .query("SELECT * FROM users WHERE username = '" + username + "'")
    .success(function(row) {
      // Also - never store passwords in plain text
      if (row.password === password) {
        req.session.user = row;
        return res.json({ success: true });
      }
      else {
        return res.json({ success: false, incorrect: true });
      }
    });
});
Ignore the injection and plain-text password issues in the example - kept that way for brevity.
Functions act as "closures" by storing references to any variables in the scope the function is defined in. In my example above, the correct res value is stored for reference, per request, by the callback I've supplied to sequelize. The direct benefit of this is that more requests can be handled while the query is running, and once it's finished, more code will be executed. If this weren't the case, your process (assuming Node.js) would wait for that one query to finish, blocking all other requests. That is not desired. The callback style means your code can do what it needs and move on, waiting for important or processor-heavy pieces to finish up and call a function once complete.
EDIT
The API for handling callbacks has changed since answering this question. Sequelize now returns a Promise from .query so changing .success to .then should be all you need to do.
According to the changelog
Backwards compatibility changes:
Events support have been removed so using .on('success') or .success()
is no longer supported. Try using .then() instead.
According to the Raw queries documentation, you would use something like this now:
sequelize.query("SELECT * FROM `users`", { type: sequelize.QueryTypes.SELECT })
  .then(function(users) {
    console.log(users);
  });
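One hedged addition in the same spirit: with the promise API, error handling moves from .error() to .catch():

sequelize.query("SELECT * FROM `users`", { type: sequelize.QueryTypes.SELECT })
  .then(function(users) {
    console.log(users);
  })
  .catch(function(err) {
    // Errors that .error() used to receive now land here
    console.error(err);
  });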
I'm using a NodeJS module called node-github, which is a wrapper around the Github API, to get some statistics about certain users, such as their followers:
var getFollowers = function(user, callback) {
  github.user.getFollowers(user, function(err, res) {
    console.log("getFollowers", res.length);
    callback(err, res);
  });
};
...
getFollowers({ user: 'mike' }, function(err, followers) {
  if (err) {
    console.log(err);
  }
  else {
    console.log(followers);
  }
});
Apparently, Github limits call results to a maximum of 100 (via the per_page parameter), and utilizes the Link header to let you know a 'next page' of results exists.
The module I'm using provides several easy methods to handle the Link header, so you won't need to parse it. Basically, you can call github.hasNextPage(res) or github.getNextPage(res) (where res is the response you received from the original github.user.getFollowers() call)
What I'm looking for is the right approach/paradigm for having my function return all results, across all pages. I dabbled a bit with a recursive function, and though it works, I can't help feeling there may be a better approach.
This answer could serve as a good approach to handling all future Link header calls - not just Github's - if the standard catches on.
Thanks!
Finally, I resorted to recursion (remember the two great weaknesses of recursion: maintaining it, and explaining it :)). Here's my current code, if anyone is interested:
var getFollowers = function(callback) {
  var followers = [],
      // Recursive page handler: accumulate each page, then fetch the next
      handlePage = function(error, result) {
        followers = followers.concat(result);
        if (github.hasNextPage(result)) {
          github.getNextPage(result, handlePage);
        }
        else {
          callback(error, followers);
        }
      };

  github.user.getFollowers(options, handlePage);
};
But I found out that if you just need the total number of followers, you can use the getLastPage function to get the number of followers on the last page, and then:
total_num_of_followers = ((total_num_of_pages - 1) * results_per_page) + num_of_followers_on_last_page
For example, 6 pages at 100 results per page, with 37 followers on the last page, gives (6 - 1) * 100 + 37 = 537 followers.