fs.readFileSync doesn't wait - javascript

fs.readFileSync(process.env.id_to_name, 'utf-8', function(err, data) {
    if (err) throw err;
    /*
    a lot of stuff
    */
    fs.mkdirSync(`clips`);
    fs.writeFileSync(`clips/recap.json`, '{"players":[]}', 'utf8');
});
fs.readFileSync(`clips/recap.json`, 'utf-8', function(err, data) {
    var info = JSON.parse(data);
    info.players.push(/* stuff */);
    fs.writeFileSync(`clips/recap.json`, JSON.stringify(info), 'utf8', function (err) { });
});
I don't know what I'm doing wrong here.
The second fs.readFileSync just doesn't wait for the first one to end, so it doesn't find the file it should read.

You're using fs.readFileSync() incorrectly. It does not accept a callback as an argument and does not call a callback. See doc here.
I don't know whether you meant to show us fs.readFile(), which does accept a callback, or not.
fs.readFileSync() returns its result directly (in a synchronous fashion) from the function call as in:
let data = fs.readFileSync(someFileName, someEncoding);
It does not use a callback and it throws an exception if there's an error reading the file.
If you meant to show us an example using fs.readFile(), then it's a non-blocking, asynchronous call. If you want your second file read to wait until the first one is done, you would have to put the second file read INSIDE the completion callback of the first.
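For example, a minimal sketch of that nesting with the asynchronous fs.readFile(), reusing the paths and JSON handling from the question above:

fs.readFile(process.env.id_to_name, 'utf-8', function(err, data) {
    if (err) return console.error(err);   // real error handling goes here
    /*
    a lot of stuff
    */
    fs.mkdirSync(`clips`);
    fs.writeFileSync(`clips/recap.json`, '{"players":[]}', 'utf8');

    // Only start the second read once the first read (and the write) is done:
    fs.readFile(`clips/recap.json`, 'utf-8', function(err2, recap) {
        if (err2) return console.error(err2);
        var info = JSON.parse(recap);
        info.players.push(/* stuff */);
        fs.writeFileSync(`clips/recap.json`, JSON.stringify(info), 'utf8');
    });
});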
Also, please never write code like if (err) throw err; inside an asynchronous callback. It's a pointless piece of code that does nothing useful: nothing can catch that throw, and you have no way to communicate the error back. It is unfortunate that the nodejs documentation shows it regularly in its examples, because real-world code should never do that. You need to write real error handling where you either handle the error in some way and continue, abort the process, or communicate the error back (probably with a callback) so the calling code can see and handle it. Exceptions thrown in asynchronous callbacks do NOT propagate back to the caller; they end up back in the bowels of the file system code where the callback was triggered, where you cannot catch or handle them.
If you really mean to be using all synchronous calls, then you would write your code like this:
try {
    let data1 = fs.readFileSync(process.env.id_to_name, 'utf-8');
    // other code here
    fs.mkdirSync(`clips`);
    fs.writeFileSync(`clips/recap.json`, '{"players":[]}', 'utf8');
    let data2 = fs.readFileSync(`clips/recap.json`, 'utf-8');
    var info = JSON.parse(data2);
    info.players.push(/* stuff */);
    fs.writeFileSync(`clips/recap.json`, JSON.stringify(info));
} catch(err) {
    // handle errors here
    console.log(err);
}
Note that this code can likely only be run once without error because fs.mkdirSync('clips') will fail unless you set the recursive flag.
Hint, you can use require() to load and parse a JSON file in one step so you don't have to read it and then parse it into a Javascript object.
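For example (a short sketch of those two hints, using the directory and file from the question):

// Won't throw if the directory already exists (Node 10.12+):
fs.mkdirSync(`clips`, { recursive: true });

// require() reads and parses a JSON file in one step
// (note that the result is cached after the first load):
var info = require('./clips/recap.json');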

Related

Why does node.js look for error as the first parameter in a callback

I have the following simple code; however, the "data" variable doesn't contain the contents of input.txt.
var fs = require("fs");
fs.readFile('input.txt', function (data) {
    console.log(data.toString());
});
console.log("Program Ended");
The code below works because node.js reads the first parameter, err, and the input.txt contents come from the 2nd parameter.
var fs = require("fs");
fs.readFile('input.txt', function (err, data) {
    if (err) return console.error(err);
    console.log(data.toString());
});
console.log("Program Ended");
Is this just a node.js thing to look for the error in the first parameter? What if I did not want to check for an error in the callback function?
It's convention to pass error as the first parameter to a callback function. It's first to prevent it from being ignored. You don't have to check it, of course, but if there is an error it's likely that your data is bad or meaningless anyway.
The reason that fs.readFile('input.txt', function (data) { doesn't work is that the error is passed into your data variable, since it is the first parameter. What you actually name the parameters doesn't matter; the parameter order is decided by fs.readFile.
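To make the convention concrete, here is a simplified illustration (not the real fs source) of how a function decides what each callback argument means, purely by position:

// A fake async "read" that follows the error-first convention:
function readFileLike(path, callback) {
    setTimeout(function () {
        var failed = false; // pretend we tried to read the file here
        if (failed) {
            callback(new Error('read failed'));      // error goes in the FIRST argument
        } else {
            callback(null, Buffer.from('contents')); // success: err is null, data is second
        }
    }, 10);
}

readFileLike('input.txt', function (err, data) {
    if (err) return console.error(err);
    console.log(data.toString());
});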

Request functions are executing out of order in node.js

I am trying to make a scraper, but I can't seem to get the code to execute in the right order. I need the album/albumart request function to execute after the title and artist function. I know node.js is weird about this sort of thing, but I've tried moving things all over and still no luck.
Here's the Code
Please pardon the mess and excess debug code.
Current output:
TESTED!!!
req
No error
Pentemple - Pazuzu 2
Now Playing: Pentemple - Pazuzu 2
10
Pentemple
10
Pentemple
1
{ artist: '',
title: '',
album: '',
albumArt: '',
testval: 'TESTED!!!' }
xtest
Because the request calls are asynchronous, the responses might not arrive in order; to keep the order, you need to make the next request call in the previous request's callback. Below is an example of the same -
request(url1, function(err, res, html){
    if(!err)
    {
        // url1 successfully returned, call another dependent url
        request(url2, function(err2, res2, html2){
            if(!err2)
            {
                // url2 successfully returned, go on with another request call and so on ...
            }
        });
    }
    else
    {
        // first call failed, return gracefully here --
        callback(err); // if you have any
    }
})
However, as suggested in an earlier answer as well, this is an anti-pattern and will result in messy, cluttered code known as the pyramid of doom or callback hell.
I would suggest going with the wonderful async npm module, and then the same code can be written as -
var async = require('async');
async.waterfall([
    function(callback) {
        request(url1, function(error, res, html){
            callback(error, res, html);
        });
    },
    function(res1, html1, callback) {
        request(url2, function(error, res, html){
            callback(error, res1, html1, res, html);
        });
    } // ... AND SO ON
], function (err, result) {
    // the result contains the response sent by the last request callback
    if(!err)
    {
        // use your data
    }
});
JavaScript is asynchronous. If the requests depend on each other, I recommend using callbacks so that when one request completes, it calls the next one.
In most cases performing a request in JavaScript is asynchronous in nature. That means that requests do not block the entire process. To perform an action when the request is done, callbacks are used. Callbacks are functions that are added to the event loop queue once the request reaches the finished state. The easiest way (but certainly not the best one) to make requests run one after another is to call the second request in the first request's callback, the third request in the second's callback, and so on.
request(profileurl, function (error, response, html) {
    console.log("req");
    if (!error) {
        // ...
        request(albumurl, function (error, response, html) {
            if (!error) {
                // ...
                request(albumurl, function (error, response, html) {
                    // ...
                });
            }
        });
    } else {
        console.log("ERROR: " + error);
    }
});
But such a practice is considered an anti-pattern and is called the Pyramid of Doom, because nested callbacks make the code unreadable, hard to test and hard to maintain.
Good practice is to use promises. They come out of the box with ES2015. But if you use ES5, you should use an additional module for them, like request-promise or Q.
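For example, with the request-promise module mentioned above, the two dependent requests from this question could be chained roughly like this (a sketch, not a drop-in replacement for the scraper):

var rp = require('request-promise'); // promisified wrapper around request

rp(profileurl)
    .then(function (profileHtml) {
        // first page done: parse title/artist here ...
        return rp(albumurl);         // returning a promise chains the next request
    })
    .then(function (albumHtml) {
        // second page done: parse album/album art here ...
    })
    .catch(function (err) {
        // any error from either request lands here
        console.log("ERROR: " + err);
    });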

Cannot read file with nodejs

I use the following code to read a file from my desktop. When I run the server and make a request, I don't see anything in the debugger.
What am I missing here?
fs = require('fs');
fs.readFile('C:\Users\i123\Desktop\test.txt', 'utf8', function (err, data) {
    if (err) {
        return console.log(err);
    }
    console.log(data);
    res.send(data);
});
It's hard to know all the things that might be wrong here since you only show a small piece of your code, but one thing that is wrong is the filename string. The \ character in JavaScript is an escape character, so the string 'C:\Users\i123\Desktop\test.txt' is not what you want it to be. If you really need backslashes in the string for a Windows filename, then you would need to use this:
'C:\\Users\\i123\\Desktop\\test.txt'
Other things I notice about your code:
Returning a value from the readFile() callback does nothing useful; the return value just goes back into the bowels of the async file I/O code, which ignores it.
When you get a file error, you aren't doing anything with res, which presumably means this route sends no response at all and the browser is left waiting forever.
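A minimal sketch with both points addressed, assuming an Express-style route handler (the question's code uses res.send(), so Express is assumed here):

var express = require('express');
var fs = require('fs');
var app = express();

app.get('/test', function (req, res) {
    fs.readFile('C:\\Users\\i123\\Desktop\\test.txt', 'utf8', function (err, data) {
        if (err) {
            console.log(err);
            return res.status(500).send('Could not read file'); // always answer the client
        }
        res.send(data);
    });
});

app.listen(3000);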

Call a function in node.js

I am new to node.js. Here I write a sample function in node.js to print the contents of a JSON file, as follows.
exports.getData =function(callback) {
readJSONFile("Conf.json", function (err, json) {
if(err) { throw err; }
console.log(json);
});
console.log("Server running on the port 9090");
What I am doing here is I just want to read a JSON file and print its contents to the console. But I do not know how to call the getData function. While running this code it only prints "Server running on the port 9090", not my json contents.
I know the above code is not correct
How can I call a function in node.js and print the json contents?
Node.js is just regular javascript. First off, it seems like you are missing a }. Since it makes the question easier to understand, I will assume that your console.log("Server... is outside exports.getData.
You would just call your function like any other:
...
console.log("Server running on the port 9090");
exports.getData();
I would note that you have a callback argument in your getData function but you are not calling it. Perhaps it is meant to be called like so:
exports.getData = function(callback) {
    readJSONFile("Conf.json", function (err, json) {
        if(err) { throw err; }
        callback(json);
    });
}

console.log("Server running on the port 9090");
exports.getData(function (json) {
    console.log(json);
});
Truthfully, your getData function is a little redundant without any more content to it since it does nothing more than just wrap readJSONFile.
Don't take this the wrong way, but your code appears to be a mixed up mess of unrelated examples. I recommend you start by learning the basics of JavaScript and node.js (for example, read Eloquent JavaScript and Felix's Node.js Beginners Guide).
But on to your code. First of all, you are creating a function (called getData) and exporting it. Then you're printing "Server running on the port 9090". There is no server code in your script, and the function you created is never executed.
I think this is what you intended to write:
readJSONFile("Conf.json", function (err, json) {
    if(err) { throw err; }
    console.log(json);
});
Assuming that readJSONFile is a real function.
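readJSONFile is not a built-in Node function; if it doesn't already exist somewhere in your project, a minimal version could look like this:

var fs = require('fs');

// Minimal helper with the usual (err, result) callback signature:
function readJSONFile(filename, callback) {
    fs.readFile(filename, 'utf8', function (err, data) {
        if (err) return callback(err);
        try {
            callback(null, JSON.parse(data));
        } catch (parseErr) {
            callback(parseErr); // invalid JSON is reported as an error too
        }
    });
}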

error handling in asynchronous node.js calls

I'm new to node.js although I'm pretty familiar with JavaScript in general. My question is regarding "best practices" on how to handle errors in node.js.
Normally when programming web servers, FastCGI servers or web pages in various languages I'm using Exceptions with blocking handlers in a multi-threading environment. When a request comes in I usually do something like this:
function handleRequest(request, response) {
    try {
        if (request.url=="whatever")
            handleWhateverRequest(request, response);
        else
            throw new Error("404 not found");
    } catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'});
        response.end("Server error: "+e.message);
    }
}

function handleWhateverRequest(request, response) {
    if (something)
        throw new Error("something bad happened");
    response.end("OK");
}
This way I can always handle internal errors and send a valid response to the user.
I understand that with node.js one is supposed to make non-blocking calls, which obviously leads to a varying number of callbacks, like in this example:
var sys = require('sys'),
    fs = require('fs');

require("http").createServer(handleRequest).listen(8124);

function handleRequest(request, response) {
    fs.open("/proc/cpuinfo", "r",
        function(error, fd) {
            if (error)
                throw new Error("fs.open error: "+error.message);
            console.log("File open.");
            var buffer = new require('buffer').Buffer(10);
            fs.read(fd, buffer, 0, 10, null,
                function(error, bytesRead, buffer) {
                    buffer.dontTryThisAtHome(); // causes exception
                    response.end(buffer);
                }); //fs.read
        }); //fs.open
}
This example will kill the server completely because the exception isn't being caught.
My problem here is that I can't use a single try/catch anymore and thus can't generally catch any error that may be raised during the handling of the request.
Of course I could add a try/catch in each callback, but I don't like that approach because then it's up to the programmer not to forget a try/catch. For a complex server with lots of different and complex handlers this isn't acceptable.
I could use a global exception handler (preventing the complete server crash), but then I can't send a response to the user since I don't know which request led to the exception. This also means that the request remains unhandled/open and the browser waits forever for a response.
Does someone have a good, rock solid solution?
Node 0.8 introduces a new concept called "Domains". They are very roughly analogous to AppDomains in .NET and provide a way of encapsulating a group of IO operations. They basically allow you to wrap your request processing calls in a context-specific group. If this group throws any uncaught exceptions, they can be handled and dealt with in a manner which gives you access to all the scope- and context-specific information you require in order to successfully recover from the error (if possible).
This feature is new and has only just been introduced, so use with caution, but from what I can tell it has been specifically introduced to deal with the problem which the OP is trying to tackle.
Documentation can be found at: http://nodejs.org/api/domain.html
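A minimal sketch of the pattern those docs describe, applied to the server from the question (the request handling itself is only indicated):

var domain = require('domain');
var http = require('http');

http.createServer(function (req, res) {
    var d = domain.create();
    d.add(req);
    d.add(res);

    // Anything thrown inside d.run() (including inside async callbacks started
    // there) is routed to this handler instead of crashing the process:
    d.on('error', function (err) {
        console.error('Error while handling request', err);
        res.writeHead(500, {'Content-Type': 'text/plain'});
        res.end('Server error: ' + err.message);
    });

    d.run(function () {
        handleRequest(req, res); // your normal request handling goes here
    });
}).listen(8124);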
Check out the uncaughtException handler in node.js. It captures thrown errors that bubble up to the event loop.
http://nodejs.org/docs/v0.4.7/api/process.html#event_uncaughtException_
But not throwing errors is always a better solution. You could just do a return res.end('Unable to load file xxx');
This is one of the problems with Node right now. It's practically impossible to track down which request caused an error to be thrown inside a callback.
You're going to have to handle your errors within the callbacks themselves (where you still have a reference to the request and response objects), if possible. The uncaughtException handler will stop the node process from exiting, but the request that caused the exception in the first place will just hang there from the user point of view.
Very good question. I'm dealing with the same problem now. Probably the best way would be to use uncaughtException. The reference to the response and request objects is not a problem, because you can wrap them into your exception object, which is passed to the uncaughtException event. Something like this:
var HttpException = function (request, response, message, code) {
    this.request = request;
    this.response = response;
    this.message = message;
    this.code = code || 500;
}
Throw it:
throw new HttpException(request, response, 'File not found', 404);
And handle the response:
process.on('uncaughtException', function (exception) {
    exception.response.writeHead(exception.code, {'Content-Type': 'text/html'});
    exception.response.end('Error ' + exception.code + ' - ' + exception.message);
});
I haven't tested this solution yet, but I don't see a reason why it couldn't work.
I give an answer to my own question... :)
As it seems, there is no way around catching errors manually. I now use a helper function that itself returns a function containing a try/catch block. Additionally, my own web server class checks that the request handling function either calls response.end() or the try/catch helper function waitfor() (raising an exception otherwise). This largely avoids requests being mistakenly left unprotected by the developer. It isn't a 100% fool-proof solution but it works well enough for me.
handler.waitfor = function(callback) {
    var me = this;
    // avoid exception because response.end() won't be called immediately:
    this.waiting = true;
    return function() {
        me.waiting = false;
        try {
            callback.apply(this, arguments);
            if (!me.waiting && !me.finished)
                throw new Error("Response handler returned and did neither send a "+
                    "response nor did it call waitfor()");
        } catch (e) {
            me.handleException(e);
        }
    }
}
This way I just have to add an inline waitfor() call to be on the safe side.
function handleRequest(request, response, handler) {
    fs.read(fd, buffer, 0, 10, null, handler.waitfor(
        function(error, bytesRead, buffer) {
            buffer.unknownFunction(); // causes exception
            response.end(buffer);
        }
    )); //fs.read
}
The actual checking mechanism is a little more complex, but it should be clear how it works. If someone is interested I can post the full code here.
One idea: you could use a helper method to create your callbacks and make it your standard practice to use it. This still puts the burden on the developer, but at least you have a "standard" way of handling your callbacks such that the chance of forgetting one is low:
var callWithHttpCatch = function(response, fn) {
    try {
        fn && fn();
    }
    catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'}); //No
    }
}
<snipped>
var buffer = new require('buffer').Buffer(10);
fs.read(fd, buffer, 0, 10, null,
    function(error, bytesRead, buffer) {
        callWithHttpCatch(response, function () {
            buffer.dontTryThisAtHome(); // causes exception
        });
        response.end(buffer);
    }); //fs.read
}); //fs.open
I know that probably isn't the answer you were looking for, but one of the nice things about ECMAScript (or functional programming in general) is how easily you can roll your own tooling for things like this.
At the time of this writing, the approach I am seeing is to use "Promises".
http://howtonode.org/promises
https://www.promisejs.org/
These allow code and callbacks to be structured well for error management and also make the code more readable.
It primarily uses the .then() function.
someFunction().then(success_callback_func, failed_callback_func);
Here's a basic example:
var SomeModule = require('someModule');

var success = function (ret) {
    console.log('>>>>>>>> Success!');
}

var failed = function (err) {
    if (err instanceof SomeModule.errorName) {
        // Note: I've often seen the error definitions in SomeModule.errors.ErrorName
        console.log("FOUND SPECIFIC ERROR");
    }
    console.log('>>>>>>>> FAILED!');
}
someFunction().then(success, failed);
console.log("This line will appear instantly, since the last function was asynchronous.");
Two things have really helped me solve this problem in my code.
The 'longjohn' module, which lets you see the full stack trace (across multiple asynchronous callbacks).
A simple closure technique to keep exceptions within the standard callback(err, data) idiom (shown here in CoffeeScript).
ferry_errors = (callback, f) ->
  return (a...) ->
    try f(a...)
    catch err
      callback(err)
Now you can wrap unsafe code, and your callbacks all handle errors the same way: by checking the error argument.
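For readers not using CoffeeScript, a direct JavaScript translation of that helper looks roughly like this:

function ferry_errors(callback, f) {
    return function () {
        try {
            return f.apply(this, arguments);
        } catch (err) {
            callback(err); // route the exception into the normal (err, data) path
        }
    };
}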
I've recently created a simple abstraction named WaitFor to call async functions in sync mode (based on Fibers): https://github.com/luciotato/waitfor
It's too new to be "rock solid".
Using wait.for you can call async functions as if they were sync, without blocking node's event loop. It's almost the same as what you're used to:
var wait = require('wait.for');

function handleRequest(request, response) {
    //launch fiber, keep node spinning
    wait.launchFiber(handleInFiber, request, response);
}

function handleInFiber(request, response) {
    try {
        if (request.url=="whatever")
            handleWhateverRequest(request, response);
        else
            throw new Error("404 not found");
    } catch (e) {
        response.writeHead(500, {'Content-Type': 'text/plain'});
        response.end("Server error: "+e.message);
    }
}

function handleWhateverRequest(request, response, callback) {
    if (something)
        throw new Error("something bad happened");
    response.end("OK");
}
Since you're in a fiber, you can program sequentially, "blocking the fiber", but not node's event loop.
The other example:
var sys = require('sys'),
    fs = require('fs'),
    wait = require('wait.for');

require("http").createServer(function(req, res){
    wait.launchFiber(handleRequest, req, res); //handle in a fiber
}).listen(8124);

function handleRequest(request, response) {
    try {
        var fd = wait.for(fs.open, "/proc/cpuinfo", "r");
        console.log("File open.");
        var buffer = new require('buffer').Buffer(10);
        var bytesRead = wait.for(fs.read, fd, buffer, 0, 10, null);
        buffer.dontTryThisAtHome(); // causes exception
        response.end(buffer);
    }
    catch(err) {
        response.end('ERROR: '+err.message);
    }
}
As you can see, I used wait.for to call node's async functions in sync mode,
without (visible) callbacks, so I can have all the code inside one try-catch block.
wait.for will throw an exception if any of the async functions returns err!==null
more info at https://github.com/luciotato/waitfor
Also, in synchronous multi-threaded programming (e.g. .NET, Java, PHP) you can't return any meaningful information to the client when an unknown custom Exception is caught. You may just return HTTP 500 when you have no info regarding the Exception.
Thus, the 'secret' lies in filling a descriptive Error object; that way your error handler can map from the meaningful error to the right HTTP status plus, optionally, a descriptive result. However, you must also catch the exception before it arrives at process.on('uncaughtException'):
Step1: Define a meaningful error object
function appError(errorCode, description, isOperational) {
    Error.call(this);
    Error.captureStackTrace(this);
    this.errorCode = errorCode;
    //...other properties assigned here
};

appError.prototype.__proto__ = Error.prototype;

module.exports.appError = appError;
Step2: When throwing an Exception, fill it with properties (see step 1) that allow the handler to convert it to a meaningful HTTP result:
throw new appError(errorManagement.commonErrors.resourceNotFound, "further explanation", true)
Step3: When invoking some potentially dangerous code, catch errors and re-throw them while filling additional contextual properties into the Error object.
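A sketch of what that catch-and-rethrow might look like (riskyDatabaseCall and userId are placeholders for your own code, and the error code is only illustrative):

try {
    riskyDatabaseCall(userId);
} catch (err) {
    // Wrap in the descriptive error type from Step1 and add context:
    throw new appError(errorManagement.commonErrors.resourceNotFound,
        'Could not load user ' + userId + ': ' + err.message, true);
}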
Step4: You must catch the exception during the request handling. This is easier if you use some leading promises library (BlueBird is great) which allows you to catch async errors. If you can't use promises, then any built-in Node library will return errors in a callback.
Step5: Now that your error is caught and contains descriptive information about what happened, you only need to map it to a meaningful HTTP response. The nice part here is that you may have a centralized, single error handler that gets all the errors and maps them to HTTP responses:
//this specific example is using the Express framework
res.status(getErrorHTTPCode(error))

function getErrorHTTPCode(error)
{
    if(error.errorCode == commonErrors.InvalidInput)
        return 400;
    else if...
}
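In Express, that centralized handler is typically an error-handling middleware, recognized by its four-argument signature (a sketch building on getErrorHTTPCode() above and the app object of your Express server):

app.use(function (error, req, res, next) {
    res.status(getErrorHTTPCode(error))
       .json({ message: error.message || 'Internal error' });
});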
You may find other related best practices here.
