I am using Node.js with Express. I am trying to access a text file located in the same directory as that js file. The file structure goes like this:
- ProjectFolder
  |
  - many modules and folders
  - routes
    |
    - Index.js
    - input.txt
The simple code that I have tried is:
var data = fs.readFile('~/IdeaProjects/Title/routes/input.txt');
console.log("Synchronous read: " + data.toString());
console.log("Program Ended");
I did try different paths but nothing works. For your information, I am using Fedora as the OS.
The error I got was:
Error: ENOENT: no such file or directory, open '~/IdeaProjects/Title/routes/input.txt'
at Error (native)
Any suggestion about how to access that file, so that I can both read and write its contents, will be welcome. Looking for a detailed answer.
Node doesn't interpret characters that have special meaning to the shell, like ~, or shell variables like $HOME; the path string is passed to the filesystem as-is. You will need to use something like path.resolve() to build an absolute path, or use a relative path (e.g. IdeaProjects/Title/routes/input.txt, which is resolved against the current working directory).
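For example, a minimal sketch of building the path yourself instead of relying on the shell to expand ~ (assuming a Node version that provides os.homedir(), and that the script lives in the routes folder next to input.txt):

var path = require('path');
var os = require('os');

// expand "~" manually, since Node passes the path string to the OS untouched
var viaHome = path.join(os.homedir(), 'IdeaProjects/Title/routes/input.txt');

// or, more robustly, resolve relative to the directory of the current script
var viaDirname = path.join(__dirname, 'input.txt');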
Also, as @Gothdo pointed out, there is a discrepancy in the filename which will cause issues on case-sensitive file systems.
You will also need to either change fs.readFile() to fs.readFileSync() or add a callback to fs.readFile() like so:
fs.readFile(path.join(__dirname, 'input.txt'), function (err, data) {
    if (err) throw err;
    console.log("Synchronous read: " + data.toString());
    console.log("Program Ended");
});
Related
I am trying to create a new JSON file. Whenever I try to use the writeFile function, it just says there is no such file or directory.
This is the code I've tried.
fs.writeFile('./UserData/' + msg.author + ".json", JSON.stringify({
    firstSplitContent: firstSplit,
    secondSplitContent: secondSplit,
    thirdSplitContent: thirdSplit,
    fourthSplitContent: fourthSplit,
    fifthSplitContent: fifthSplit
}, null, 4), err => {
    if (err) throw err;
    console.log("File is created successfully.")
});
Error: ENOENT: no such file or directory, open './UserData/<#301910353967710208>.json'
Any help would be appreciated. Yes, I know that Windows won't accept those special characters; I switched it to author.id to remove them.
fs.writeFile won't create any directories that don't already exist.
I suspect that the directory in your path, UserData, doesn't exist, which is why you are getting that error.
Otherwise, your path may be wrong. Since you are using a relative path, the code will look for the UserData folder relative to the working directory, i.e. the directory the process was started from.
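If you want the folder to always sit next to your script rather than depend on the working directory, you can build the path from __dirname (a small sketch, using the author id mentioned in your edit):

var path = require('path');

// resolves to <directory of this script>/UserData/<id>.json, no matter where node was started from
var filePath = path.join(__dirname, 'UserData', msg.author.id + '.json');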
If the path is correct, try creating it first:
if (!fs.existsSync('./UserData')) {
    fs.mkdirSync('./UserData')
}
Or, you could use a package such as fs-extra that gives you the capability of creating any folders in the path that don't already exist.
This is probably the most relevant function to you, if you want to use fs-extra:
https://github.com/jprichardson/node-fs-extra/blob/master/docs/outputJson.md
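A minimal sketch using fs-extra's outputJson, which creates any missing parent directories before writing (reusing the variables from the question, with the author id rather than the full author object):

var fse = require('fs-extra');

// outputJson creates ./UserData if it doesn't exist yet, then writes the JSON file
fse.outputJson('./UserData/' + msg.author.id + '.json', {
    firstSplitContent: firstSplit,
    secondSplitContent: secondSplit,
    thirdSplitContent: thirdSplit,
    fourthSplitContent: fourthSplit,
    fifthSplitContent: fifthSplit
}, { spaces: 4 }, function (err) {
    if (err) throw err;
    console.log("File is created successfully.");
});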
You should follow Willianm's answer, but one thing I noticed: you are trying to use msg.author in your file name, but that's an object (a really big object), so you might want to consider changing it to msg.author.id.
I am copying a .jpg file to another directory (C:\myFaceApp\dropbox\exprtedFaces).
My source File : C:/myFaceApp/dropbox/faces/Monika1/1404039d-2be3-43bc-b20b-35c0f4a5954b/1404039d-2be3-43bc-b20b-35c0f4a5954b_00-00-04_crop.jpg
I am using the following block of code to copy it:
targetPath=opts.exportDir; //C:\myFaceApp\dropbox\exprtedFaces
fs.createReadStream(req.query.facePath).pipe(fs.createWriteStream(targetPath));
res.write(JSON.stringify({ OK: 1 }));
res.end();
I am getting an error like this:
Error: EISDIR: illegal operation on a directory, open 'C:\myFaceApp\dropbox\exprtedFaces'
Your problem is that you are attempting to write to a directory, not a file. createWriteStream takes a filename as its argument. Try this instead:
fs.createReadStream(req.query.facePath).pipe(fs.createWriteStream(path.join(targetPath, "file.jpg")));
You should of course give it a non-hardcoded name; this is just an example. Have a look at the path module for that.
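For instance, a rough sketch that reuses the source file's name via path.basename (assuming req.query.facePath and opts.exportDir from the question):

var fs = require('fs');
var path = require('path');

var targetPath = opts.exportDir; // C:\myFaceApp\dropbox\exprtedFaces
// keep the original file name instead of hard-coding one
var targetFile = path.join(targetPath, path.basename(req.query.facePath));

fs.createReadStream(req.query.facePath).pipe(fs.createWriteStream(targetFile));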
I am trying to save a simple file to a directory other than my root directory, but it seems to error out when using a variable instead of a direct string. Has anyone tried using a variable, or can you see if I am missing something?
Thanks
var procedures = "SomeString";
var moveTo = "C:/SavedFiles";
//////////////////
fs.writeFile(moveTo, procedures, function (err) {
    if (err) {
        return console.log(err);
    }
    console.log("The file was saved!");
});
Am I correct that "C:/SavedFiles" is a directory? If so, then specify a file to write to, like:
var moveTo = "C:/SavedFiles/myfile.txt";
// keep the rest the same
Not sure about the slashes though; I'm not running Windows at the moment.
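If you want to sidestep the slash question entirely, path.join builds the separator for you (a small sketch, reusing the variables from the question):

var fs = require('fs');
var path = require('path');

var procedures = "SomeString";
// join the directory and a file name so the separator is handled for you
var moveTo = path.join("C:/SavedFiles", "myfile.txt");

fs.writeFile(moveTo, procedures, function (err) {
    if (err) {
        return console.log(err);
    }
    console.log("The file was saved!");
});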
So it looks like my error was due to a rogue loop above the code I posted. Sorry I didn't post the rest of the code.
I have a Node.js script that reads the contents of a file, does some transformations on its contents, and logs the output:
var transformer = require('./transformer'),
    fs = require('fs'),
    file = process.argv[2];

if (!file) {
    throw 'no file specified\n';
}

fs.readFile(file, 'utf-8', function (err, data) {
    if (err) {
        throw err;
    }
    transformer.transform(data, function (text) {
        console.log(text);
    });
});
This works fine:
$ node transform.js myfile.txt
And this works:
$ node transform.js myfile.txt > anotherfile.txt
But, when I try to redirect the output to the same file I'm reading from, the file becomes blank:
$ node transform.js myfile.txt > myfile.txt
Same thing using tee:
$ node transform.js myfile.txt | tee myfile.txt
Curiously, this works:
$ node transform.js myfile.txt >> myfile.txt
But I don't want to append to the file - I want to overwrite its contents.
I think the problem is, since fs.readFile is asynchronous, console.log is called asynchronously as well - i.e., it gets chunks of data as opposed to all the data at once. I think I can use fs.readFileSync instead, but what's the right way to handle this?
The issue is not actually within Node but in the shell. When you redirect with >, the first thing the shell does is open the file for writing, emptying the file. Your program goes to read from that empty file and, in your case, empty input means empty output.
This too will result in an empty file regardless of the initial contents of myfile.txt:
$ cat myfile.txt > myfile.txt
One solution would be to write the file inside the Node script rather than using redirection. You're already specifying and reading the input file there, so why not take an output file in argv as well and write to it instead of using shell redirection? Just take care to finish reading the file before you start writing, so that writing back to the same path still works.
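A minimal sketch of that approach, with the output path as a hypothetical third argument (everything else as in the question):

var transformer = require('./transformer'),
    fs = require('fs'),
    file = process.argv[2],
    outFile = process.argv[3]; // hypothetical: output file passed as the third argument

if (!file) {
    throw 'no file specified\n';
}

fs.readFile(file, 'utf-8', function (err, data) {
    if (err) {
        throw err;
    }
    transformer.transform(data, function (text) {
        if (outFile) {
            // the whole input has already been read, so writing back to the same path is safe here
            fs.writeFile(outFile, text, function (err) {
                if (err) {
                    throw err;
                }
            });
        } else {
            console.log(text);
        }
    });
});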
As @slebetman notes in a comment, another solution is cat myfile.txt > tmp; mv tmp myfile.txt (or my preferred: cat myfile.txt > tmp && mv tmp myfile.txt).
The problem is the order of operations:
- the shell opens the file for writing (emptying it) before your program even runs,
- your program then opens that same, now empty, file for reading,
- it reads nothing,
- transforms nothing,
- and writes nothing.
What you want instead is to:
- open the file for reading,
- read and buffer its contents,
- transform them,
- open the file for writing,
- and write the result.
There are a couple of ways to do this:
1) Read the file synchronously. Node.js 0.12 supports this.
var transformer = require('./transformer'),
    fs = require('fs'),
    file = process.argv[2];

if (!file) {
    throw 'no file specified\n';
}

// readFileSync returns the contents directly; there is no callback
var data = fs.readFileSync(file, 'utf-8');

transformer.transform(data, function (text) {
    console.log(text);
});
2) Use "streams"
This is really the best way. Especially if you're wanting to learn Node.js
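As a rough illustration (not from the original answer), a stream-based version might look like the following; the upper-casing transform is only a stand-in for the transformer module from the question, and the output goes to a separate file:

var fs = require('fs'),
    stream = require('stream');

// stand-in transform: upper-cases each chunk as it flows through
var upperCase = new stream.Transform();
upperCase._transform = function (chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
};

fs.createReadStream(process.argv[2])
    .pipe(upperCase)
    .pipe(fs.createWriteStream(process.argv[3]));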
The best way I know to learn about streams is from NodeSchool: http://nodeschool.io/#workshoppers Try the stream-adventure.
By the end, you'll own these kinds of problems.
Good luck!
I am currently building a project with Node.js on Windows. I am using a batch file to assemble resources and build Jade templates via the command line. With Jade, I am using the switch -o to define a JS object that fills localized content in the template.
For a while, everything worked nicely. However, changes to my JSON lookup have resulted in an error:
"The input line is too long"
Researching the error, I found that the Windows shell has a limit on how long a command line can be. Unfortunately, I need the whole lookup object for my project. However, I started wondering if Jade can accept a path to my lookup file instead of a string with the contents of the file. Currently, I'm building the contents into a variable and calling jade with that, like so:
SetLocal EnableDelayedExpansion
set content=
for /F "delims=" %%i in (%sourcedir%\assets\english.json) do set content=!content! %%i
::use the json file as a key for assembling the jade templates
call jade %sourcedir% --out %destdir% -o"%content%"
EndLocal
If I could use a path to the lookup file, it would be much easier. However, I am unsure how to do that (if it's even possible), and Jade's documentation is a bit lacking.
So, in short, is it possible for Jade to accept a filepath to a JS object rather than a string containing the object? Is there a better way to construct the jade call that won't push it past the limit?
Write a Node.js script that reads your "assets" and calls Jade. Something like:
var fs = require('fs'),
    path = require('path'),
    jade = require('jade'),
    _ = require('underscore'),
    async = require('async');

var sourceDir = 'path to the directory with your jade templates',
    destinationDir = 'path to the directory where you want the result html files to be contained in';

async.waterfall([
    async.parallel.bind(null, {
        serializedData: fs.readFile.bind(null, 'assets/english.json', 'utf8'),
        files: fs.readdir.bind(null, sourceDir)
    }),
    function (result, callback) {
        var data = JSON.parse(result.serializedData),
            files = result.files;
        async.parallel(_.map(files, function (file) {
            return async.waterfall.bind(null, [
                fs.readFile.bind(null, path.join(sourceDir, file), 'utf8'),
                function (jadeSource, callback) {
                    // pass null as the error argument so the rendered html reaches the next step
                    process.nextTick(callback.bind(null, null, jade.compile(jadeSource)(data)));
                },
                fs.writeFile.bind(null, path.join(destinationDir, file))
            ]);
        }), callback);
    }
], function (err) {
    if (err) {
        console.log("An error occurred: " + err);
    } else {
        console.log("Done!");
    }
});
Then, in your batch file, call this script directly instead of enumerating the directory and calling jade manually.
It will not only solve your problem, but also work much faster because:
I/O operations are done in parallel;
Node.js is only started once during the build process, as opposed to starting it for every single file as you do now.