I am currently building a project with Node.js on Windows. I am using a batch file to assemble resources and build Jade templates via the command line. With Jade, I am using the -o switch to define a JS object that fills localized content into the template.
For a while, everything worked nicely. However, changes to my JSON lookup have resulted in an error:
"The input line is too long"
Researching the error, I found that the Windows shell has a limit on how long a command line can be. Unfortunately, I need the whole lookup object for my project. However, I started wondering whether Jade can accept a path to my lookup file instead of a string with the contents of the file. Currently, I'm building the contents into a variable and calling Jade with that, like so:
SetLocal EnableDelayedExpansion
set content=
for /F "delims=" %%i in (%sourcedir%\assets\english.json) do set content=!content! %%i
::use the json file as a key for assembling the jade templates
call jade %sourcedir% --out %destdir% -o"%content%"
EndLocal
If I could use a path to the lookup file, it would be much easier. However, I am unsure how to do that (if it's even possible), and Jade's documentation is a bit lacking.
So, in short, is it possible for Jade to accept a filepath to a JS object rather than a string containing the object? Is there a better way to construct the Jade call that won't push it past the limit?
Write a Node.js script that will read your assets and call Jade for you. Something like:
var fs = require('fs'),
    jade = require('jade'),
    _ = require('underscore'),
    async = require('async');

var sourceDir = 'path to the directory with your jade templates',
    destinationDir = 'path to the directory where you want the resulting html files to be placed';

async.waterfall([
    // read the lookup JSON and list the template files in parallel
    async.parallel.bind(null, {
        serializedData: fs.readFile.bind(null, 'assets/english.json', 'utf8'),
        files: fs.readdir.bind(null, sourceDir)
    }),
    function (result, callback) {
        var data = JSON.parse(result.serializedData),
            files = result.files;
        // render every template in parallel with the shared lookup data
        async.parallel(_.map(files, function (file) {
            return async.waterfall.bind(null, [
                fs.readFile.bind(null, sourceDir + file, 'utf8'),
                function (jadeSource, callback) {
                    // compile and render; the extra null is the "no error" argument
                    process.nextTick(callback.bind(null, null, jade.compile(jadeSource)(data)));
                },
                fs.writeFile.bind(null, destinationDir + file)
            ]);
        }), callback);
    }
], function (err) {
    if (err) {
        console.log("An error occurred: " + err);
    } else {
        console.log("Done!");
    }
});
Then, in your batch file, call this script directly instead of enumerating the directory and calling Jade manually.
It will not only solve your problem, but also work much faster because:
I/O operations are done in parallel;
Node.js is only started once during the build process, as opposed to starting it for every single file as you do now.
I'm building a script that reads log files, handles what needs to be handled, then writes the results to a database.
Some caveats:
Some log files have a lot of input, multiple times a second
Some log files have little to no input at all
What I'm trying to do, in simple words:
Read the first line of a file, then delete that line to go to the next one; while I handle the first line, other lines could be added.
Issues I'm facing:
When I try reading a file, then processing it, then deleting it, some lines have been added in the meantime.
When the app crashes while handling multiple lines at once, for any reason, I can't know which lines have been processed.
Tried so far
fs.readdir('logs/', (err, filenames) => {
  filenames.forEach((filename) => {
    fs.readFile('logs/' + filename, 'utf-8', (err, content) => {
      // processing all new lines (can take multiple ms)
      // deleting the file afterwards (fs.unlink requires a callback)
      fs.unlink('logs/' + filename, (err) => {});
    });
  });
});
Is there not a (native or not) method to 'take' first line(s), or take all lines, from a file at once?
Something similar to what the Array.shift() method does to arrays..
Why are you reading the whole file at once? Instead, you can use Node.js streams.
https://nodejs.org/api/fs.html#fs_class_fs_readstream
This will read the file and output it to the console:
var fs = require('fs');

// stream the file instead of loading it into memory all at once
var readStream = fs.createReadStream('myfile.txt');
readStream.pipe(process.stdout);
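If you need the content line by line rather than in raw chunks, you can wrap the stream with Node's built-in readline module (a minimal sketch; the file name is just an example):

var fs = require('fs');
var readline = require('readline');

var rl = readline.createInterface({
    input: fs.createReadStream('myfile.txt')
});

rl.on('line', function (line) {
    // handle one line at a time here
    console.log('Line: ' + line);
});

rl.on('close', function () {
    console.log('Done reading the file.');
});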
You can also go for the npm package node-tail to read the content of a file while new content is being written to it.
https://github.com/lucagrulla/node-tail
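A minimal usage sketch (the file name here is just an example; see the node-tail README for the full API):

var Tail = require('tail').Tail;

// follow a log file as new lines are appended to it
var tail = new Tail('logs/app.log');

tail.on('line', function (data) {
    // handle each newly appended line here
    console.log(data);
});

tail.on('error', function (error) {
    console.log('ERROR: ', error);
});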
If your log files are written as rotating logs (for example, one file per hour: 9AM.log, 10AM.log, ...), then when you process the log files you can skip the current file and process the others. For example: if it is now 10:30 AM, skip 10AM.log and process the remaining files, as in the sketch below.
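A rough sketch of that idea (the hour-based file naming and the helper below are assumptions; adjust them to your rotation scheme):

var fs = require('fs');

// hypothetical helper: builds a "10AM.log" style name for the hour still being written
function currentLogName(date) {
    var hour = date.getHours() % 12 || 12;
    var suffix = date.getHours() < 12 ? 'AM' : 'PM';
    return hour + suffix + '.log';
}

fs.readdir('logs/', function (err, filenames) {
    if (err) throw err;
    var active = currentLogName(new Date());
    filenames
        .filter(function (name) { return name !== active; }) // skip the file still being appended to
        .forEach(function (name) {
            // process 'logs/' + name and delete it, as in the question's code
        });
});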
I'm starting to use Pug templating.
I've got a directory with some markdown files that I'd like to turn into HTML pages:
legal/
privacy-policy.md
refund-policy.md
terms-of-service.md
So far, I've thought to place a .pug file alongside each markdown file:
legal/
privacy-policy.md
+ privacy-policy.pug
refund-policy.md
+ refund-policy.pug
terms-of-service.md
+ terms-of-service.pug
The .pug files are all very similar, "boilerplatey" -- they extend the same template:
extends ../layout.pug
block lead
title Privacy Policy
block content
.content
include:markdown-it(linkify) privacy-policy.md
This works for now, but it clearly doesn't scale.
How can I do better than this? What's the best way to iterate the .pug boilerplate over each markdown file in the directory?
I'd move all the code to do this back into the route (i.e. into node.js itself), where you have access to the fs file system library.
To list all the documents, do something like this:
router.get('/', function (req, res) {
    var folder = 'legal';
    var fs = require('fs');
    var fileList = [];
    fs.readdir(folder, function (err, files) {
        files.forEach(function (file) {
            fileList.push(file);
        });
        // render only after the directory listing has been read
        res.render('directory', { "files": fileList });
    });
});
Then, to read a file in, use a URL parameter:
router.get('/:fileName', function (req, res) {
    var fileName = 'legal/' + req.params.fileName + '.md';
    var fs = require('fs');
    fs.readFile(fileName, 'utf8', function (err, data) {
        if (err) throw err;
        res.render('contract', { "title": req.params.fileName, "text": data });
    });
});
In your Pug template for the contracts, just output the variable using unescaped interpolation:
!{text}
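Note that !{text} inserts the variable unescaped, exactly as passed in. If you want the markdown rendered to HTML first, you could run it through markdown-it (which the question already uses) before rendering. A sketch, assuming markdown-it is installed and slotted into the readFile callback of the route above:

var md = require('markdown-it')({ linkify: true });

fs.readFile(fileName, 'utf8', function (err, data) {
    if (err) throw err;
    // render the markdown to HTML so !{text} outputs real markup
    res.render('contract', { "title": req.params.fileName, "text": md.render(data) });
});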
You could also get fancy with the title using something like SugarJS's titleize, which will dynamically create a proper title from the file name.
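A hypothetical sketch of that, assuming the Sugar npm package and its documented titleize behavior:

var Sugar = require('sugar');

// assumed behavior: "privacy-policy" -> "Privacy Policy"
var title = Sugar.String.titleize('privacy-policy');
console.log(title);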
With Pug alone, this is not feasible. At least, it didn't work out for me.
The easiest way to use Pug to convert a directory of Markdown files into HTML pages is to connect the [MdPugToHtml](https://www.npmjs.com/package/md-pug-to-html) converter to your project. It converts Markdown to HTML in bulk. You can use your own Pug templates, or run it without them.
The converter has various settings; it can be used from the CLI, and it also has an API for use in applications.
There is detailed documentation on the MdPugToHtml converter in English and Russian.
The converter can also be used independently. You can manually convert entire directories with Markdown files. The conversion is performed in the terminal with just one command:
npx md-pug-to-html /home/content
where:
npx is an npm command that installs md-pug-to-html on the first run and then launches the converter.
/home/content is the directory with your Markdown files; yours may be different.
I am very new to Node server/JavaScript development, so I am sorry if this might be a stupid question.
I intended to create a very simple solution: open a JSON file, load it into a list, and save it back to my local disk (running a Node.js server).
Could you please help me figure out what I am doing wrong? I am running the app in the browser using React.
index.js contains:
var fs = require('fs');
var fileName = './test.json';
var file = require('./test.json');
alert(file.name + " " + file.age);
file.name = "Peter";
alert(file.name + " " + file.age);
fs.writeFile('./test.json', JSON.stringify(file), function (err) {
if (err) return alert(err);
console.log(JSON.stringify(file));
alert('writing to ' + fileName);
});
Before, I was not even able to open the JSON file. I needed to include this property in the webpack config file:
node: {
fs: 'empty'
}
Now I am able to open the JSON file and change it in memory, but I am unable to save it.
In Chrome developer tools, it prints "fs.writeFile is not a function" to the console.
Thank you very much.
When you included the property in your webpack config
node: {
fs: 'empty'
}
you told webpack that the module fs should just be an empty object. You can confirm this by simply putting a console.log(fs) in your file to see that it is indeed empty.
Beyond that, fs is not going to work in your browser. fs expects the node.js runtime (which includes non-JavaScript things in order to make it work), not your browser's runtime.
If you want a user to save a file, you'll have to use a browser based saving solution. You won't be able to just arbitrarily write files like that outside of something like your browser's local storage.
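If letting the user download the edited JSON is enough, one common approach (a sketch using standard browser APIs, not part of the original answer) is to build a Blob and trigger a download link:

function saveJsonInBrowser(obj, suggestedName) {
    // serialize the object and wrap it in a Blob the browser can download
    var blob = new Blob([JSON.stringify(obj, null, 2)], { type: 'application/json' });
    var url = URL.createObjectURL(blob);

    var link = document.createElement('a');
    link.href = url;
    link.download = suggestedName; // e.g. 'test.json'
    document.body.appendChild(link);
    link.click();

    document.body.removeChild(link);
    URL.revokeObjectURL(url);
}

// usage (the data here is just an example): saveJsonInBrowser({ name: 'Peter' }, 'test.json');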
I have a Node.js script that reads the contents of a file, does some transformations on its contents, and logs the output:
var transformer = require('./transformer'),
fs = require('fs'),
file = process.argv[2];
if (!file) {
throw 'no file specified\n';
}
fs.readFile(file, 'utf-8', function (err, data) {
if (err) {
throw err;
}
transformer.transform(data, function (text) {
console.log(text);
});
});
This works fine:
$ node transform.js myfile.txt
And this works:
$ node transform.js myfile.txt > anotherfile.txt
But, when I try to redirect the output to the same file I'm reading from, the file becomes blank:
$ node transform.js myfile.txt > myfile.txt
Same thing using tee:
$ node transform.js myfile.txt | tee myfile.txt
Curiously, this works:
$ node transform.js myfile.txt >> myfile.txt
But I don't want to append to the file - I want to overwrite its contents.
I think the problem is, since fs.readFile is asynchronous, console.log is called asynchronously as well - i.e., it gets chunks of data as opposed to all the data at once. I think I can use fs.readFileSync instead, but what's the right way to handle this?
The issue is not actually within Node but in the shell. When you redirect with >, the first thing the shell does is open the file for writing, emptying the file. Your program goes to read from that empty file and, in your case, empty input means empty output.
This too will result in an empty file regardless of the initial contents of myfile.txt:
$ cat myfile.txt > myfile.txt
One solution would be to write the file inside the Node script rather than using redirection. You're already specifying and reading the file there, so why not specify an output file in argv as well and write to it rather than using shell redirection? Just take care to structure your code so that reading and writing to the same file works.
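A minimal sketch of that approach, reusing the code from the question (the extra output-path argument is an assumption):

var transformer = require('./transformer'),
    fs = require('fs'),
    file = process.argv[2],
    outFile = process.argv[3] || file; // hypothetical: default to overwriting the input

if (!file) {
    throw 'no file specified\n';
}

fs.readFile(file, 'utf-8', function (err, data) {
    if (err) {
        throw err;
    }
    transformer.transform(data, function (text) {
        // the input has already been read completely, so overwriting it is safe now
        fs.writeFile(outFile, text, function (err) {
            if (err) {
                throw err;
            }
        });
    });
});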
As @slebetman notes in a comment, another solution is cat myfile.txt > tmp; mv tmp myfile.txt (or my preferred: cat myfile.txt > tmp && mv tmp myfile.txt).
The problem is:
you're opening the file for read,
then opening the file for write (emptying it),
then reading from an empty file,
transforming nothing,
and writing nothing.
What I think you want instead is to:
open for read
read and buffer
transform
open for write
write
There are a couple of ways to do this:
1) Read the file synchronously; Node.js supports this with fs.readFileSync.
var transformer = require('./transformer'),
    fs = require('fs'),
    file = process.argv[2];

if (!file) {
    throw 'no file specified\n';
}

// readFileSync returns the file contents directly; there is no callback
var data = fs.readFileSync(file, 'utf-8');

transformer.transform(data, function (text) {
    console.log(text);
});
2) Use "streams"
This is really the best way, especially if you want to learn Node.js.
The best way I know to learn about streams is from NodeSchool: http://nodeschool.io/#workshoppers. Try the stream-adventure workshop.
By the end, you'll own these kinds of problems.
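For reference, a minimal sketch of what a streaming version could look like (the transform here is a stand-in for your transformer; it writes to a temporary file first so the source is never truncated mid-read):

var fs = require('fs'),
    stream = require('stream'),
    file = process.argv[2],
    tmp = file + '.tmp';

// stand-in transformation; replace with your own logic
var upperCase = new stream.Transform({
    transform: function (chunk, encoding, callback) {
        callback(null, chunk.toString().toUpperCase());
    }
});

fs.createReadStream(file)
    .pipe(upperCase)
    .pipe(fs.createWriteStream(tmp))
    .on('finish', function () {
        // only replace the original once the new contents are fully written
        fs.rename(tmp, file, function (err) {
            if (err) throw err;
        });
    });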
Good luck!
I'm building a web app using Node.js that, at the very least, should allow users to upload Excel spreadsheets (.xlsx). Then, using an Excel parser (currently node-xlsx - https://www.npmjs.org/package/node-xlsx), I want to be able to find this file, parse it, and print its contents to the console. So far, I have the file uploaded and stored, but I am having trouble specifying the file path my app should search.
I believe my trouble is that I am trying to do this on the server side, and I am telling my app to search through a user's directory for this file when it does not have access.
Here is example code:
var fullfile;
app.post('/upload', function (request, response) {
var fstream;
request.pipe(request.busboy);
request.busboy.on('file', function (fieldname, file, filename) {
console.log('Uploading: ' + filename);
fstream = fs.createWriteStream('./storedFiles/' + filename);
file.pipe(fstream);
fstream.on('close', function () {
response.redirect('success');
console.log('Uploaded to ' + fstream.path);
fullfile=fstream.name;
var obj = xlsx.parse(__dirname + fullfile);
console.log(obj);
});
});
});
This produces the error:
return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode)
Error: ENOENT, no such file or directory 'C:\Users\(file path on my local machine here)'
Can anyone point out a way of doing this that I am missing? It has to do with the fs methods, I feel.
Thank you
First of all, don't use the filename that the user provided when saving the file: you will get duplicates, and it could be a security risk (in general, never trust user-provided data). Just use your own value instead; it is more standard to use your own naming convention to prevent duplicates, or to use a tmp file provided by the OS.
To solve your issue, try:
Requiring path at the top of your file:
var path = require('path');
and changing the value of fullfile to:
fullfile = path.join(__dirname, fstream.path);
then pass fullfile to xlsx.parse:
var obj = xlsx.parse(fullfile);
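Putting the three changes together inside the question's close handler (a sketch; everything apart from the path handling is taken from the question's code):

var path = require('path');

// ... inside request.busboy.on('file', ...)
fstream.on('close', function () {
    console.log('Uploaded to ' + fstream.path);
    var fullfile = path.join(__dirname, fstream.path); // absolute path to the stored upload
    var obj = xlsx.parse(fullfile);
    console.log(obj);
    response.redirect('success');
});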