I need to do some parsing of large (5-10 GB) logfiles in JavaScript/Node.js (I'm using Cube).
A log line looks something like:
10:00:43.343423 I'm a friendly log message. There are 5 cats, and 7 dogs. We are in state "SUCCESS".
We need to read each line, do some parsing (e.g. strip out 5, 7 and SUCCESS), then pump this data into Cube (https://github.com/square/cube) using their JS client.
Firstly, what is the canonical way in Node to read in a file, line by line?
It seems to be a fairly common question online:
http://www.quora.com/What-is-the-best-way-to-read-a-file-line-by-line-in-node-js
Read a file one line at a time in node.js?
A lot of the answers seem to point to a bunch of third-party modules:
https://github.com/nickewing/line-reader
https://github.com/jahewson/node-byline
https://github.com/pkrumins/node-lazy
https://github.com/Gagle/Node-BufferedReader
However, this seems like a fairly basic task - surely there's a simple way within the stdlib to read in a text file, line by line?
Secondly, I then need to process each line (e.g. convert the timestamp into a Date object, and extract useful fields).
What's the best way to do this, maximising throughput? Is there some way that won't block on either reading in each line, or on sending it to Cube?
Thirdly - I'm guessing that using string splits and the JS equivalent of contains (indexOf !== -1?) will be a lot faster than regexes? Has anybody had much experience parsing massive amounts of text data in Node.js?
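For illustration, parsing a line of the format shown above with plain string operations (no regexes) could look roughly like the sketch below; parseLogLine and the extracted field names are made up:

// Rough sketch: parse one log line with string methods only.
// parseLogLine and the field names are made up for illustration.
function parseLogLine(line) {
    var spaceIdx = line.indexOf(' ');
    var timePart = line.substring(0, spaceIdx);          // "10:00:43.343423"
    var rest = line.substring(spaceIdx + 1);

    // Build a Date for today at that time (microseconds rounded to milliseconds)
    var hms = timePart.split(':');                       // ["10", "00", "43.343423"]
    var sec = hms[2].split('.');                         // ["43", "343423"]
    var ts = new Date();
    ts.setHours(+hms[0], +hms[1], +sec[0], Math.round(+sec[1] / 1000));

    // "contains" via indexOf, numbers via parseInt on a substring
    var success = rest.indexOf('"SUCCESS"') !== -1;
    var cats = parseInt(rest.substring(rest.indexOf('There are ') + 'There are '.length), 10);

    return { timestamp: ts, success: success, cats: cats };
}

console.log(parseLogLine('10:00:43.343423 I\'m a friendly log message. ' +
    'There are 5 cats, and 7 dogs. We are in state "SUCCESS".'));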
I searched for a solution to parse very large files (GBs) line by line using a stream. All the third-party libraries and examples did not suit my needs, since they either did not process the files line by line (1, 2, 3, 4, ...) or read the entire file into memory.
The following solution can parse very large files, line by line, using stream and pipe. For testing I used a 2.1 GB file with 17,000,000 records. RAM usage did not exceed 60 MB.
First, install the event-stream package:
npm install event-stream
Then:
var fs = require('fs')
    , es = require('event-stream');

var lineNr = 0;

var s = fs.createReadStream('very-large-file.csv')
    .pipe(es.split())
    .pipe(es.mapSync(function(line){

        // pause the readstream
        s.pause();

        lineNr += 1;

        // process line here and call s.resume() when ready
        // function below was for logging memory usage
        logMemoryUsage(lineNr);

        // resume the readstream, possibly from a callback
        s.resume();
    })
    .on('error', function(err){
        console.log('Error while reading file.', err);
    })
    .on('end', function(){
        console.log('Read entire file.');
    })
);
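If the per-line work is itself asynchronous (a DB insert, an HTTP call to Cube, ...), the pause/resume pattern hinted at in the comments looks roughly like this; processLine is a placeholder for your own async call:

var fs = require('fs');
var es = require('event-stream');

// processLine stands in for whatever async work you do per line
// (parsing plus sending an event to Cube, for example).
function processLine(line, done) {
    setImmediate(done);            // placeholder for a real async call
}

var s = fs.createReadStream('very-large-file.csv')
    .pipe(es.split())
    .pipe(es.mapSync(function (line) {
        s.pause();                 // stop reading while the async work runs
        processLine(line, function () {
            s.resume();            // pull the next line only when this one is done
        });
    }));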
Please let me know how it goes!
You can use the built-in readline module (see the docs). I use stream to create a new output stream.
var fs = require('fs'),
    readline = require('readline'),
    stream = require('stream');

var instream = fs.createReadStream('/path/to/file');
var outstream = new stream;
outstream.readable = true;
outstream.writable = true;

var rl = readline.createInterface({
    input: instream,
    output: outstream,
    terminal: false
});

rl.on('line', function(line) {
    console.log(line);
    // Do your stuff ...
    // Then write to the output stream
    rl.write(line);
});
Large files will take some time to process. Do tell if it works.
I really liked @gerard's answer, which actually deserves to be the accepted answer here. I made some improvements:
The code is in a class (modular).
Parsing is included.
The ability to resume is exposed to the outside, in case an asynchronous job is chained to reading the CSV, like inserting into a DB or making an HTTP request.
Reading happens in chunks/batch sizes that the user can declare. I took care of encoding in the stream too, in case you have files in a different encoding.
Here's the code:
'use strict'

const fs = require('fs'),
    util = require('util'),
    stream = require('stream'),
    es = require('event-stream'),
    parse = require("csv-parse"),
    iconv = require('iconv-lite');

class CSVReader {
  constructor(filename, batchSize, columns) {
    this.reader = fs.createReadStream(filename).pipe(iconv.decodeStream('utf8'))
    this.batchSize = batchSize || 1000
    this.lineNumber = 0
    this.data = []
    this.parseOptions = {delimiter: '\t', columns: true, escape: '/', relax: true}
  }

  read(callback) {
    this.reader
      .pipe(es.split())
      .pipe(es.mapSync(line => {
        ++this.lineNumber

        parse(line, this.parseOptions, (err, d) => {
          this.data.push(d[0])
        })

        if (this.lineNumber % this.batchSize === 0) {
          this.reader.pause() // wait for the consumer to call continue()
          callback(this.data)
        }
      })
      .on('error', function(){
        console.log('Error while reading file.')
      })
      .on('end', function(){
        console.log('Read entire file.')
      }))
  }

  continue () {
    this.data = []
    this.reader.resume()
  }
}

module.exports = CSVReader
So basically, here is how you will use it:
let reader = new CSVReader('path_to_file.csv')
reader.read(() => reader.continue())
I tested this with a 35 GB CSV file and it worked for me, which is why I chose to build on @gerard's answer. Feedback is welcome.
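If there is a real asynchronous step between batches (which is what continue() is there for), the flow might look like the sketch below; the module path and saveBatchToDb are made-up names:

// Sketch only: './csv-reader' and saveBatchToDb are hypothetical.
const CSVReader = require('./csv-reader');

// stand-in for your own async work (DB insert, HTTP request, ...)
function saveBatchToDb(rows, done) {
    setImmediate(done);
}

const reader = new CSVReader('path_to_file.csv', 500);
reader.read(batch => {
    // the reader is paused here; resume only once the batch has been stored
    saveBatchToDb(batch, () => reader.continue());
});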
I used https://www.npmjs.com/package/line-by-line for reading more than 1,000,000 lines from a text file. In this case, RAM usage was about 50-60 MB.
const LineByLineReader = require('line-by-line'),
    lr = new LineByLineReader('big_file.txt');

lr.on('error', function (err) {
    // 'err' contains error object
});

lr.on('line', function (line) {
    // pause emitting of lines...
    lr.pause();

    // ...do your asynchronous line processing...
    setTimeout(function () {
        // ...and continue emitting lines.
        lr.resume();
    }, 100);
});

lr.on('end', function () {
    // All lines are read, file is closed now.
});
The Node.js documentation offers a very elegant example using the readline module.
Example: Read File Stream Line-by-Line
const { once } = require('node:events');
const fs = require('fs');
const readline = require('readline');

(async function processLineByLine() {
    const rl = readline.createInterface({
        input: fs.createReadStream('sample.txt'),
        crlfDelay: Infinity
    });

    rl.on('line', (line) => {
        console.log(`Line from file: ${line}`);
    });

    await once(rl, 'close');
})();
Note: we use the crlfDelay option to recognize all instances of CR LF ('\r\n') as a single line break.
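The same readline documentation also shows an async-iteration form, which can read more naturally on Node versions that support for await...of (a sketch along those lines):

const fs = require('fs');
const readline = require('readline');

async function processLineByLine() {
    const rl = readline.createInterface({
        input: fs.createReadStream('sample.txt'),
        crlfDelay: Infinity
    });

    // The readline interface is async-iterable; each iteration yields one line.
    for await (const line of rl) {
        console.log(`Line from file: ${line}`);
    }
}

processLineByLine();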
Apart from reading the big file line by line, you can also read it chunk by chunk. For more, refer to this article.
var fs = require('fs');

var offset = 0;
var chunkSize = 2048;
var chunkBuffer = Buffer.alloc(chunkSize);
var fp = fs.openSync('filepath', 'r');
var bytesRead = 0;
var lines = [];

while ((bytesRead = fs.readSync(fp, chunkBuffer, 0, chunkSize, offset)) !== 0) {
    offset += bytesRead;
    var str = chunkBuffer.slice(0, bytesRead).toString();
    var arr = str.split('\n');

    if (bytesRead === chunkSize) {
        // the last item of arr may not be a full line, leave it to the next chunk
        offset -= arr.pop().length;
    }
    lines.push(arr);
}
console.log(lines);
I had the same problem. After comparing several modules that seemed to have this feature, I decided to do it myself; it's simpler than I thought.
gist: https://gist.github.com/deemstone/8279565
var fetchBlock = lineByline(filepath, onEnd);
fetchBlock(function(lines, start){ ... }); // lines {array}, start {int}: line number of lines[0]
It keeps the opened file in a closure; the fetchBlock() function it returns fetches a block from the file and splits it into an array of lines (it deals with the segment left over from the last fetch).
I've set the block size to 1024 for each read operation. This may have bugs, but the code logic is obvious, so try it yourself.
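For reference, the core idea of such a block reader is roughly the following (a simplified sketch, not the gist itself; makeBlockReader is a made-up name):

// Simplified sketch: read a fixed-size block, split it into lines,
// and carry the trailing partial line over to the next read.
var fs = require('fs');

function makeBlockReader(filepath, blockSize) {
    var fd = fs.openSync(filepath, 'r');
    var buffer = Buffer.alloc(blockSize || 1024);
    var leftover = '';
    var done = false;

    return function fetchBlock() {
        if (done) return null;
        var bytesRead = fs.readSync(fd, buffer, 0, buffer.length, null);
        if (bytesRead === 0) {
            fs.closeSync(fd);
            done = true;
            return leftover.length ? [leftover] : null;   // flush the last partial line
        }
        var chunk = leftover + buffer.toString('utf8', 0, bytesRead);
        var lines = chunk.split('\n');
        leftover = lines.pop();                           // incomplete tail for next time
        return lines;
    };
}

var fetchBlock = makeBlockReader('big_file.txt', 1024);
var lines;
while ((lines = fetchBlock()) !== null) {
    // process lines here
}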
Reading/writing files using streams with the native Node.js modules (fs, readline):
const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
    input: fs.createReadStream('input.json'),
    output: fs.createWriteStream('output.json')
});

rl.on('line', function(line) {
    console.log(line);

    // Do any 'line' processing if you want and then write to the output file
    this.output.write(`${line}\n`);
});

rl.on('close', function() {
    console.log(`Created "${this.output.path}"`);
});
Based on this question's answers I implemented a class you can use to read a file synchronously, line by line, with fs.readSync(). You can make it "pause" and "resume" by using a Q promise (jQuery seems to require a DOM, so you can't run it with Node.js):
var fs = require('fs');
var Q = require('q');

var lr = new LineReader(filenameToLoad);
lr.open();

var promise;
workOnLine = function () {
    var line = lr.readNextLine();
    promise = complexLineTransformation(line).then(
        function() { console.log('ok'); workOnLine(); },
        function() { console.log('error'); }
    );
}
workOnLine();

complexLineTransformation = function (line) {
    var deferred = Q.defer();
    // ... async call goes here, in callback: deferred.resolve('done ok'); or deferred.reject(new Error(error));
    return deferred.promise;
}

function LineReader (filename) {
    this.moreLinesAvailable = true;
    this.fd = undefined;
    this.bufferSize = 1024 * 1024;
    this.buffer = Buffer.alloc(this.bufferSize);
    this.leftOver = '';

    this.read = undefined;
    this.idxStart = undefined;
    this.idx = undefined;

    this.lineNumber = 0;

    this._bundleOfLines = [];

    this.open = function() {
        this.fd = fs.openSync(filename, 'r');
    };

    this.readNextLine = function () {
        if (this._bundleOfLines.length === 0) {
            this._readNextBundleOfLines();
        }
        this.lineNumber++;
        var lineToReturn = this._bundleOfLines[0];
        this._bundleOfLines.splice(0, 1); // remove first element (pos, howmany)
        return lineToReturn;
    };

    this.getLineNumber = function() {
        return this.lineNumber;
    };

    this._readNextBundleOfLines = function() {
        var line = "";
        while ((this.read = fs.readSync(this.fd, this.buffer, 0, this.bufferSize, null)) !== 0) { // read next bytes until end of file
            this.leftOver += this.buffer.toString('utf8', 0, this.read); // append to leftOver
            this.idxStart = 0;
            while ((this.idx = this.leftOver.indexOf("\n", this.idxStart)) !== -1) { // as long as there is a newline char in leftOver
                line = this.leftOver.substring(this.idxStart, this.idx);
                this._bundleOfLines.push(line);
                this.idxStart = this.idx + 1;
            }
            this.leftOver = this.leftOver.substring(this.idxStart);
            if (line !== "") {
                break;
            }
        }
    };
}
node-byline uses streams, so I would prefer that one for your huge files.
For your date conversions I would use moment.js.
For maximising your throughput you could think about using a software cluster. There are some nice modules which wrap the node-native cluster module quite well. I like cluster-master from isaacs. For example, you could create a cluster of x workers which all compute a file.
For benchmarking splits vs. regexes, use benchmark.js. I haven't tested it so far. benchmark.js is available as a node module.
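For what it's worth, such a comparison with benchmark.js could look roughly like this sketch (using the log line from the question; I have not measured these myself either):

var Benchmark = require('benchmark');

var suite = new Benchmark.Suite();
var line = '10:00:43.343423 I\'m a friendly log message. ' +
    'There are 5 cats, and 7 dogs. We are in state "SUCCESS".';

suite
    .add('indexOf', function () {
        return line.indexOf('"SUCCESS"') !== -1;
    })
    .add('regex', function () {
        return /"SUCCESS"/.test(line);
    })
    .on('cycle', function (event) {
        console.log(String(event.target));   // ops/sec for each case
    })
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run();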
import * as csv from 'fast-csv';
import * as fs from 'fs';

interface Row {
    [s: string]: string;
}

type RowCallBack = (data: Row, index: number) => object;

export class CSVReader {
    protected file: string;

    protected csvOptions = {
        delimiter: ',',
        headers: true,
        ignoreEmpty: true,
        trim: true
    };

    constructor(file: string, csvOptions = {}) {
        if (!fs.existsSync(file)) {
            throw new Error(`File ${file} not found.`);
        }
        this.file = file;
        this.csvOptions = Object.assign({}, this.csvOptions, csvOptions);
    }

    public read(callback: RowCallBack): Promise<Array<object>> {
        return new Promise<Array<object>>(resolve => {
            const readStream = fs.createReadStream(this.file);
            const results: Array<any> = [];
            let index = 0;
            const csvStream = csv.parse(this.csvOptions).on('data', async (data: Row) => {
                index++;
                results.push(await callback(data, index));
            }).on('error', (err: Error) => {
                console.error(err.message);
                throw err;
            }).on('end', () => {
                resolve(results);
            });
            readStream.pipe(csvStream);
        });
    }
}
import { CSVReader } from '../src/helpers/CSVReader';

(async () => {
    const reader = new CSVReader('./database/migrations/csv/users.csv');
    const users = await reader.read(async data => {
        return {
            username: data.username,
            name: data.name,
            email: data.email,
            cellPhone: data.cell_phone,
            homePhone: data.home_phone,
            roleId: data.role_id,
            description: data.description,
            state: data.state,
        };
    });
    console.log(users);
})();
I have made a node module to read large files (text or JSON) asynchronously.
Tested on large files.
var fs = require('fs')
    , util = require('util')
    , stream = require('stream')
    , es = require('event-stream');

module.exports = FileReader;

function FileReader(){

}

FileReader.prototype.read = function(pathToFile, callback){
    var returnTxt = '';
    var s = fs.createReadStream(pathToFile)
        .pipe(es.split())
        .pipe(es.mapSync(function(line){

            // pause the readstream
            s.pause();

            //console.log('reading line: '+line);
            returnTxt += line;

            // resume the readstream, possibly from a callback
            s.resume();
        })
        .on('error', function(){
            console.log('Error while reading file.');
        })
        .on('end', function(){
            console.log('Read entire file.');
            callback(returnTxt);
        })
    );
};

FileReader.prototype.readJSON = function(pathToFile, callback){
    this.read(pathToFile, function(txt){
        try {
            callback(JSON.parse(txt));
        } catch(err){
            // JSON.parse must be wrapped here, inside the callback, or the error escapes the caller's try/catch
            throw new Error('json file is not valid! ' + err.stack);
        }
    });
};
Just save the file as file-reader.js, and use it like this:
var FileReader = require('./file-reader');
var fileReader = new FileReader();
fileReader.readJSON(__dirname + '/largeFile.json', function(jsonObj){/*callback logic here*/});
I have two pieces of code: one for reading a shared file using smb2:
// load the library
var SMB2 = require('smb2');
// create an SMB2 instance
var smb2Client = new SMB2({
    share: '\\\\192.168.0.111\\folder'
  , domain: 'WORKGROUP'
  , username: 'username'
  , password: 'password'
});

// read the file
smb2Client.readFile('path\\to\\the\\file.txt', "utf-8", function(error, data){
    if(error) throw error;
    console.log(data);
});
And the other one for reading the last 20 lines of a local file using read-last-line:
// load the library
const readLastLine = require('read-last-line');
// read the file
readLastLine.read('path\\to\\the\\file.txt', 20).then(function (lines) {
console.log(lines)
}).catch(function (err) {
console.log(err.message);
});
I don't know how to combine the two of them. Do you have any suggestions?
Thanks.
If smb2 and read-last-line both supported streams, it would be easy. According to their documentation at least, neither does, but pv-node-smb2 has a method createReadStream.
Here is a stream transformer that processes its input line by line, but keeps only the most recent n lines and outputs them at the end. This avoids keeping huge files entirely in memory:
const stream = require('stream');
const readline = require('readline');

class Tail extends stream.Transform {
    constructor(n) {
        super();
        this.input = new stream.PassThrough();
        this.tail = [];
        readline.createInterface({input: this.input, crlfDelay: Infinity})
            .on("line", function(line) {
                if (this.tail.length === n) this.tail.shift();
                this.tail.push(line);
            }.bind(this));
    }
    _transform(chunk, encoding, callback) {
        this.input.write(chunk);
        callback();
    }
    _flush(callback) {
        callback(null, this.tail.join("\n"));
    }
}
(Likely, there is already an NPM package for that.)
Now you can pipe the SMB2 read stream through that transformer:
new require("pv-node-smb2").PvNodeSmb2(...).createReadStream(...)
.pipe(new Tail(20))
.pipe(process.stdout);
I couldn't install the pv-node-smb2 package (it gave me too many errors), so I tried a PowerShell command that works like the tail command in Linux:
Get-Content \\192.168.0.111\path\to\the\file.txt -Tail 20
which returns the last 20 lines of the file, and then run that script from Node.js:
let last20lines = "";
let spawn = require("child_process").spawn, child;

child = spawn("powershell.exe", ["c:\\wamp64\\www\\powershell_script.ps1"]);
child.stdout.on("data", function(data){
    last20lines += data;
});
child.on("close", function(){
    // the script has finished; last20lines now holds its output
    console.log(last20lines);
});
child.stdin.end();
In my programming course I am supposed to work with a specific code to complete assignments. Up until now, I have been using JavaScript as my programming language. This new assignment, however, is asking me to work with files, and I believe I'll need to use another language because JS doesn't work with files. Since I'm new to programming, I am not familiar with any other language. My assignment asks me to create a program which has to:
create a .txt file
make a program which uses user-based input to calculate minimum, maximum, and average values
verify the file exists
use string functions/methods to parse the file content and add each score to an array
I have made a program which calculates and displays the values based on the user's input, but it's in JavaScript. My question is: how do I add the JS program I have already made, which does all the calculations, to this program which asks me to open files? Will I have to start the whole thing over and do it in Node.js (which seems like it's the closest to JS), or can I add my old JS program to a new Node.js program?
I tried teaching myself Node.js but it's really confusing; if someone can show me how to insert my previous JS program into a Node.js program, I think I'll be good for this assignment.
// This program creates a file, adds data to the file, displays the file,
// appends more data to the file, displays the file, and then deletes the file.
// It will not run if the file already exists.
function createFile(filename)
{
    var fs = require('fs')
    fs.writeFile(filename, "C\tF\n", function(err)
    {
        if (err) return console.error(err);
    });
    for(var c = 0; c <= 50; c++)
    {
        var f = c * 9 / 5 + 32;
        fs.appendFile(filename, c + "\t" + f + "\n", function (err)
        {
            if (err)
            {
                return console.error(err);
            }
        });
    }
}

function readFile(filename)
{
    var file = require('readline').createInterface(
    {
        input: require('fs').createReadStream(filename)
    });
    file.on('line', function (line)
    {
        console.log(line);
    });
}

function appendFile(filename)
{
    var fs = require('fs')
    for(var c = 51; c <= 100; c++)
    {
        var f = c * 9 / 5 + 32;
        fs.appendFile(filename, c + "\t" + f + "\n", function (err)
        {
            if (err)
            {
                return console.error(err);
            }
        });
    }
}

function deleteFile(filename)
{
    var fs = require("fs");
    fs.unlink(filename, function(err)
    {
        if (err)
        {
            return console.error(err);
        }
    });
}

function fileExists(filename)
{
    var fs = require('fs');
    return fs.existsSync(filename);
}

function main()
{
    var filename = "~file.txt";
    if(fileExists(filename))
    {
        console.log("File already exists.")
    }
    else
    {
        createFile(filename);
        readFile(filename);
        appendFile(filename);
        deleteFile(filename);
    }
}
main();
Is there any way to check from Javascript what version of Cordova an app is running?
Why I ask:
We've upgraded our Cordova from 2.8 to 4.0.2 and the new Cordova JS file does not work with the old Cordova Android code. We want to force the user to upgrade their app (to in turn update their Cordova Android version), however, we need to detect that they're on the old version first.
Why device.cordova won't work:
It seems that the old Cordova JS code never initializes because it can't communicate with the new Cordova Android code. So the plugins, such as the device plugin, are never loaded. We get a message in the console stating:
deviceready has not fired after 5 seconds
EDIT: the simplest solution now is this: https://stackoverflow.com/a/65476892/1243247
Manual way
I made a functional hook script which I stored at hooks/setVersion.js. I just tested it and it works (Android only; for iOS you just need to adapt the wwwDir).
#!/usr/bin/env node

var wwwFileToReplace = 'index.html'

var fs = require('fs')
var path = require('path')

module.exports = function (context) {
  var projectRoot = context.opts.projectRoot
  const wwwDir = path.join(projectRoot, 'platforms', 'android', 'app', 'src', 'main', 'assets', 'www')
  var configXMLPath = 'config.xml'
  loadConfigXMLDoc(configXMLPath, (rawJSON) => {
    var version = rawJSON.widget.$.version
    console.log('Version:', version)

    var fullfilename = path.join(wwwDir, wwwFileToReplace)
    if (fs.existsSync(fullfilename)) {
      replaceStringInFile(fullfilename, '%%VERSION%%', version)
      console.log(context.hook + ': Replaced version in file: ' + path.relative(projectRoot, fullfilename))
    } else {
      console.error('File does not exist: ', path.relative(projectRoot, fullfilename))
      process.exit(1)
    }
  })
}

function loadConfigXMLDoc (filePath, callback) {
  var fs = require('fs')
  var xml2js = require('xml2js')
  try {
    var fileData = fs.readFileSync(filePath, 'ascii')
    var parser = new xml2js.Parser()
    parser.parseString(fileData.substring(0, fileData.length), function (err, result) {
      if (err) {
        console.error(err)
        process.exit(1)
      } else {
        // console.log("config.xml as JSON", JSON.stringify(result, null, 2))
        console.log("File '" + filePath + "' was successfully read.")
        callback(result)
      }
    })
  } catch (ex) {
    console.log(ex)
    process.exit(1)
  }
}

function replaceStringInFile (filename, toReplace, replaceWith) {
  var data = fs.readFileSync(filename, 'utf8')
  var result = data.replace(new RegExp(toReplace, 'g'), replaceWith)
  fs.writeFileSync(filename, result, 'utf8')
}
You also have to add this to config.xml:
<hook src="hooks/setVersion.js" type="after_prepare"/>
This script replaces the text %%VERSION%% in a file with the app version from config.xml, so you can have something like this in your index.html file:
<html data-appversion="%%VERSION%%">
and in your JS
const version = document.documentElement.getAttribute("data-appversion");
You can use device.cordova to get the current version of Cordova.
See the appropriate documentation.
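For completeness, the device plugin only becomes usable after the deviceready event has fired (which is exactly what does not happen in the broken-upgrade scenario described in the question):

document.addEventListener('deviceready', function () {
    // device is provided by cordova-plugin-device
    console.log('Cordova version: ' + device.cordova);
}, false);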
I want to log to a file continuously, but after every 1000 lines I want to switch to a new file. Right now my method works like this:
var fs = require('fs');
...
var outputStream = fs.createWriteStream(fileName + '.csv');
outputStream.write(content, 'utf8', callback);
...
if (lineCounter === 1000) {
    outputStream.end(function(err) {
        outputStream = fs.createWriteStream(fileName2 + '.csv');
        outputStream.write(content, 'utf8', callback);
    });
}
In the end the files don't contain the last few lines. I'm open to any solution; I just need to stream writes into several files.
Thanks in advance!
At first I tried using the streams of Highland.js, but I couldn't pause them for some reason. The script I am posting is tested and working; I share the original source at the end. So I haven't actually started reading the second file, but I believe it is easy now, as you have a point from which to proceed once the script has reached the defined limit of lines.
var stream = require('stream'),
    fs = require('fs'),
    readStream = fs.createReadStream('./stream.txt', {highWaterMark: 15}),
    limitStream = new stream.Transform(),
    limit = 0

limitStream._transform = function(chunk, encoding, cb) {
    if (++limit <= 5) {
        console.log('before', limit)
        return cb(null, chunk + '\n')
    }
    console.log('after', limit)
    this.end()
    cb()
}

limitStream.on('unpipe', function() { console.log('unpipe emitted from limitStream') })
limitStream.on('end', function() { console.log('end emitted from limitStream') })

readStream.pipe(limitStream).pipe(process.stdout)
Source: https://groups.google.com/forum/#!topic/nodejs/eGukJUQrOBY
After posting the answer, I found a library that might also work, though I admit I haven't tested it. I just share it as a reference point: https://github.com/isaacs/truncating-stream