Import a TXT file into an array - JavaScript

Before asking, I already searched the questions on SO,
like here
or here
but none of them solved my problem.
OK, so here's my code:
const file = './PAGE1.txt';
const fs = require('fs');

fs.readFile(file, 'utf-8', (e, d) => {
  let textByLine = d.split('\n'); // make it an array
  let hasil = textByLine[2];
});
PAGE1.txt looks like this:
Aa
Ab
Ac
So then I tried
console.log(hasil)
and it successfully showed "Ac" in the console.
But when I do
console.log(hasil + " Test")
it shows up "Test".
Why isn't it "Ac Test"?
Thank you for your help.
Edit: this is solved, I just added '\r':
let textByLine = d.split('\r\n'); // make it an array
and now the console shows "Ac Test".
Now I want to ask: what does this "\r" do? Why do I need it to solve my problem?
Thank you again :)
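For context: '\r' is the carriage-return character that Windows-style text files put before each '\n'. Splitting on '\n' alone leaves a trailing '\r' on every element, and when printed it moves the cursor back to the start of the line, so " Test" gets printed over the "Ac". A minimal sketch that handles both Windows and Unix line endings (assuming the same PAGE1.txt as above):

const fs = require('fs');

fs.readFile('./PAGE1.txt', 'utf-8', (e, d) => {
  if (e) throw e;
  // split on '\r\n' or plain '\n', so the code works for Windows and Unix files
  const textByLine = d.split(/\r?\n/);
  console.log(textByLine[2] + ' Test'); // "Ac Test"
});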

const fs = require('fs'); // file system package
const rl = require('readline'); // readline package helps reading data line by line

// create an interface to read the file
const rI = rl.createInterface({
  input: fs.createReadStream('/path/to/file') // your path to file
});

rI.on('line', line => {
  console.log(line); // your line
});
You can simply use this to log the data line by line. But in the real world you will use it with a Promise, e.g.:
const getFileContent = path =>
  new Promise((resolve, reject) => {
    const lines = [],
      input = fs.createReadStream(path);

    // handle the case where a read stream can't be created, e.g. if the file is not found
    input.on('error', e => {
      reject(e);
    });

    // create a readline interface so that we can read line by line
    const rI = rl.createInterface({
      input
    });

    // listen to the 'line' event, fired whenever a line is read
    rI.on('line', line => {
      lines.push(line);
    })
      // when reading the file is done
      .on('close', () => {
        resolve(lines);
      })
      // if any error occurs while reading a line
      .on('error', e => {
        reject(e);
      });
  });
And you will use it like this:
getFileContent('YOUR_PATH_TO_FILE')
  .then(lines => {
    console.log(lines);
  })
  .catch(e => {
    console.log(e);
  });
Hope this will help you :)

Related

How can I optimize my JavaScript code to handle large log files (over 1 GB)? [duplicate]

I need to do some parsing of large (5-10 GB) log files in Javascript/Node.js (I'm using Cube).
The logline looks something like:
10:00:43.343423 I'm a friendly log message. There are 5 cats, and 7 dogs. We are in state "SUCCESS".
We need to read each line, do some parsing (e.g. strip out 5, 7 and SUCCESS), then pump this data into Cube (https://github.com/square/cube) using their JS client.
Firstly, what is the canonical way in Node to read in a file, line by line?
It seems to be fairly common question online:
http://www.quora.com/What-is-the-best-way-to-read-a-file-line-by-line-in-node-js
Read a file one line at a time in node.js?
A lot of the answers seem to point to a bunch of third-party modules:
https://github.com/nickewing/line-reader
https://github.com/jahewson/node-byline
https://github.com/pkrumins/node-lazy
https://github.com/Gagle/Node-BufferedReader
However, this seems like a fairly basic task - surely there's a simple way within the stdlib to read in a text file, line by line?
Secondly, I then need to process each line (e.g. convert the timestamp into a Date object, and extract useful fields).
What's the best way to do this, maximising throughput? Is there some way that won't block on either reading in each line, or on sending it to Cube?
Thirdly - I'm guessing that using string splits and the JS equivalent of contains (indexOf() != -1?) will be a lot faster than regexes? Has anybody had much experience parsing massive amounts of text data in Node.js?
I searched for a solution to parse very large files (GBs) line by line using a stream. All the third-party libraries and examples did not suit my needs, since they either processed the files not line by line (like 1, 2, 3, 4 ...) or read the entire file into memory.
The following solution can parse very large files, line by line, using stream & pipe. For testing I used a 2.1 GB file with 17,000,000 records. RAM usage did not exceed 60 MB.
First, install the event-stream package:
npm install event-stream
Then:
var fs = require('fs')
    , es = require('event-stream');

var lineNr = 0;

var s = fs.createReadStream('very-large-file.csv')
    .pipe(es.split())
    .pipe(es.mapSync(function(line){

        // pause the readstream
        s.pause();

        lineNr += 1;

        // process line here and call s.resume() when ready
        // function below was for logging memory usage
        logMemoryUsage(lineNr);

        // resume the readstream, possibly from a callback
        s.resume();
    })
    .on('error', function(err){
        console.log('Error while reading file.', err);
    })
    .on('end', function(){
        console.log('Read entire file.')
    })
);
Please let me know how it goes!
You can use the built-in readline package, see docs here. I use stream to create a new output stream.
var fs = require('fs'),
    readline = require('readline'),
    stream = require('stream');

var instream = fs.createReadStream('/path/to/file');
var outstream = new stream;
outstream.readable = true;
outstream.writable = true;

var rl = readline.createInterface({
    input: instream,
    output: outstream,
    terminal: false
});

rl.on('line', function(line) {
    console.log(line);
    //Do your stuff ...
    //Then write to output stream
    rl.write(line);
});
Large files will take some time to process. Do tell if it works.
I really liked @gerard's answer, which actually deserves to be the accepted answer here. I made some improvements:
Code is in a class (modular)
Parsing is included
The ability to resume is exposed to the outside, in case an asynchronous job is chained to reading the CSV, like inserting into a DB or making an HTTP request
Reading in chunk/batch sizes that the user can declare. I took care of encoding in the stream too, in case you have files in a different encoding.
Here's the code:
'use strict'

const fs = require('fs'),
    util = require('util'),
    stream = require('stream'),
    es = require('event-stream'),
    parse = require("csv-parse"),
    iconv = require('iconv-lite');

class CSVReader {
  constructor(filename, batchSize, columns) {
    this.reader = fs.createReadStream(filename).pipe(iconv.decodeStream('utf8'))
    this.batchSize = batchSize || 1000
    this.lineNumber = 0
    this.data = []
    this.parseOptions = {delimiter: '\t', columns: true, escape: '/', relax: true}
  }

  read(callback) {
    this.reader
      .pipe(es.split())
      .pipe(es.mapSync(line => {
          ++this.lineNumber

          parse(line, this.parseOptions, (err, d) => {
            this.data.push(d[0])
          })

          if (this.lineNumber % this.batchSize === 0) {
            callback(this.data)
          }
        })
        .on('error', function(){
          console.log('Error while reading file.')
        })
        .on('end', function(){
          console.log('Read entire file.')
        }))
  }

  continue () {
    this.data = []
    this.reader.resume()
  }
}

module.exports = CSVReader
So basically, here is how you will use it:
let reader = new CSVReader('path_to_file.csv')
reader.read(() => reader.continue())
I tested this with a 35 GB CSV file and it worked for me, which is why I chose to build it on @gerard's answer. Feedback is welcome.
I used https://www.npmjs.com/package/line-by-line for reading more than 1,000,000 lines from a text file. In that case, RAM usage was about 50-60 megabytes.
const LineByLineReader = require('line-by-line'),
    lr = new LineByLineReader('big_file.txt');

lr.on('error', function (err) {
    // 'err' contains error object
});

lr.on('line', function (line) {
    // pause emitting of lines...
    lr.pause();

    // ...do your asynchronous line processing..
    setTimeout(function () {
        // ...and continue emitting lines.
        lr.resume();
    }, 100);
});

lr.on('end', function () {
    // All lines are read, file is closed now.
});
The Node.js Documentation offers a very elegant example using the Readline module.
Example: Read File Stream Line-by-Line
const { once } = require('node:events');
const fs = require('fs');
const readline = require('readline');

// wrapped in an async IIFE so that `await` is valid outside an ES module
(async () => {
  const rl = readline.createInterface({
    input: fs.createReadStream('sample.txt'),
    crlfDelay: Infinity
  });

  rl.on('line', (line) => {
    console.log(`Line from file: ${line}`);
  });

  await once(rl, 'close');
})();
Note: we use the crlfDelay option to recognize all instances of CR LF ('\r\n') as a single line break.
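The same readline interface can also be consumed as an async iterator in current Node versions. A minimal sketch of the equivalent for await...of form (again wrapped in an async function so it runs under CommonJS):

const fs = require('fs');
const readline = require('readline');

async function processLineByLine(path) {
  const rl = readline.createInterface({
    input: fs.createReadStream(path),
    crlfDelay: Infinity // treat '\r\n' as a single line break
  });

  for await (const line of rl) {
    console.log(`Line from file: ${line}`);
  }
}

processLineByLine('sample.txt').catch(console.error);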
Apart from reading the big file line by line, you can also read it chunk by chunk. For more, refer to this article.
var fs = require('fs');

var offset = 0;
var chunkSize = 2048;
var chunkBuffer = Buffer.alloc(chunkSize); // new Buffer() is deprecated
var fp = fs.openSync('filepath', 'r');
var bytesRead = 0;
var lines = [];

while ((bytesRead = fs.readSync(fp, chunkBuffer, 0, chunkSize, offset))) {
    offset += bytesRead;
    var str = chunkBuffer.slice(0, bytesRead).toString();
    var arr = str.split('\n');

    if (bytesRead === chunkSize) {
        // the last item of arr may not be a full line, leave it to the next chunk
        offset -= arr.pop().length;
    }
    lines.push(...arr);
}
console.log(lines);
console.log(lines);
I had the same problem. After comparing several modules that seemed to have this feature, I decided to do it myself; it's simpler than I thought.
gist: https://gist.github.com/deemstone/8279565
var fetchBlock = lineByline(filepath, onEnd);
fetchBlock(function(lines, start){ ... }); //lines{array} start{int} lines[0] No.
It keeps the file opened in a closure; the fetchBlock() function it returns fetches a block from the file and splits it into an array of lines (it also handles the leftover segment from the last fetch).
I've set the block size to 1024 for each read operation. This may have bugs, but the code logic is obvious, so try it yourself. A rough sketch of the idea follows.
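Since the gist itself isn't reproduced here, this is only a rough sketch of the idea it describes (a hypothetical lineByLine helper, not the actual gist code): open the file once, return a function that reads the next fixed-size block and hands back the complete lines in it, keeping any trailing partial line for the next call.

const fs = require('fs');

function lineByLine(filepath, blockSize = 1024) {
  const fd = fs.openSync(filepath, 'r');
  let position = 0;
  let leftover = '';

  return function fetchBlock(onLines) {
    const buffer = Buffer.alloc(blockSize);
    fs.read(fd, buffer, 0, blockSize, position, (err, bytesRead) => {
      if (err) throw err;
      if (bytesRead === 0) {                 // end of file
        fs.closeSync(fd);
        return onLines(leftover ? [leftover] : [], position);
      }
      const start = position;
      position += bytesRead;
      const parts = (leftover + buffer.toString('utf8', 0, bytesRead)).split('\n');
      leftover = parts.pop();                // keep the partial last line for the next block
      onLines(parts, start);
    });
  };
}

// usage: each call to fetchBlock delivers the next batch of complete lines
const fetchBlock = lineByLine('huge.txt');
fetchBlock((lines, start) => console.log(lines.length, 'lines starting at byte', start));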
Reading / writing files using streams with the native Node.js modules (fs, readline):
const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
  input: fs.createReadStream('input.json'),
  output: fs.createWriteStream('output.json')
});

rl.on('line', function(line) {
  console.log(line);

  // Do any 'line' processing if you want and then write to the output file
  this.output.write(`${line}\n`);
});

rl.on('close', function() {
  console.log(`Created "${this.output.path}"`);
});
Based on this question's answers I implemented a class you can use to read a file synchronously line by line with fs.readSync(). You can make it "pause" and "resume" by using a Q promise (jQuery seems to require a DOM, so you can't run it with Node.js):
var fs = require('fs');
var Q = require('q');

var lr = new LineReader(filenameToLoad);
lr.open();

var promise;
workOnLine = function () {
    var line = lr.readNextLine();
    promise = complexLineTransformation(line).then(
        function() {console.log('ok');workOnLine();},
        function() {console.log('error');}
    );
}
workOnLine();

complexLineTransformation = function (line) {
    var deferred = Q.defer();
    // ... async call goes here, in callback: deferred.resolve('done ok'); or deferred.reject(new Error(error));
    return deferred.promise;
}

function LineReader (filename) {
    this.moreLinesAvailable = true;
    this.fd = undefined;
    this.bufferSize = 1024*1024;
    this.buffer = Buffer.alloc(this.bufferSize); // new Buffer() is deprecated
    this.leftOver = '';
    this.read = undefined;
    this.idxStart = undefined;
    this.idx = undefined;
    this.lineNumber = 0;

    this._bundleOfLines = [];

    this.open = function() {
        this.fd = fs.openSync(filename, 'r');
    };

    this.readNextLine = function () {
        if (this._bundleOfLines.length === 0) {
            this._readNextBundleOfLines();
        }
        this.lineNumber++;
        var lineToReturn = this._bundleOfLines[0];
        this._bundleOfLines.splice(0, 1); // remove first element (pos, howmany)
        return lineToReturn;
    };

    this.getLineNumber = function() {
        return this.lineNumber;
    };

    this._readNextBundleOfLines = function() {
        var line = "";
        while ((this.read = fs.readSync(this.fd, this.buffer, 0, this.bufferSize, null)) !== 0) { // read next bytes until end of file
            this.leftOver += this.buffer.toString('utf8', 0, this.read); // append to leftOver
            this.idxStart = 0
            while ((this.idx = this.leftOver.indexOf("\n", this.idxStart)) !== -1) { // as long as there is a newline-char in leftOver
                line = this.leftOver.substring(this.idxStart, this.idx);
                this._bundleOfLines.push(line);
                this.idxStart = this.idx + 1;
            }
            this.leftOver = this.leftOver.substring(this.idxStart);
            if (line !== "") {
                break;
            }
        }
    };
}
node-byline uses streams, so I would prefer that one for your huge files.
For your date conversions I would use moment.js.
For maximising your throughput you could think about using a software cluster. There are some nice modules which wrap the node-native cluster module quite well. I like cluster-master from isaacs. E.g. you could create a cluster of x workers which all compute a file.
For benchmarking splits vs regexes use benchmark.js. I haven't tested it myself yet; benchmark.js is available as a node module (a small sketch follows below).
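A minimal benchmark.js sketch of that last point, comparing indexOf against a regex on a made-up log line (assumes npm install benchmark):

const Benchmark = require('benchmark');

const line = '10:00:43.343423 There are 5 cats, and 7 dogs. We are in state "SUCCESS".';
const suite = new Benchmark.Suite();

suite
  .add('String#indexOf', function () {
    line.indexOf('SUCCESS') !== -1;
  })
  .add('RegExp#test', function () {
    /SUCCESS/.test(line);
  })
  .on('cycle', function (event) {
    console.log(String(event.target)); // ops/sec for each candidate
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();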
import * as csv from 'fast-csv';
import * as fs from 'fs';

interface Row {
  [s: string]: string;
}

type RowCallBack = (data: Row, index: number) => object;

export class CSVReader {
  protected file: string;

  protected csvOptions = {
    delimiter: ',',
    headers: true,
    ignoreEmpty: true,
    trim: true
  };

  constructor(file: string, csvOptions = {}) {
    if (!fs.existsSync(file)) {
      throw new Error(`File ${file} not found.`);
    }
    this.file = file;
    this.csvOptions = Object.assign({}, this.csvOptions, csvOptions);
  }

  public read(callback: RowCallBack): Promise<Array<object>> {
    return new Promise<Array<object>>(resolve => {
      const readStream = fs.createReadStream(this.file);
      const results: Array<any> = [];
      let index = 0;
      const csvStream = csv.parse(this.csvOptions).on('data', async (data: Row) => {
        index++;
        results.push(await callback(data, index));
      }).on('error', (err: Error) => {
        console.error(err.message);
        throw err;
      }).on('end', () => {
        resolve(results);
      });
      readStream.pipe(csvStream);
    });
  }
}
import { CSVReader } from '../src/helpers/CSVReader';

(async () => {
  const reader = new CSVReader('./database/migrations/csv/users.csv');
  const users = await reader.read(async data => {
    return {
      username: data.username,
      name: data.name,
      email: data.email,
      cellPhone: data.cell_phone,
      homePhone: data.home_phone,
      roleId: data.role_id,
      description: data.description,
      state: data.state,
    };
  });
  console.log(users);
})();
I have made a node module to read large files asynchronously, as text or JSON.
Tested on large files.
var fs = require('fs')
    , util = require('util')
    , stream = require('stream')
    , es = require('event-stream');

module.exports = FileReader;

function FileReader(){
}

FileReader.prototype.read = function(pathToFile, callback){
    var returnTxt = '';
    var s = fs.createReadStream(pathToFile)
        .pipe(es.split())
        .pipe(es.mapSync(function(line){

            // pause the readstream
            s.pause();

            //console.log('reading line: '+line);
            returnTxt += line;

            // resume the readstream, possibly from a callback
            s.resume();
        })
        .on('error', function(){
            console.log('Error while reading file.');
        })
        .on('end', function(){
            console.log('Read entire file.');
            callback(returnTxt);
        })
    );
};

FileReader.prototype.readJSON = function(pathToFile, callback){
    try{
        this.read(pathToFile, function(txt){callback(JSON.parse(txt));});
    }
    catch(err){
        throw new Error('json file is not valid! '+err.stack);
    }
};
Just save the file as file-reader.js, and use it like this:
var FileReader = require('./file-reader');
var fileReader = new FileReader();
fileReader.readJSON(__dirname + '/largeFile.json', function(jsonObj){/*callback logic here*/});

Nodejs - removing substring from a huge file

I need to remove a substring (that appears only in specific known lines of the file) from a file.
There are simple solutions that read all the file data into a string, remove the substring, and then write the fixed data back to the file.
Here is code I found in here:
Node js - Remove string from text file
var data = fs.readFileSync('banlist.txt', 'utf-8');
var newValue = data.replace(new RegExp("STRING_TO_REMOVE"), '');
fs.writeFileSync('banlist.txt', newValue, 'utf-8');
My problem is that the file is huge - up to a billion lines of logs - so I can't read all the content into memory.
Why not a simple transform stream and replace()? replace() can take a callback as its second parameter, i.e. .replace(/bad1|bad2|bad3/g, filterWords), in case you need to replace words rather than remove them completely (a sketch of that form follows after the code below).
const fs = require("fs")
const { pipeline, Transform } = require("stream")
const { join } = require("path")

const readFile = fs.createReadStream("./words.txt")

const writeFile = fs.createWriteStream(
  join(__dirname, "words-filtered.txt"),
  "utf8"
)

const transformFile = new Transform({
  transform(chunk, enc, next) {
    let c = chunk.toString().replace(/bad/g, "replaced")
    this.push(c)
    next()
  },
})

pipeline(readFile, transformFile, writeFile, (err) => {
  if (err) {
    console.log(err.message)
  }
})
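A sketch of the callback form mentioned above; filterWords and the replacements map are hypothetical, not part of the original answer:

const { Transform } = require("stream")

// decide, per match, what each bad word should become
const replacements = { bad1: "good1", bad2: "good2", bad3: "" }
const filterWords = (match) => replacements[match] ?? match

const transformFilter = new Transform({
  transform(chunk, enc, next) {
    // replace() invokes filterWords once for every match of the pattern
    this.push(chunk.toString().replace(/bad1|bad2|bad3/g, filterWords))
    next()
  },
})

Note that with either variant a word split across two chunks will not be matched, which matters for arbitrary chunked input.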
https://nodejs.org/api/fs.html#fs_fs_read_fd_buffer_offset_length_position_callback
Don't read the whole file at once; read a small buffered piece of it and look for your input within that buffered piece, then increment your buffer's starting position and do it again. I would recommend having each buffer start not at the end of the previous buffer, but overlapping it by at least the expected size of the data being sought, so that you don't run into half of your data being at the end of one buffer and the other half at the beginning of the next. A sketch of this idea is below.
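A minimal sketch of that overlapping-buffer search; the chunk size and the search term here are illustrative assumptions:

const fs = require("fs")

function findInFile(path, searchTerm, chunkSize = 64 * 1024) {
  const fd = fs.openSync(path, "r")
  const overlap = searchTerm.length - 1 // re-read this much so a match can't hide on a boundary
  const buffer = Buffer.alloc(chunkSize)
  let position = 0

  try {
    while (true) {
      const bytesRead = fs.readSync(fd, buffer, 0, chunkSize, position)
      if (bytesRead === 0) return -1 // nothing left to read
      const text = buffer.toString("utf8", 0, bytesRead)
      const idx = text.indexOf(searchTerm)
      if (idx !== -1) return position + idx // offset is exact for single-byte text
      if (bytesRead < chunkSize) return -1 // reached end of file, no match
      position += bytesRead - overlap // step back so the next read overlaps this one
    }
  } finally {
    fs.closeSync(fd)
  }
}

console.log(findInFile("banlist.txt", "STRING_TO_REMOVE"))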
You could use a file read stream. However, you would have to find a way to detect if the read data only contains part of the result.
What you probably want to do is use streams, so that you are writing after partial reads. The example below could work for you; you would need to copy the output ".tmp" file over the original to get the behaviour described in your question. It works by reading a chunk and then checking whether it contains a newline; it then processes that line, writes it, and removes it from the buffer. This should help with your memory problem.
var fs = require("fs");

var readStream = fs.createReadStream("./BFFile.txt", { encoding: "utf-8" });
var writeStream = fs.createWriteStream("./BFFile.txt.tmp");
const STRING_TO_REMOVE = "badword";

var buffer = ""

readStream.on("data", (chunk) => {
    buffer += chunk;
    var indexOfNewLine = buffer.search("\n");

    while (indexOfNewLine !== -1) {
        var line = buffer.substring(0, indexOfNewLine + 1);
        buffer = buffer.substring(indexOfNewLine + 1, buffer.length);
        line = line.replace(new RegExp(STRING_TO_REMOVE), "");
        writeStream.write(line);
        indexOfNewLine = buffer.search("\n");
    }
})

readStream.on("end", () => {
    buffer = buffer.replace(new RegExp(STRING_TO_REMOVE), "");
    writeStream.write(buffer);
    writeStream.close();
})
There are a few assumptions with this solution, such as the data being UTF-8, there being at most one bad word per line, every line having some text (I didn't test for that), and every line ending with a newline and not some other line ending.
Here are the docs for streams in Node.
Another thought I had was to use pipe and a transform stream, but that seems like overkill.
You can use this code to do it. I'm using an fs stream; it's made for reading huge files in chunks with a small memory footprint. docs
const fs = require('fs');

const readStream = fs.createReadStream('./XXXXX');
const writeStream = fs.createWriteStream('./XXXXXXX');

readStream.on('data', (chunk) => {
  const data = chunk.toString().replace('STRING_TO_REMOVE', 'XXXXXX');
  writeStream.write(data);
});

readStream.on('end', () => {
  writeStream.close();
});

No Output fast-csv writeToPath

I am writing a script which, at its core, parses a .csv file for certain columns, stores them in an array, and then writes the contents to another .csv file. I am able to parse the file using fast-csv and have confirmed in the terminal that my array is in the correct format. However, when I attempt to write this array to a .csv file using fast-csv, the contents never appear in the file and no errors are thrown. I have validated that the array is being passed all the way through to the callback. In addition, I have gone so far as to replace that variable in the writeToPath function with a simple array, and still no luck. Any assistance would be appreciated.
const processFile = (fileName, file, cb) => {
  let writeData = []
  let tempArray = []
  csv.fromPath(basePath + file, {ignoreEmpty: false, headers: false})
    .on("data", function(data){
      if (data[0] != ''){
        [startDate, endDate] = fileName
        tempArray[0] = data[0]
        tempArray[1] = data[1]
        tempArray[2] = data[2]
        tempArray[3] = data[3]
        tempArray[4] = data[4]
        tempArray[5] = data[8]
        tempArray[6] = ""
        tempArray[7] = ""
        tempArray[8] = ""
        tempArray[9] = startDate
        tempArray[10] = endDate
        writeData[i] = tempArray
        writeData.shift()
        tempArray = []
        i++
      }
    })
    .on("end", () => {
      console.log('end')
    }).on('finish', (() => {
      cb(writeData)
    }));
}

processFile(fileName, file, (csvData) => {
  console.log(csvData)
  csv.writeToPath('./working-files/top.csv', {headers: false}, csvData).on("finish", () => {
    console.log('done')
  })
})
Unfortunately, without any context to the dataset you are using, there is only so much I can suggest. The variables needed to debug this properly would be: the file, the file names used and whatever 'i' is. If you can update this then I'll be happy to take another look.
I would suggest going back and logging the variables after each step that would modify them, hopefully then you'll get a better picture as what is going wrong.
I understand this isn't a complete answer and it will probably get removed, but I don't have the 50 reputation needed to make a comment.

Multiple pipes on writeStream open event gives undefined

I am downloading a file from a URL and then I want to write the metadata for that file to the destination stream, but only after I make sure that the destination file has been created (i.e. after fs.createWriteStream(path) has succeeded). So I have used the "open" event of the writable stream to proceed further. However, this code gives me the error exactly on the second pipe:
Cannot call method 'pipe' on undefined
There is more code beyond this which uses the hashes that are calculated here. But somehow I am stuck with this error at the moment. I have been struggling with this for quite a while. Any help/pointers are very much appreciated.
Also I tried to run the example
var fs = require('fs');
var digestStream = require('digest-stream')

callHandle();

function callHandle(){
  var stream = request.get(url)
  var result = {}
  handle(reader, result)
}

function handle(reader, metadata){
  const writer = fs.createWriteStream('pathToFile');
  writer.on('open', function(){
    reader.pipe(
      digestStream('sha1', 'hex', function(digest, length) {
        result.sha1 = digest;
        result.size = length;
      }))
      .pipe(
        digestStream('md5', 'hex', function(digest) {
          result.md5 = digest;
        })
      ).pipe(writer)
  })
}
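For what it's worth, the "pipe on undefined" comes from callHandle() passing reader, which is never defined (the request stream is stored in stream), and handle() reads result where its parameter is named metadata. A minimal sketch with the wiring made consistent, assuming request and digest-stream behave as in the snippet above (digest-stream passes the data through while computing the hash):

var fs = require('fs');
var request = require('request');
var digestStream = require('digest-stream');

function handle(reader, metadata) {
  const writer = fs.createWriteStream('pathToFile');
  writer.on('open', function () {
    reader
      .pipe(digestStream('sha1', 'hex', function (digest, length) {
        metadata.sha1 = digest;
        metadata.size = length;
      }))
      .pipe(digestStream('md5', 'hex', function (digest) {
        metadata.md5 = digest;
      }))
      .pipe(writer);
  });
}

function callHandle(url) {
  var stream = request.get(url);
  var result = {};
  handle(stream, result); // pass the stream that was actually created
}

callHandle('http://example.com/file'); // example URL; the original uses `url`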

How to translate this Python code to Node.js

I got a very nice answer here about how to clear/delete a line in a file without having to truncate the file or replace it with a new version. Here's the Python code:
#!/usr/bin/env python

import re,os,sys

logfile = sys.argv[1]
regex = sys.argv[2]

pattern = re.compile(regex)

with open(logfile,"r+") as f:
    while True:
        old_offset = f.tell()
        l = f.readline()
        if not l:
            break
        if pattern.search(l):
            # match: blank the line
            new_offset = f.tell()
            if old_offset > len(os.linesep):
                old_offset -= len(os.linesep)
            f.seek(old_offset)
            f.write(" "*(new_offset-old_offset-len(os.linesep)))
this script can be called like:
./clear-line.py <file> <pattern>
For educational purposes, I am trying to figure out if I can write this in Node.js. I can certainly read a file with Node.js line-by-line. But I am not sure if Node.js has the equivalent calls for tell/seek in this case.
the equivalent for write is surely
https://nodejs.org/api/fs.html#fs_fs_write_fd_buffer_offset_length_position_callback
Here is my attempt
#!/usr/bin/env node

const readline = require('readline');
const fs = require('fs');

const file = process.argv[2];
const rgx = process.argv[3];

const fd = fs.openSync(file, 'r+');

const rl = readline.createInterface({
  input: fs.createReadStream(null, {fd: fd})
});

let position = 0;

const onLine = line => {

  position += line.length;

  if (String(line).match(rgx)) {

    let len = line.length;

    rl.close();
    rl.removeListener('line', onLine);

    // output the line that will be replaced/removed
    process.stdout.write(line);

    fs.write(fd, new Array(len + 1).join(' '), position, 'utf8', err => {
      if (err) {
        process.stderr.write(err.stack || err);
        process.exit(1);
      }
      else {
        process.exit(0);
      }
    });
  }
};

rl.on('line', onLine);
It's not quite right - I don't think I am calculating the offset/position correctly. Perhaps someone who knows both Python and Node can help me out. I am not very familiar with calculating positions/offsets in files, especially in terms of buffers.
Here is the data in a text file that I am working with. All I want to do is read the first line that is not empty, then remove that line from the file and write that line to stdout.
This could really be any non-whitespace data, but here is the JSON that I am working with:
{"dateCreated":"2016-12-26T09:52:03.250Z","pid":5371,"count":0,"uid":"7133d123-e6b8-4109-902b-7a90ade7c655","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.290Z","pid":5371,"count":1,"uid":"e881b0a9-8c28-42bb-8a9d-8109587777d0","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.390Z","pid":5371,"count":2,"uid":"065e51ff-14b8-4454-9ae5-b85152cfcb64","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.491Z","pid":5371,"count":3,"uid":"5af80a95-ff9d-4252-9c4e-0e421fd9320f","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.595Z","pid":5371,"count":4,"uid":"961e578f-288b-413c-b933-b791f833c037","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.696Z","pid":5371,"count":5,"uid":"a65cbf78-2ea1-4c3a-9beb-b4bf56e83a6b","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.799Z","pid":5371,"count":6,"uid":"d411e917-ad25-455f-9449-ae4d31c7b1ad","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:03.898Z","pid":5371,"count":7,"uid":"46f8841d-c86c-43f2-b440-8ab7feea7527","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:04.002Z","pid":5371,"count":8,"uid":"81b5ce7e-2f4d-4acb-884c-442c5ac4490f","isRead":false,"line":"foo bar baz"}
{"dateCreated":"2016-12-26T09:52:04.101Z","pid":5371,"count":9,"uid":"120ff45d-74e7-464e-abd5-94c41e3cd089","isRead":false,"line":"foo bar baz"}
You should take into consideration the newline character at the end of each line, which is not included in the 'line' you're getting via the readline module. That is, you should update position to position += (line.length + 1), and then when writing, just use position (without the -1).
OK, I think I got it, but if someone has any beef with this please feel free to critique. It's close, but it needs some fine tuning, I think; there seems to be an off-by-one error or something like that.
#!/usr/bin/env node

const readline = require('readline');
const fs = require('fs');

const file = process.argv[2];
const rgx = new RegExp(process.argv[3]);

const fd = fs.openSync(file, 'r+');

const rl = readline.createInterface({
  input: fs.createReadStream(null, {fd: fd})
});

let position = 0;

const onLine = line => {

  if (String(line).match(rgx)) {

    let len = line.length;

    rl.close();
    rl.removeListener('line', onLine);

    // output the line that will be replaced/removed
    process.stdout.write(line + '\n');

    fs.write(fd, new Array(len + 1).join(' '), position, 'utf8',
      (err, written, string) => {
        if (err) {
          process.stderr.write(err.stack || err);
          return process.exit(1);
        }
        else {
          process.exit(0);
        }
      });
  }

  position += (line.length + 1); // 1 is the length of the \n character
};

rl.on('line', onLine);
