Can't Require Node Modules In WebWorker (NWJS) - javascript

I'm trying to do something I thought would be simple. I'm using nwjs (formerly called Node-Webkit), which basically means I'm developing a desktop app using Chromium and Node where the DOM is in the same scope as Node. I want to offload work to a web worker so that the GUI doesn't hang when I send some text off to Ivona Cloud (using ivona-node), a text-to-speech API. The audio comes back in chunks as it's generated and gets written to an MP3; ivona-node uses fs to write the MP3 to the drive. I got it working in the DOM, but a web worker is needed to keep the UI from hanging. So I have two node modules I need to use in the web worker: ivona-node and fs.
The problem is that you can't use require inside a web worker. So I tried packaging ivona-node and fs with browserify (there's a package called browserify-fs for this, which I used) and replacing require with importScripts(). Now I'm getting var errors in the node modules.
Note: I don't think the native_fs_ method will work for writing the MP3 to disk in chunks (the stream) as it should be, and I'm getting errors in the Ivona package as well (actually, first and foremost) that I don't know how to fix. I'm including all the information needed to reproduce this.
Here's an error I'm getting in the console: Uncaught SyntaxError: Unexpected token var VM39 ivonabundle.js:23132
Steps to reproduce in NWJS:
npm install ivona-node
npm install browserify-fs
npm install -g browserify
Now I browserified main.js for ivona-node and index.js for browserify-fs:
browserify main.js > ivonabundle.js
browserify index.js > fsbundle.js
package.json...
{
    "name": "appname",
    "description": "appdescr",
    "title": "apptitle",
    "main": "index.html",
    "window":
    {
        "toolbar": true,
        "resizable": false,
        "width": 800,
        "height": 500
    },
    "webkit":
    {
        "plugin": true
    }
}
index.html...
<html>
  <head>
    <title>apptitle</title>
  </head>
  <body>
    <p><output id="result"></output></p>
    <button onclick="startWorker()">Start Worker</button>
    <button onclick="stopWorker()">Stop Worker</button>
    <br><br>
    <script>
      var w;
      function startWorker() {
        if (typeof(Worker) !== "undefined") {
          if (typeof(w) == "undefined") {
            w = new Worker("TTMP3.worker.js");
            w.postMessage(['This is some text to speak.']);
          }
          w.onmessage = function(event) {
            document.getElementById("result").innerHTML = event.data;
          };
        } else {
          document.getElementById("result").innerHTML = "Sorry! No Web Worker support.";
        }
      }
      function stopWorker() {
        w.terminate();
        w = undefined;
      }
    </script>
  </body>
</html>
TTMP3.worker.js...
importScripts('node_modules/browserify-fs/fsbundle.js', 'node_modules/ivona-node/src/ivonabundle.js');

onmessage = function T2MP3(Text2Speak)
{
    postMessage(Text2Speak.data[0]);
    //var fs = require('fs'),
    //    Ivona = require('ivona-node');
    var ivona = new Ivona({
        accessKey: 'xxxxxxxxxxx',
        secretKey: 'xxxxxxxxxxx'
    });

    //ivona.listVoices()
    //    .on('end', function(voices) {
    //        console.log(voices);
    //    });

    // ivona.createVoice(text, config)
    // [string] text - the text to be spoken
    // [object] config (optional) - override Ivona request via 'body' value
    ivona.createVoice(Text2Speak.data[0], {
        body: {
            voice: {
                name: 'Salli',
                language: 'en-US',
                gender: 'Female'
            }
        }
    }).pipe(fs.createWriteStream('text.mp3'));
    postMessage("Done");
}

There are two things that I want to point out first:
Including node modules in a web worker
In order to include the module ivona-node, I had to change its code a little. When I try to browserify it I get an error: Uncaught Error: Cannot find module '/node_modules/ivona-node/src/proxy'. Checking the generated bundle.js, I noticed that it doesn't include the code of the proxy module, which lives in the file proxy.js in the src folder of ivona-node. The proxy module can be loaded by changing this line: HttpsPA = require(__dirname + '/proxy'); to this: HttpsPA = require('./proxy');. After that, ivona-node can be loaded on the client side through browserify. Then I faced another error when trying to follow the example. It turns out that this code:
ivona.createVoice(Text2Speak.data[0], {
    body: {
        voice: {
            name: 'Salli',
            language: 'en-US',
            gender: 'Female'
        }
    }
}).pipe(fs.createWriteStream('text.mp3'));
is no longer correct; it causes the error: Uncaught Error: Cannot pipe. Not readable. The problem here is in the http module. browserify wraps many of node's built-in modules, which means they are available when you use require() or their functionality. http is one of them, but as you can see in stream-http, it tries to match node's API and behavior as closely as possible, though some features aren't available, since browsers don't give nearly as much control over requests. Very significant here is the http.ClientRequest class: in a node.js environment it creates an OutgoingMessage, which executes Stream.call(this), allowing the use of the pipe method on the request. In the browserify version, however, calling https.request gives you a writable stream; this is the call inside ClientRequest: stream.Writable.call(self). So we explicitly have a WritableStream, which even comes with this method:
Writable.prototype.pipe = function() {
    this.emit('error', new Error('Cannot pipe. Not readable.'));
};
which is responsible for the error above. So we have to use a different approach to save the data from ivona-node, which leads me to the second issue.
Create a file from a web worker
It is well known that giving a web application access to the filesystem raises many security issues, so the problem is how we can access the filesystem from the web worker. A first approach is the HTML5 FileSystem API. The inconvenience of this approach is that it operates in a sandbox, and since this is a desktop app we want access to the OS filesystem. To accomplish this, we can pass the data from the web worker to the main thread, where we can use all of the node.js filesystem functionality. Web workers provide a feature called Transferable Objects (you can get more info here and here) that we can use to pass the data received from ivona-node in the web worker to the main thread, and then use require('fs') in the same way that node-webkit provides.
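As a minimal sketch of the transfer-list semantics (illustrative values only):

var arr = new Uint8Array([1, 2, 3]);
// The second argument is the transfer list: arr.buffer is moved to the
// receiving thread rather than copied, so it is detached (unusable) in
// the sending thread after this call.
postMessage({ event: 'data', data: arr }, [arr.buffer]);

These are the steps you can follow: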
install browserify
npm install -g browserify
install ivona-node
npm install ivona-node --save
go to node_modules/ivona-node/src/main.js and change this line:
HttpsPA = require(__dirname + '/proxy');
to this:
HttpsPA = require('./proxy');
create your bundle.js.
Here you have some alternatives: create a bundle.js just to enable require(), or put some code with the logic you want in a file (you can actually include all the code of the web worker) and then create the bundle.js from it. In this example I will create the bundle.js only to have access to require(), and use importScripts() in the web worker file:
browserify -r ivona-node > ibundle.js
Put it all together
Modify the code of the web worker and index.html so that the data is received in the web worker and sent to the main thread (in index.html).
This is the code of the web worker (MyWorker.js):
importScripts('ibundle.js');
var Ivona = require('ivona-node');

onmessage = function T2MP3(Text2Speak)
{
    var ivona = new Ivona({
        accessKey: 'xxxxxxxxxxxx',
        secretKey: 'xxxxxxxxxxxx'
    });

    var req = ivona.createVoice(Text2Speak.data[0], {
        body: {
            voice: {
                name: 'Salli',
                language: 'en-US',
                gender: 'Female'
            }
        }
    });

    req.on('data', function(chunk) {
        var arr = new Uint8Array(chunk);
        postMessage({event: 'data', data: arr}, [arr.buffer]);
    });

    req.on('end', function() {
        postMessage(Text2Speak.data[0]);
    });
}
and index.html:
<html>
  <head>
    <title>apptitle</title>
  </head>
  <body>
    <p><output id="result"></output></p>
    <button onclick="startWorker()">Start Worker</button>
    <button onclick="stopWorker()">Stop Worker</button>
    <br><br>
    <script>
      var w;
      var fs = require('fs');
      function startWorker() {
        var writer = fs.createWriteStream('text.mp3');
        if (typeof(Worker) !== "undefined") {
          if (typeof(w) == "undefined") {
            w = new Worker("MyWorker.js");
            w.postMessage(['This is some text to speak.']);
          }
          w.onmessage = function(event) {
            var data = event.data;
            if (data.event !== undefined && data.event == 'data') {
              var buffer = new Buffer(data.data);
              writer.write(buffer);
            } else {
              writer.end();
              document.getElementById("result").innerHTML = data;
            }
          };
        } else {
          document.getElementById("result").innerHTML = "Sorry! No Web Worker support.";
        }
      }
      function stopWorker() {
        w.terminate();
        w = undefined;
      }
    </script>
  </body>
</html>

Related

Node.js 'fs' throws an ENOENT error after adding auto-generated Swagger server code

Preamble
To start off, I'm not a developer; I'm just an analyst / product owner with time on their hands. While my team's actual developers have been busy finishing off projects before year-end I've been attempting to put together a very basic API server in Node.js for something we will look at next year.
I used Swagger to build an API spec and then used the Swagger code generator to get a basic Node.js server. The full code is near the bottom of this question.
The Problem
I'm coming across an issue when writing out to a log file using the fs module. I know that the ENOENT error is usually down to just specifying a path incorrectly, but the behaviour doesn't occur when I comment out the Swagger portion of the automatically generated code. (I took the logging code directly out of another tool I built in Node.js, so I'm fairly confident in that portion at least...)
When executing npm start, a few debugging items write to the console:
"Node Server Starting......
Current Directory:/mnt/c/Users/USER/Repositories/PROJECT/api
Trying to log data now!
Mock mode: disabled
PostgreSQL Pool created successfully
Your server is listening on port 3100 (http://localhost:3100)
Swagger-ui is available on http://localhost:3100/docs"
but then fs throws an ENOENT error:
events.js:174
throw er; // Unhandled 'error' event
^
Error: ENOENT: no such file or directory, open '../logs/logEvents2021-12-24.log'
Emitted 'error' event at:
at lazyFs.open (internal/fs/streams.js:277:12)
at FSReqWrap.args [as oncomplete] (fs.js:140:20)
Investigating
Now normally, from what I understand, this would just mean I've got the paths wrong. However, the file has actually been created, and the first line of the log file has been written just fine.
My next thought was that I must've set the fs flags incorrectly, but it was set to 'a' for append:
var logsFile = fs.createWriteStream(__logdir + "/logEvents" + dateNow() + '.log', {flags: 'a'}, (err) => {
    console.error('Could not write new Log File to location: %s \nWith error description: %s', __logdir, err);
});
Removing Swagger Code
Now here's the weird bit: if I remove the Swagger code, the log files write out just fine and I don't get the fs exception!
This is the specific Swagger code:
// swaggerRouter configuration
var options = {
    routing: {
        controllers: path.join(__dirname, './controllers')
    },
};

var expressAppConfig = oas3Tools.expressAppConfig(path.join(__dirname, '/api/openapi.yaml'), options);
var app = expressAppConfig.getApp();

// Initialize the Swagger middleware
http.createServer(app).listen(serverPort, function () {
    console.info('Your server is listening on port %d (http://localhost:%d)', serverPort, serverPort);
    console.info('Swagger-ui is available on http://localhost:%d/docs', serverPort);
}).on('error', console.error);
When I comment out this code, the log file writes out just fine.
The only thing I can think might be happening is that Swagger is somehow modifying the app's working directory so that fs no longer finds the same file?
Full Code
'use strict';

var path = require('path');
var fs = require('fs');
var http = require('http');
var oas3Tools = require('oas3-tools');
var serverPort = 3100;

// I specifically tried using path.join (which I found while investigating
// this issue) and referencing the app path, but to no avail
const __logdir = path.join(__dirname, './logs');

// These are date and time functions I use to add timestamps to the logs
function dateNow() {
    var dateNow = new Date().toISOString().slice(0, 10).toString();
    return dateNow
}
function rightNow() {
    var timeNow = new Date().toTimeString().slice(0, 8).toString();
    return "[" + timeNow + "] "
};

console.info("Node Server Starting......");
console.info("Current Directory: " + __dirname)

// Here I create the WriteStreams
var logsFile = fs.createWriteStream(__logdir + "/logEvents" + dateNow() + '.log', {flags: 'a'}, (err) => {
    console.error('Could not write new Log File to location: %s \nWith error description: %s', __logdir, err);
});
var errorsFile = fs.createWriteStream(__logdir + "/errorEvents" + dateNow() + '.log', {flags: 'a'}, (err) => {
    console.error('Could not write new Error Log File to location: %s \nWith error description: %s', __logdir, err);
});

// And create an additional console to write data out:
const Console = require('console').Console;
var logOut = new Console(logsFile, errorsFile);

console.info("Trying to log data now!") // Debugging logging
logOut.log("========== Server Startup Initiated ==========");
logOut.log(rightNow() + "Server Directory: " + __dirname);
logOut.log(rightNow() + "Logs directory: " + __logdir);

// Here is the Swagger portion that seems to create the behaviour.
// It is unedited from the Swagger Code-Gen tool
// swaggerRouter configuration
var options = {
    routing: {
        controllers: path.join(__dirname, './controllers')
    },
};

var expressAppConfig = oas3Tools.expressAppConfig(path.join(__dirname, '/api/openapi.yaml'), options);
var app = expressAppConfig.getApp();

// Initialize the Swagger middleware
http.createServer(app).listen(serverPort, function () {
    console.info('Your server is listening on port %d (http://localhost:%d)', serverPort, serverPort);
    console.info('Swagger-ui is available on http://localhost:%d/docs', serverPort);
}).on('error', console.error);
In case it helps, the original post included an image of the project's file structure. I am running this project within a WSL instance in VSCode on Windows, same as I have with other projects using fs.
Is anyone able to help me understand why fs can write the first log line but then break once the Swagger code gets going? Have I done something incredibly stupid?
Appreciate the help, thanks!
Found the problem with some help from a friend. The issue boiled down to a lack of understanding of how the Swagger module works in the background, so this will likely be eye-rollingly obvious to most, but I'm keeping this post around in case anyone else comes across this down the line.
So it seems that as part of the Swagger initialisation, any scripts within the utils folder will also be executed. I would not have picked up on this if it wasn't pointed out to me that in the middle of the console output there was a reference to some PostgreSQL code, even though I had taken all reference to it out of the main index.js file.
That's when I realised that the error wasn't actually being generated from the code posted above: it was being thrown from a script in that folder.
So I guess the answer is don't add stuff to the utils folder, but if you do, always add a bunch of console logging...
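For anyone hitting a similar ENOENT from a write stream, a defensive sketch (hypothetical, not the author's actual fix) is to resolve the log path from __dirname and create the directory up front, so it no longer matters which directory the executing script is run from:

const path = require('path');
const fs = require('fs');

// Resolve relative to this file rather than the process's working directory
const logdir = path.join(__dirname, 'logs');
fs.mkdirSync(logdir, { recursive: true }); // no-op if the directory already exists

const logsFile = fs.createWriteStream(
    path.join(logdir, 'logEvents' + new Date().toISOString().slice(0, 10) + '.log'),
    { flags: 'a' }
);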

Node.JS exec git command error: Permission denied (publickey)

Question: How can I run git push from node.js with a passphrase?
I'm trying to build a small module where I need to run git push from node.js to a remote repo, but I'm getting an error when I run it from node.js with exec, and not when I run it from the terminal.
My code.
./command.ts
import * as util from 'util';
const exec = util.promisify(require('child_process').exec);
export default function command(command: string): Promise<string> {
return exec(command, {cwd: process.cwd()}).then((resp) => {
const data = resp.stdout.toString().replace(/[\n\r]/g, '');
return Promise.resolve(data);
});
}
./index.ts
import command from './command';

async function init() {
    try {
        await command('git add .');
        await command('git commit -m "my commit" ');
        const result = await command('git push');
    } catch (e) {
        console.log(e);
    }
}
init();
and when I run ts-node ./index.ts I get the following error.
Error: Command failed: git push
git#hostname.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
But when I run git push from the terminal, I get prompted for the passphrase and it works.
Any idea on how to solve this issue, is there a way to run git push with passphrase using node.js?
Bear in mind that I would love to fix this without any external libs.
Thanks in advance.
As described here, check if the same program works when:
ssh-agent is launched
your key is ssh-add'ed
Not only should the prompt no longer ask for your passphrase (now cached by the agent), but your script might benefit from that cache as well.
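For example, in a typical setup (assuming a key at the default ~/.ssh/id_rsa location):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa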
You may need to add env: process.env to your exec() options if your key is loaded into an ssh-agent process. There are some environment variables ssh-agent exports that other processes use to find ssh-agent to access its keys.
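For instance, a minimal JavaScript sketch of the same exec wrapper with only the env option added:

const util = require('util');
const exec = util.promisify(require('child_process').exec);

function command(cmd) {
    // Forward the parent environment (SSH_AUTH_SOCK, SSH_AGENT_PID) so the
    // child process can reach the running ssh-agent
    return exec(cmd, { cwd: process.cwd(), env: process.env })
        .then((resp) => resp.stdout.toString().replace(/[\n\r]/g, ''));
}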

How to automate desktop application developed using electron framework?

Our application is developed using the Electron framework; it is a standalone application. I have seen that Spectron is the framework used to automate Electron applications, but I am not sure whether it is applicable to desktop applications. Please confirm the same.
I have installed nodejs and spectron.
I have written code to launch the application as mentioned on the following site:
https://electron.atom.io/spectron/
File Name : First.js
var Application = require('spectron').Application
var assert = require('assert')

var app = new Application({
    // Note: backslashes must be escaped in a JavaScript string literal
    path: 'C:\\Users\\ramass\\AppData\\Local\\Programs\\ngsolutions\\ngsolutions.exe'
})

app.start().then(function () {
    // Check if the window is visible
    return app.browserWindow.isVisible()
}).then(function (isVisible) {
    // Verify the window is visible
    assert.equal(isVisible, true)
}).then(function () {
    // Get the window's title
    return app.client.getTitle()
}).then(function (title) {
    // Verify the window's title
    assert.equal(title, 'My App')
}).then(function () {
    // Stop the application
    return app.stop()
}).catch(function (error) {
    // Log any failures
    console.error('Test failed', error.message)
})
I have tried to run the script using the command
node First.js
but I am getting an error saying:
C:\spectronprgs>node First.js
Error: Cannot find module 'spectron'
Please let me know whether I am going down the right path:
how to launch an .exe file using the Spectron framework
how to run the script
Run the following from the command line:
npm install --save-dev spectron
Then see if you can find the module. You never mentioned in your post how you installed spectron.
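As a quick sanity check (run from the project folder), you can ask node where it would resolve the module from:
node -e "console.log(require.resolve('spectron'))"
If that prints a path under your project's node_modules, require('spectron') in First.js should work.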

How to dynamically set files to be used in karma test

I have a node file that runs a karma test in a node app using the karma public API (I'll save writing out the code because it comes straight from http://karma-runner.github.io/0.13/dev/public-api.html).
All is fine so far, the test runs. Now I need to start serving different files to the karma run. For example, I might have exampleSpec.js, example1.js, example2.js, and example3.js. I need to serve exampleSpec and then example1-3 in sequence.
However, I don't see any documentation on this, and can't seem to get anywhere with it.
So, the answer ended up being pretty simple. The first argument to the Server constructor is a config object that can replace or augment karma.conf.js, so it is possible to send in altered files arrays. Code below for posterity:
"use strict";
var Server = require('karma').Server;
var filePath = process.cwd();
filePath += "/karma.conf.js";
console.log(filePath);
//files needs to be an array of string file matchers
function runTests(files, port) {
var config = {
configFile: filePath,
files: files,
port: port
};
var server = new Server(config, function(exitCode) {
console.log('Karma has server exited with ' + exitCode);
process.exit(exitCode)
});
server.on('run_complete', function (browser, result) {
console.log('A browser run was completed');
console.log(result);
});
server.start();
}
runTests(['test/**/*Spec.js', 'tmp/example.js'], 9876);
runTests(['test/**/*Spec.js', 'tmp/example2.js'], 9877);
runTests(['test/**/*Spec.js', 'tmp/example3.js'], 9878);

node and Error: EMFILE, too many open files

For some days I have searched for a working solution to an error
Error: EMFILE, too many open files
It seems that many people have the same problem. The usual answer involves increasing the number of file descriptors. So, I've tried this:
sysctl -w kern.maxfiles=20480
The default value is 10240. This is a little strange in my eyes, because the number of files I'm handling in the directory is under 10240. Even stranger, I still receive the same error after I've increased the number of file descriptors.
Second question:
After a number of searches I found a workaround for the "too many open files" problem:
var fs = require('fs');

var requestBatches = {};
function batchingReadFile(filename, callback) {
    // First check to see if there is already a batch
    if (requestBatches.hasOwnProperty(filename)) {
        requestBatches[filename].push(callback);
        return;
    }

    // Otherwise start a new one and make a real request
    var batch = requestBatches[filename] = [callback];
    fs.readFile(filename, onRealRead);

    // Flush out the batch on complete
    function onRealRead() {
        delete requestBatches[filename];
        for (var i = 0, l = batch.length; i < l; i++) {
            batch[i].apply(null, arguments);
        }
    }
}

function printFile(file){
    console.log(file);
}

var dir = "/Users/xaver/Downloads/xaver/xxx/xxx/"

var files = fs.readdirSync(dir);
for (var i in files){
    var filename = dir + files[i];
    console.log(filename);
    batchingReadFile(filename, printFile);
}
Unfortunately I still receive the same error.
What is wrong with this code?
For when graceful-fs doesn't work, or you just want to understand where the leak is coming from, follow this process.
(e.g. graceful-fs isn't gonna fix your wagon if your issue is with sockets.)
From My Blog Article: http://www.blakerobertson.com/devlog/2014/1/11/how-to-determine-whats-causing-error-connect-emfile-nodejs.html
How To Isolate
This command will output the number of open handles for nodejs processes:
lsof -i -n -P | grep nodejs
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
nodejs 12211 root 1012u IPv4 151317015 0t0 TCP 10.101.42.209:40371->54.236.3.170:80 (ESTABLISHED)
nodejs 12211 root 1013u IPv4 151279902 0t0 TCP 10.101.42.209:43656->54.236.3.172:80 (ESTABLISHED)
nodejs 12211 root 1014u IPv4 151317016 0t0 TCP 10.101.42.209:34450->54.236.3.168:80 (ESTABLISHED)
nodejs 12211 root 1015u IPv4 151289728 0t0 TCP 10.101.42.209:52691->54.236.3.173:80 (ESTABLISHED)
nodejs 12211 root 1016u IPv4 151305607 0t0 TCP 10.101.42.209:47707->54.236.3.172:80 (ESTABLISHED)
nodejs 12211 root 1017u IPv4 151289730 0t0 TCP 10.101.42.209:45423->54.236.3.171:80 (ESTABLISHED)
nodejs 12211 root 1018u IPv4 151289731 0t0 TCP 10.101.42.209:36090->54.236.3.170:80 (ESTABLISHED)
nodejs 12211 root 1019u IPv4 151314874 0t0 TCP 10.101.42.209:49176->54.236.3.172:80 (ESTABLISHED)
nodejs 12211 root 1020u IPv4 151289768 0t0 TCP 10.101.42.209:45427->54.236.3.171:80 (ESTABLISHED)
nodejs 12211 root 1021u IPv4 151289769 0t0 TCP 10.101.42.209:36094->54.236.3.170:80 (ESTABLISHED)
nodejs 12211 root 1022u IPv4 151279903 0t0 TCP 10.101.42.209:43836->54.236.3.171:80 (ESTABLISHED)
nodejs 12211 root 1023u IPv4 151281403 0t0 TCP 10.101.42.209:43930->54.236.3.172:80 (ESTABLISHED)
....
Notice the: 1023u (last line) - that's the 1024th file handle which is the default maximum.
Now, Look at the last column. That indicates which resource is open. You'll probably see a number of lines all with the same resource name. Hopefully, that now tells you where to look in your code for the leak.
If you have multiple node processes running, first look up which process has pid 12211; that'll tell you the process.
In my case above, I noticed that there were a bunch of very similar IP addresses. They were all 54.236.3.###. By doing IP address lookups, I was able to determine that in my case it was pubnub related.
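To identify which command is behind a pid from the listing (12211 above), something like the following works on most Linux systems:

ps -p 12211 -o pid,comm,args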
Command Reference
Use this syntax to determine how many open handles a process has open...
To get a count of open files for a certain pid
I used this command to test the number of files that were opened after doing various events in my app.
lsof -i -n -P | grep "8465" | wc -l
# lsof -i -n -P | grep "nodejs.*8465" | wc -l
28
# lsof -i -n -P | grep "nodejs.*8465" | wc -l
31
# lsof -i -n -P | grep "nodejs.*8465" | wc -l
34
What is your process limit?
ulimit -a
The line you want will look like this:
open files (-n) 1024
Permanently change the limit (tested on Ubuntu 14.04, nodejs v7.9):
In case you are expecting to open many connections (websockets is a good example), you can permanently increase the limit:
file: /etc/pam.d/common-session (add to the end)
session required pam_limits.so
file: /etc/security/limits.conf (add to the end, or edit if already exists)
root soft nofile 40000
root hard nofile 100000
Restart your nodejs process and log out/log in from ssh.
This may not work for older NodeJS; you may need to restart the server.
Use your node user instead of root if your node process runs with a different uid.
Using the graceful-fs module by Isaac Schlueter (node.js maintainer) is probably the most appropriate solution. It does incremental back-off if EMFILE is encountered. It can be used as a drop-in replacement for the built-in fs module.
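Since it matches the fs API, the swap is a one-liner; a minimal sketch:

// graceful-fs exposes the same API as fs, but queues and retries on EMFILE
var fs = require('graceful-fs');
fs.readFile('somefile.txt', 'utf8', function (err, data) {
    if (err) throw err;
    console.log(data);
});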
I am not sure whether this will help anyone; I started working on a big project with a lot of dependencies which threw the same error at me. My colleague suggested that I install watchman using brew, and that fixed this problem for me.
brew update
brew install watchman
Edit on 26 June 2019:
Github link to watchman
I did all the stuff mentioned above for the same problem, but nothing worked. I tried the steps below and it worked 100%. Simple config changes.
Option 1: Set limit (It won't work most of the time)
user@ubuntu:~$ ulimit -n 65535
Check the current limit
user@ubuntu:~$ ulimit -n
1024
Option 2: Increase the available limit to e.g. 65535
user@ubuntu:~$ sudo nano /etc/sysctl.conf
Add the following line to it
fs.file-max = 65535
Run this to refresh with new config
user@ubuntu:~$ sudo sysctl -p
Edit the following file
user@ubuntu:~$ sudo vim /etc/security/limits.conf
Add the following lines to it
root soft nproc 65535
root hard nproc 65535
root soft nofile 65535
root hard nofile 65535
Edit the following file
user@ubuntu:~$ sudo vim /etc/pam.d/common-session
Add this line to it
session required pam_limits.so
Logout and login and try the following command
user@ubuntu:~$ ulimit -n
65535
Option 3: Just add this line
DefaultLimitNOFILE=65535
to /etc/systemd/system.conf and /etc/systemd/user.conf
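After editing those files, reload the systemd manager configuration (or simply reboot); on most systemd distributions this is:
sudo systemctl daemon-reexec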
I ran into this problem today, and finding no good solutions for it, I created a module to address it. I was inspired by @fbartho's snippet, but wanted to avoid overwriting the fs module.
The module I wrote is Filequeue, and you use it just like fs:
var Filequeue = require('filequeue');
var fq = new Filequeue(200); // max number of files to open at once

fq.readdir('/Users/xaver/Downloads/xaver/xxx/xxx/', function(err, files) {
    if(err) {
        throw err;
    }
    files.forEach(function(file) {
        fq.readFile('/Users/xaver/Downloads/xaver/xxx/xxx/' + file, function(err, data) {
            // do something here
        });
    });
});
You're reading too many files. Node reads files asynchronously, so it'll be reading all of the files at once; you're probably hitting the 10240 limit.
See if this works:
var fs = require('fs')
var events = require('events')
var util = require('util')
var path = require('path')

var FsPool = module.exports = function(dir) {
    events.EventEmitter.call(this)
    this.dir = dir;
    this.files = [];
    this.active = [];
    this.threads = 1;
    this.on('run', this.runQuota.bind(this))
};
// So it will act like an event emitter
util.inherits(FsPool, events.EventEmitter);

FsPool.prototype.runQuota = function() {
    if(this.files.length === 0 && this.active.length === 0) {
        return this.emit('done');
    }
    if(this.active.length < this.threads) {
        var name = this.files.shift()
        this.active.push(name)
        var fileName = path.join(this.dir, name);
        var self = this;
        fs.stat(fileName, function(err, stats) {
            if(err)
                throw err;
            if(stats.isFile()) {
                fs.readFile(fileName, function(err, data) {
                    if(err)
                        throw err;
                    self.active.splice(self.active.indexOf(name), 1)
                    self.emit('file', name, data);
                    self.emit('run');
                });
            } else {
                self.active.splice(self.active.indexOf(name), 1)
                self.emit('dir', name);
                self.emit('run');
            }
        });
    }
    return this
};
FsPool.prototype.init = function() {
    var dir = this.dir;
    var self = this;
    fs.readdir(dir, function(err, files) {
        if(err)
            throw err;
        self.files = files
        self.emit('run');
    })
    return this
};

var fsPool = new FsPool(__dirname)

fsPool.on('file', function(fileName, fileData) {
    console.log('file name: ' + fileName)
    console.log('file data: ', fileData.toString('utf8'))
})
fsPool.on('dir', function(dirName) {
    console.log('dir name: ' + dirName)
})
fsPool.on('done', function() {
    console.log('done')
});
fsPool.init()
Like all of us, you are another victim of asynchronous I/O. With asynchronous calls, if you loop over a lot of files, Node.js will start to open a file descriptor for each file to read and then wait for action until you close it.
A file descriptor remains open until a resource is available on your server to read it. Even if your files are small and reading or updating is fast, it takes some time, and in the meantime your loop doesn't stop opening new file descriptors. So if you have too many files, the limit will soon be reached and you get a beautiful EMFILE.
There is one solution: creating a queue to avoid this effect.
Thanks to the people who wrote Async, there is a very useful function for that. There is a method called Async.queue: you create a new queue with a limit and then add filenames to the queue.
Note: If you have to open many files, it would be a good idea to store which files are currently open and don't reopen them infinitely.
const fs = require('fs')
const async = require("async")

var q = async.queue(function(task, callback) {
    console.log(task.filename);
    fs.readFile(task.filename, "utf-8", function (err, data_read) {
        callback(err, task.filename, data_read);
    });
}, 4);

var files = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for (var file in files) {
    q.push({filename: file + ".txt"}, function (err, filename, res) {
        console.log(filename + " read");
    });
}
You can see that each file is added to the queue (console.log of the filename), but only when the current queue is under the limit you set previously.
async.queue gets information about the availability of the queue through a callback; this callback is called only when the data file has been read and whatever action you need to do has been achieved (see the readFile callback).
So you cannot be overwhelmed by file descriptors.
> node ./queue.js
0.txt
1.txt
2.txt
0.txt read
3.txt
3.txt read
4.txt
2.txt read
5.txt
4.txt read
6.txt
5.txt read
7.txt
1.txt read (bigger file than the others)
8.txt
6.txt read
9.txt
7.txt read
8.txt read
9.txt read
I just finished writing a little snippet of code to solve this problem myself, all of the other solutions appear way too heavyweight and require you to change your program structure.
This solution just stalls any fs.readFile or fs.writeFile calls so that there are no more than a set number in flight at any given time.
// Queuing reads and writes, so your nodejs script doesn't overwhelm system limits catastrophically
var fs = require('fs');

global.maxFilesInFlight = 100; // Set this value to some number safeish for your system
var origRead = fs.readFile;
var origWrite = fs.writeFile;

var activeCount = 0;
var pending = [];

var wrapCallback = function(cb){
    return function(){
        activeCount--;
        cb.apply(this, Array.prototype.slice.call(arguments));
        if (activeCount < global.maxFilesInFlight && pending.length){
            console.log("Processing Pending read/write");
            pending.shift()();
        }
    };
};

fs.readFile = function(){
    var args = Array.prototype.slice.call(arguments);
    if (activeCount < global.maxFilesInFlight){
        if (args[1] instanceof Function){
            args[1] = wrapCallback(args[1]);
        } else if (args[2] instanceof Function) {
            args[2] = wrapCallback(args[2]);
        }
        activeCount++;
        origRead.apply(fs, args);
    } else {
        console.log("Delaying read:", args[0]);
        pending.push(function(){
            fs.readFile.apply(fs, args);
        });
    }
};

fs.writeFile = function(){
    var args = Array.prototype.slice.call(arguments);
    if (activeCount < global.maxFilesInFlight){
        if (args[1] instanceof Function){
            args[1] = wrapCallback(args[1]);
        } else if (args[2] instanceof Function) {
            args[2] = wrapCallback(args[2]);
        }
        activeCount++;
        origWrite.apply(fs, args);
    } else {
        console.log("Delaying write:", args[0]);
        pending.push(function(){
            fs.writeFile.apply(fs, args);
        });
    }
};
With bagpipe, you just need to change
FS.readFile(filename, onRealRead);
to
var Bagpipe = require('bagpipe');
var bagpipe = new Bagpipe(10);
bagpipe.push(FS.readFile, filename, onRealRead);
Bagpipe helps you limit the parallelism. More details: https://github.com/JacksonTian/bagpipe
Had the same problem when running the nodemon command, so I reduced the number of files open in Sublime Text and the error disappeared.
cwait is a general solution for limiting concurrent executions of any functions that return promises.
In your case the code could be something like:
var Promise = require('bluebird');
var cwait = require('cwait');

// Allow max. 10 concurrent file reads.
var queue = new cwait.TaskQueue(Promise, 10);
var read = queue.wrap(Promise.promisify(batchingReadFile));

Promise.map(files, function(filename) {
    console.log(filename);
    return(read(filename));
})
Building on @blak3r's answer, here's a bit of shorthand I use in case it helps others diagnose:
If you're trying to debug a Node.js script that is running out of file descriptors, here's a line to give you the output of lsof for the node process in question:
const child_process = require('child_process');
const openFiles = child_process.execSync(`lsof -p ${process.pid}`);
This will synchronously run lsof filtered by the current running Node.js process and return the results via buffer.
Then use console.log(openFiles.toString()) to convert the buffer to a string and log the results.
For nodemon users:
Just use the --ignore flag to solve the problem.
Example:
nodemon app.js --ignore node_modules/ --ignore data/
Use the latest fs-extra.
I had that problem on Ubuntu (16 and 18) with plenty of file/socket descriptor space (counted with lsof | wc -l). I was using fs-extra version 8.1.0; after the update to 9.0.0, the "Error: EMFILE, too many open files" vanished.
I've experienced diverse problems with node handling filesystems on diverse operating systems. Filesystems are obviously not trivial.
I solved this by updating watchman
brew install watchman
I tried installing watchman, changing limits, etc., and it didn't work for me with Gulp.
Restarting iterm2 actually helped though.
For anyone that might still be looking for solutions, using async-await worked fine for me:
const fs = require('fs');

fs.readdir(<directory path>, async (err, filenames) => {
    if (err) {
        console.log(err);
    }
    try {
        for (let filename of filenames) {
            const fileContent = await new Promise((resolve, reject) => {
                fs.readFile(<directory path + filename>, 'utf-8', (err, content) => {
                    if (err) {
                        reject(err);
                    }
                    resolve(content);
                });
            });
            ... // do things with fileContent
        }
    } catch (err) {
        console.log(err);
    }
});
Here's my two cents: considering that a CSV file is just lines of text, I've streamed the data (strings) to avoid this problem.
This was the easiest solution that worked in my use case.
It can be used with graceful-fs or standard fs. Just note that there won't be headers in the file when creating it.
// import graceful-fs or normal fs
const fs = require("graceful-fs"); // or use: const fs = require("fs")

// Create output file and set it up to receive streamed data
// Flag is to say "append" so that data can be recursively added to the same file
let fakeCSV = fs.createWriteStream("./output/document.csv", {
    flags: "a",
});
and the data that needs to be streamed to the file is handled like this:
// create custom streamer that can be invoked when needed
const customStreamer = (dataToWrite) => {
    fakeCSV.write(dataToWrite + "\n");
};
Note that dataToWrite is simply a string with a custom separator like ";" or ",".
For example:
const dataToWrite = "batman" + ";" + "superman"
customStreamer(dataToWrite);
This writes "batman;superman" to the file.
Note that there's no error catching or whatsoever in this example.
Docs: https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options
This will probably fix your problem if you're struggling to deploy a React solution that was created with the Visual Studio template (and has a web.config). In Azure Release Pipelines, when selecting the template, use:
Azure App Service deployment
Instead of:
Deploy a Node.js app to Azure App Service
It worked for me!
There's another possibility that hasn't been considered or discussed in any of the answers so far: symbolic link cycles.
Node's recursive filesystem watcher does not appear to detect and handle cycles of symlinks. So you can easily trigger this error with an arbitrarily high nfiles ulimit by simply running:
mkdir a
mkdir a/b
cd a/b
ln -s .. c
GNU find will notice the symlink cycle and abort:
$ find a -follow
a
a/b
find: File system loop detected; 'a/b/c' is part of the same file system loop as 'a'.
but node won't. If you set up a watch on the tree, it'll spew an EMFILE, too many open files error.
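A minimal repro sketch, assuming a platform and Node version where fs.watch supports { recursive: true }:

const fs = require('fs');
// Watching the 'a' tree created above: the symlink cycle keeps the
// recursive watcher descending until file descriptors run out.
fs.watch('a', { recursive: true }, (event, name) => console.log(event, name));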
Amongst other things this can happen in node_modules where there's a containment relationship:
parent/
package.json
child/
package.json
which is how I encountered it in a project I was trying to build.
Note that you don't necessarily need to overcomplicate this issue; trying again works just fine.
import { promises as fs } from "fs";

const filepaths = [];
const errors = [];

function process_file(content: string) {
    // logic here
}

await Promise.all(
    filepaths.map(function read_each(filepath) {
        return fs
            .readFile(filepath, "utf8")
            .then(process_file)
            .catch(function (error) {
                if (error.code === "EMFILE") return read_each(filepath);
                else errors.push({ file: filepath, error });
            });
    }),
);
On Windows, there seems to be no ulimit command to increase the number of open files. graceful-fs maintains a queue to run I/O operations, e.g. read/write file.
However, fs.readFile and fs.writeFile are based on fs.open, so you will need to open/close files manually to solve this error.
import fs from 'fs/promises';
const fd = await fs.open('path-to-file', 'r');
await fd.readFile('utf-8'); // <== read through file handle
await fd.close(); // <== manually close it
I had this issue, and I solved it by running npm update, and it worked.
In some cases you may need to remove node_modules: rm -rf node_modules/
This may happen after changing the Node version.
ERR emfile too many open files
Restart the computer
brew install watchman
That should absolutely fix the issue.
First update your version of expo using expo update, and then run yarn / npm install. This solved the issue for me!
