I am currently developing a custom module for my MagicMirror.
I want this module to execute a Python script.
This Python script fetches data from a web server and creates a .json file with the data in the module folder.
I then want the module to import this data file in JavaScript and display it on screen.
However, I can't get the MagicMirror module to run the Python script.
I have very little JavaScript knowledge, so any help is appreciated.
This is the code I have so far:
Module.register("MMM-Test", {
    defaults: {},

    start: function () {
        var timer = setInterval(() => {
            const spawn = require('child_process').spawn;
            const childPython = spawn('python3', ['./modules/MMM-Test/bussavganger.py']);
            this.updateDom();
        }, 5000);
    },

    getDom: function () {
        var element = document.createElement("div");
        element.className = "myContent";
        element.innerHTML = "Hello, everybody!";
        return element;
    }
})
Currently I am just trying to run the module to see if the .json file is created. It is not.
If I run the Python script separately, the file is created, so I know the .py file is not the problem.
You tried calling the Python script from your module's JS file, but you should instead call it from the node_helper.js file.
So use MagicMirror's socketNotification mechanism to call the node_helper; the node_helper then calls your Python script, you can do something with it, and at the end it sends a socketNotification back to your module's JS file, e.g. the result of your Python program, the exit code, etc.
In your start: function() you could call the node_helper with the following snippet, so that your Python program is started by the module helper right after your module boots up (and on every interval from then on):
var self = this;
setInterval(function () {
    self.sendSocketNotification('DO_PYTHON', <transmit data for your node_helper here>);
    self.updateDom();
}, 5000);
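To consume the helper's reply, your module's JS file can listen for it as well; a minimal sketch to place next to start and getDom inside Module.register ("PYTHON_DONE" matches the notification name used in the node_helper shown below, and pythonResult is just an illustrative property name):
socketNotificationReceived: function (notification, payload) {
    if (notification === "PYTHON_DONE") {
        this.pythonResult = payload; // keep the result for getDom()
        this.updateDom();
    }
},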
Create a node_helper.js file in your module folder with the following:
var NodeHelper = require("node_helper");
const spawn = require("child_process").spawn;

module.exports = NodeHelper.create({
    init() {
    },

    start() {
    },

    stop() {
    },

    // When a notification from the module's JS file is received,
    // the node_helper reacts here:
    socketNotificationReceived(notification, payload) {
        if (notification === "DO_PYTHON") {
            // this.config = payload;
            this.yourOwnMethod();
        } else {
            // ...
        }
    },

    yourOwnMethod() {
        var self = this;
        var process = spawn("python3", ["/absolute/path/to/modules/MMM-Test/bussavganger.py"]);
        // do something else here
        this.sendSocketNotification("PYTHON_DONE", <e. g. exit state as your payload>);
    },
});
You can also read in your .json file with fs in node_helper.js, parse it there, and send it back via sendSocketNotification.
Be sure to include the two lines at the beginning of node_helper.js, and (important!) always use absolute paths.
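A minimal sketch of that read-and-parse step, to be placed inside yourOwnMethod(); the data.json filename is an assumption here, as is listening for the spawned process's 'close' event:
const fs = require("fs");
const path = require("path");

// Inside yourOwnMethod(): wait for the Python process to finish, then
// read the file it wrote (data.json is a hypothetical name) and send
// the parsed contents back to the module.
process.on("close", (code) => {
    const file = path.join(__dirname, "data.json"); // absolute path
    const data = JSON.parse(fs.readFileSync(file, "utf8"));
    self.sendSocketNotification("PYTHON_DONE", data);
});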
I'm building a project where the backend consists of 3 parts:
message-process.js - Get new messages every 1 sec => process the data => add new item to the DB
events-process.js - Listen to the DB until there is a new item in DB => process the item => might add a new item to DB
stats-process.js - Listen to the DB until there is a new item in DB => process the item => might add a new item to DB
Is there any performance difference if I run each file like this:
node message-process.js
node events-process.js
node stats-process.js
VS
Exporting the main function from each file and then building an index.js file:
const main-func1 = require('./message-process.js');
const main-func2 = require('./events-process.js');
const main-func3 = require('./stats-process.js');

console.log('starting main func1...');
main-func1();
console.log('starting main func2...');
main-func2();
console.log('starting main func3...');
main-func3();
Then running the single index.js file:
node index.js
Is there any decline in performance with one of the options? Is there any 'best practice'?
Thanks
First of all, don't use - in names; use camelCase instead.
You wrote structured synchronous code, so it will execute func1, then wait until it finishes its execution, then execute func2, and so on... it will not execute them at the same time. If you want to do that, you must use Node workers.
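A minimal sketch of that worker approach, using Node's built-in worker_threads module (assuming each of the three files simply starts its own loop when loaded):
// index.js – start each backend part in its own worker thread
const { Worker } = require('worker_threads');
const path = require('path');

['message-process.js', 'events-process.js', 'stats-process.js'].forEach((file) => {
    const worker = new Worker(path.resolve(__dirname, file));
    worker.on('error', (err) => console.error(`${file} failed:`, err));
    worker.on('exit', (code) => console.log(`${file} exited with code ${code}`));
});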
We are building an Electron app that allows users to supply their own 'modules' to run. We are looking for a way to require the modules but then delete or kill them if need be.
We have looked at a few tutorials that seem to discuss this topic, but we can't get the modules to fully terminate. We explored this by using timers inside the modules and can observe the timers still running even after the module reference is deleted.
https://repl.it/repls/QuerulousSorrowfulQuery
index.js
// Load module
let Mod = require('./mod.js');

// Call the module function (which starts a setInterval)
Mod();

// Delete the module after 3 seconds
setTimeout(function () {
    Mod = null;
    delete Mod;
    console.log('Deleted!');
}, 3000);
./mod.js
function Mod() {
    setInterval(function () {
        console.log('Mod log');
    }, 1000);
}

module.exports = Mod;
Expected output
Mod log
Mod log
Deleted!
Actual output
Mod log
Mod log
Deleted!
Mod log
...
(continues to log 'Mod log' indefinitely)
Maybe we are overthinking it, and maybe the modules won't be memory hogs, but the modules we load will have very intensive workloads, so having the ability to stop them at will seems important.
Edit with real use-case
This is how we are currently using this technique. The two issues are loading the module in the proper fashion and unloading the module after it is done.
renderer.js (runs in a browser context with access to document, etc)
// A webview object essentially gives us control over a webpage,
// similar to how one can control an iframe in a regular browser.
const webview = document.getElementById('webview');
const url = 'https://ourserver.com/module.js';
let mod;

request({
    method: 'get',
    url: url,
}, function (err, httpResponse, body) {
    if (!err) {
        mod = requireFromString(body, url); // Module is loaded
        mod(webview); // Module is run
        // ...
        // Some time later, the module needs to be 'unloaded'.
        // We are currently 'unloading' it by dereferencing the 'mod'
        // variable, but as mentioned above, this doesn't really work.
        // So we would like a way to wipe the module, its timers, etc.,
        // and free up any memory or resources it was using!
        mod = null;
        delete mod;
    }
})

function requireFromString(src, filename) {
    var Module = module.constructor;
    var m = new Module();
    m._compile(src, filename);
    return m.exports;
}
https://ourserver.com/module.js
// This code module will only have access to node modules that are
// packaged with our app, but that is OK for now!
let _ = require('lodash');

let obj = {
    key: 'value'
};

async function main(webview) {
    console.log(_.get(obj, 'key')); // prints 'value'
    webview.loadURL('https://google.com'); // loads Google in the web browser
}

module.exports = main;
Just in case anyone reading is not familiar with Electron: renderer.js has access to 'webview' elements, which are almost identical to iframes. This is why passing one to module.js allows the module to manipulate the webpage, such as changing the URL, clicking buttons on that page, etc.
There is no way to kill a module and stop or close any resources that it is using. That's just not a feature of node.js. Such a module could have timers, open files, open sockets, running servers, etc. In addition, node.js does not provide a means of "unloading" code that was once loaded.
You can remove a module from the module cache, but that doesn't affect the existing, already loaded code or its resources.
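For illustration, removing a module from the cache only changes what future require() calls return; it does not stop anything the old copy already started:
// Drop the cached copy so the next require() re-reads the file.
delete require.cache[require.resolve('./mod.js')];
let FreshMod = require('./mod.js'); // a fresh module object

// ...but any setInterval started by the old copy keeps firing until
// the process itself exits.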
The only foolproof way I know of would be to load the user's module in a separate node.js app run as a child process; you can then exit or kill that process, and the OS will reclaim any resources it was using and unload everything from memory. This child-process scheme also has the advantage that the user's code is more isolated from your main server code. You could isolate it even further by running that other process in a VM if you wanted to.
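A minimal sketch of that child-process scheme, assuming the user's module is wrapped in a small runner script (runner.js here is hypothetical):
const { fork } = require('child_process');

// Run the untrusted module in its own Node.js process.
const child = fork('./runner.js');

// The child can report results back over the built-in IPC channel.
child.on('message', (msg) => console.log('from module:', msg));

// When the module needs to be 'unloaded', kill the process; the OS
// then reclaims all of its timers, sockets, and memory in one go.
setTimeout(() => child.kill('SIGTERM'), 3000);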
I do not have much knowledge about JavaScript. I have written a shared library in C++ that does certain things in a daemon thread. I needed this to be invoked from JavaScript. Using SWIG, I've successfully been able to generate a wrapper and compile my code along with it into a .node module using node-gyp (I wrote the binding.gyp for it too). Now I can drop to a node prompt and do something like:
> var a = require("./module_name")
> a.SomeCppFunction("SomeString")
and wonderfully invoke the C++ functions, start a detached thread there, and return control back to JavaScript. However, I want to notify JavaScript from the detached C++ thread about things. I tried registering JavaScript functions by collecting function() {} signature types in void(*fp)() etc. to call them back later from C++, but that didn't work. Is there any way to achieve this, i.e., register JavaScript functions (or something else) as callbacks in the C++ code?
You can use a combination of SWIG and Napi. An existing repo which does this is available here, with a blog post here. But I'll sum up the process here.
Create your class to use in SWIG, which has a thread running in the threadMain method:
#include <Thread.H>

class Test : public ThreadedMethod {
    void *threadMain(void);
public:
    Test();
    void setFnPointer(const char* s);
};
Now, in the Napi code, you will generate your thread-safe function like so:
Napi::ThreadSafeFunction tsfn; ///< The node API's thread-safe function

Napi::Value Start(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();
    // Create a ThreadSafeFunction
    tsfn = Napi::ThreadSafeFunction::New(env,
        info[0].As<Napi::Function>(), // JavaScript function to call
        "Resource Name", 0, 1);
    // Return the tsfn as a pointer in a string
    char addr[24];
    sprintf(addr, "%p", &tsfn);
    return Napi::String::New(env, addr);
}
// some small code to call NODE_API_MODULE here, check the file NapiCode.C in the repo
You compile the SWIG code to one module and the Napi code to a different module, then you pass the thread-safe function pointer from one to the other like so:
var libNapiNodejs = require('../swig/.libs/libNapiNodejs');
let fp = libNapiNodejs.start(function () {
    console.log("JavaScript callback called with arguments", Array.from(arguments));
}, 5);

// SWIG: get our C++ class and thread running
var libSwigCNodejs = require('../swig/.libs/libSwigCNodejs');
let test = new libSwigCNodejs.Test;
test.setFnPointer(fp); // tell SWIG the callback function pointer to execute
test.run(); // run the C++ thread in the SWIG module
You will see that the C++ thread calls the JavaScript function. This is what the C++ thread looks like in the SWIG module:
Napi::ThreadSafeFunction *tsfn; ///< The node API's thread-safe function

void *Test::threadMain(void) {
    printf("C++ Thread enter %s\n", __func__);
    auto callback = [](Napi::Env env, Napi::Function jsCallback, int* value) {
        jsCallback.Call({Napi::Number::New(env, *value)});
    };
    for (int i = 0; i < 10; i++) {
        sleep(1);
        if (*tsfn) {
            printf("calling tsfn->BlockingCall\n");
            napi_status status = tsfn->BlockingCall(&i, callback);
            if (status != napi_ok) // Handle error
                break;
        }
    }
    tsfn->Release();
    printf("C++ Thread exit %s\n", __func__);
    return NULL;
}
I need a function in Node.js
result = execSync('node -v');
that will synchronously execute the given command line and return all the text that command writes to stdout.
P.S. Sync is wrong, I know. Just for personal use.
UPDATE
Now we have mgutz's solution, which gives us the exit code, but not stdout! Still waiting for a more precise answer.
UPDATE
mgutz updated his answer and the solution is here :)
Also, as dgo.a mentioned, there is the stand-alone module exec-sync.
UPDATE 2014-07-30
The ShellJS lib arrived. Consider it the best choice for now.
UPDATE 2015-02-10
AT LAST! Node.js 0.12 supports execSync natively.
See the official docs.
Node.js (since version 0.12, so for a while now) supports execSync:
child_process.execSync(command[, options])
You can now directly do this:
const execSync = require('child_process').execSync;
const output = execSync('node -v');
and it'll do what you expect. (Defaults to piping the I/O results to the parent process.) Note that you can also spawnSync now.
See the execSync docs.
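For comparison, a minimal sketch of spawnSync, which takes the command and its arguments separately and returns an object with the exit status and the captured output:
const { spawnSync } = require('child_process');

// encoding: 'utf8' makes stdout/stderr strings instead of Buffers.
const result = spawnSync('node', ['-v'], { encoding: 'utf8' });
console.log(result.status);        // exit code, e.g. 0
console.log(result.stdout.trim()); // e.g. 'v0.12.0'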
It's fairly easy to do with node-ffi. I wouldn't recommend it for server processes, but for general development utilities it gets things done. Install the library:
npm install node-ffi
Example script:
var FFI = require("node-ffi");

var libc = new FFI.Library(null, {
    "system": ["int32", ["string"]]
});

var run = libc.system;
run("echo $USER");
[EDIT Jun 2012: How to get STDOUT]
var lib = ffi.Library(null, {
    // FILE* popen(char* cmd, char* mode);
    popen: ['pointer', ['string', 'string']],
    // void pclose(FILE* fp);
    pclose: ['void', ['pointer']],
    // char* fgets(char* buff, int size, FILE* fp);
    fgets: ['string', ['string', 'int', 'pointer']]
});

function execSync(cmd) {
    var
        buffer = new Buffer(1024),
        result = "",
        fp = lib.popen(cmd, 'r');

    if (!fp) throw new Error('execSync error: ' + cmd);

    while (lib.fgets(buffer, 1024, fp)) {
        result += buffer.readCString();
    }
    lib.pclose(fp);

    return result;
}

console.log(execSync('echo $HOME'));
Use the ShellJS module's exec function without providing a callback.
Example:
var version = exec('node -v').output;
There's an excellent module for flow control in Node.js called asyncblock. If wrapping the code in a function is OK for your case, the following sample may be considered:
var asyncblock = require('asyncblock');
var exec = require('child_process').exec;

asyncblock(function (flow) {
    exec('node -v', flow.add());
    var result = flow.wait();
    console.log(result); // There'll be a trailing \n in the output

    // Some other jobs
    console.log('More results like if it were sync...');
});
The native Node.js solution is:
const { execSync } = require('child_process');
const result = execSync('node -v'); // 👈 this does the trick
Just be aware that execSync returns a Buffer instead of a string by default. If you need a string, just add an encoding to the execSync options:
const result = execSync('git rev-parse HEAD', { encoding: 'utf8' });
...and it is also good to have a timeout on a sync exec:
const result = execSync('git rev-parse HEAD', { encoding: 'utf8', timeout: 10000 });
This is not possible in Node.js; both child_process.spawn and child_process.exec were built from the ground up to be async.
For details see: https://github.com/ry/node/blob/master/lib/child_process.js
If you really want to have this blocking, then put everything that needs to happen afterwards in a callback, or build your own queue to handle this in a blocking fashion. I suppose you could use Async.js for that task.
Or, in case you have way too much time to spend, hack around in Node.js itself.
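For illustration, the callback approach described above looks like this with the async exec:
var exec = require('child_process').exec;

exec('node -v', function (err, stdout, stderr) {
    if (err) throw err;
    // Everything that depends on the command's output goes here.
    console.log('node version:', stdout.trim());
});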
This is the easiest way I found:
exec-sync:
https://github.com/jeremyfa/node-exec-sync
(Not to be confused with execSync.)
Execute a shell command synchronously. Use this for migration scripts, CLI programs, but not for regular server code.
Example:
var execSync = require('exec-sync');
var user = execSync('echo $USER');
console.log(user);
Just to add that, even though there are few use cases where you should use them, spawnSync / execFileSync / execSync were added to Node.js in these commits: https://github.com/joyent/node/compare/d58c206862dc...e8df2676748e
You can achieve this using fibers. For example, using my Common Node library, the code would look like this:
result = require('subprocess').command('node -v');
My way for the last 5 years has been to have these 2 lines:
const { execSync } = require('child_process');
const shell = (cmd) => execSync(cmd, {encoding: 'utf8'});
Then enjoy:
shell('git remote -v')
or
out = shell('ls -l')
...and so on.
I got used to implementing "synchronous" stuff at the end of the callback function. It's not very nice, but it works. If you need to implement a sequence of command-line executions, you need to wrap exec in some named function and call it recursively.
This pattern seems to be usable for me:
var child_proc = require('child_process');

SeqOfExec(someParam);

function SeqOfExec(somepParam) {
    // some stuff
    // .....
    // .....

    var execStr = "yourExecString";
    child_proc.exec(execStr, function (error, stdout, stderr) {
        if (error != null) {
            if (stdout) {
                throw Error("Something went wrong: " + error);
            } else {
                // consider that empty stdout causes
                // creation of the error object
            }
        }

        // some stuff
        // .....
        // .....

        // you also need some flag which will signal that you
        // need to end the loop
        if (someFlag) {
            // your synch stuff after all execs
            // here
            // .....
        } else {
            SeqOfExec(someAnotherParam);
        }
    });
};
I had a similar problem, and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff!
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ to execute shell commands one by one, outputting the command's output to the console in real-time. Optional chained and unchained modes are present, meaning that you can choose to stop the script after a command fails (chained), or you can continue as if nothing had happened!
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
EDIT: At first it didn't return the stdout (it just printed it in real-time), but it does now; I just released an update today. Maybe we can build on it.
Anyway, I thought it was worth mentioning.
You can do synchronous shell operations in Node.js like so:
var execSync = function (cmd) {
    var exec = require('child_process').exec;
    var fs = require('fs');

    //for linux use ; instead of &&
    //execute your command followed by a simple echo
    //to file to indicate process is finished
    exec(cmd + " > c:\\stdout.txt && echo done > c:\\sync.txt");

    while (true) {
        //consider a timeout option to prevent infinite loop
        //NOTE: this will max out your cpu too!
        try {
            var status = fs.readFileSync('c:\\sync.txt', 'utf8');
            if (status.trim() == "done") {
                var res = fs.readFileSync("c:\\stdout.txt", 'utf8');
                fs.unlinkSync("c:\\stdout.txt"); //cleanup temp files
                fs.unlinkSync("c:\\sync.txt");
                return res;
            }
        } catch (e) { } //readFileSync will fail until file exists
    }
};

//won't return anything, but will take 10 seconds to run
console.log(execSync("sleep 10"));

//assuming there are a lot of files and subdirectories,
//this too may take a while, use your own applicable file path
console.log(execSync("dir /s c:\\usr\\docs\\"));
EDIT: this example is meant for Windows environments; adjust for your own Linux needs if necessary.
I actually had a situation where I needed to run multiple commands one after another from a package.json preinstall script in a way that would work on both Windows and Linux/OS X, so I couldn't rely on a non-core module.
So this is what I came up with:
#cmds.coffee
childproc = require 'child_process'

exports.exec = (cmds) ->
  next = ->
    if cmds.length > 0
      cmd = cmds.shift()
      console.log "Running command: #{cmd}"
      childproc.exec cmd, (err, stdout, stderr) ->
        if err? then console.log err
        if stdout? then console.log stdout
        if stderr? then console.log stderr
        next()
    else
      console.log "Done executing commands."
  console.log "Running the following commands:"
  console.log cmds
  next()
You can use it like this:
require('./cmds').exec ['grunt coffee', 'nodeunit test/tls-config.js']
EDIT: as pointed out, this doesn't actually return the output or allow you to use the result of the commands in a Node program. One other idea for that is to use LiveScript backcalls: http://livescript.net/