JavaScript: send a string to a Python script

I have an (html/js) application running on my localhost. I'd like to send some information (string) to a python script running in the background, which will just print the string, or save it as a .txt-file.
It seems websockets would do the job, but I can't get my head around this (in my eyes a simple problem..?). All the examples and libraries I find are aimed at specific use cases or have been deprecated in the meantime. Also, maybe someone can point me to another approach, like REST?
I'm not really into web/IP/networking things, but I need this so the web page can initiate some Python programs.
Any tips on how to achieve this?

I am doing something similar to have a web server (nodejs) control my Raspberry Pi (python).
I suggest you simply spawn your Python script from your JS server and have them communicate via stdin/stdout.
For example with nodejs:
var spawn = require('child_process').spawn;
var child = spawn('python3', ['./py/pi-ctrl.py']);
var child_emit = function (message) {
    child.stdin.write(message + "\n");
};
Then your JS can just 'emit' anything to your Python script, which listens on stdin:
while True:
    line = input().split()
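Fleshed out slightly, the Python side could look like this (a sketch; the output file name and the "print or save" handling are just assumptions based on the question):

```python
import sys

def handle(line, out_path="received.txt"):
    """Print the received string and append it to a text file."""
    line = line.strip()
    print("received:", line)
    with open(out_path, "a") as f:
        f.write(line + "\n")
    return line

def main():
    # Read messages from stdin, one per line, until the parent closes the pipe.
    for raw in sys.stdin:
        handle(raw)
```

Calling `main()` starts the listener; each `child.stdin.write(...)` on the Node side then arrives as one line here.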

Related

How would I control/access ubuntu server services via a node.js web admin panel?

I don't know where to start with this question!
Basically, I would like to build a control panel, web based, using node.js as the back end. This would run on my raspberry pi running Ubuntu Server.
The idea is that I can gain statistics (CPU, Temperature, Disk Space etc) and set up basic features like MongoDB database, hosting etc.
Now, this is obviously possible, just like the many web panels out there; the question is whether it's possible with node.js.
I suppose the first question would be: can I start/stop services (reboot the server, start MongoDB etc.) via Node.js? And secondly, can I get info from this to display in my web panel?
I tried Google but, for the first time, I don't even know what question to ask :)
I found Node.js examples of running command-line commands; however, when passing in a simple command like "node -v" it crashed, so I am not sure this is the method used by other commercial server web panels.
Any advice would be great :)
Thanks
You should try this tutorial: https://stackabuse.com/executing-shell-commands-with-node-js/
From node doc for child_process:
https://nodejs.org/api/child_process.html
const { exec } = require('child_process');
exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.
exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.
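For comparison, the same shell-execution pattern in Python uses the standard subprocess module (a sketch; the echo command is only a placeholder for a real service command):

```python
import subprocess

def run_command(args):
    """Run a command as an argument list (no shell quoting pitfalls),
    capture its output, and raise on a non-zero exit code."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(run_command(["echo", "hello"]))
```

Passing the command as a list avoids the quoting issues the Node docs warn about above, since no shell parses the arguments.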

How to run CasperJS script without using PHP exec or shell_exec

I have a CasperJS script in which its results needs to be captured in PHP. For that I had to use PHP's exec() or shell_exec() functions. But recently I got to know that enabling command line execution on server is risky and not safe. So how am I supposed to run my CasperJS script without using either of those functions in PHP?
PS:
To be more precise: how can I use CasperJS on the web, e.g. processing a web form with PHP and returning output derived from CasperJS, without touching exec or shell_exec to execute it?
CasperJS is built on top of PhantomJS (or SlimerJS). It can use all the features PhantomJS provides which includes the Web Server Module. The idea would be to run a single CasperJS instance which your PHP script can query through HTTP.
You can start a CasperJS script at system startup or through a cron job (and restarting when it crashes). You can then query it through local http requests.
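The underlying pattern (one long-running process answering queries over local HTTP) can be sketched with Python's standard library; the handler body here is an illustrative stand-in for the real CasperJS work, and the JSON payload is made up:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In the CasperJS setup, this is where the long-running process
        # would do its real work before answering.
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def query(port, path="/"):
    """What the PHP side does with file_get_contents(): a plain local GET."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}{path}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 picks a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(query(server.server_address[1]))
    server.shutdown()
```

The point is the shape of the setup, not the language: one persistent worker, queried by short-lived scripts over localhost.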
CasperJS script:
var webserver = require('webserver');
var server = webserver.create();
var service = server.listen(8080, function(request, response) {
var casper = require('casper').create({
exitOnError: false,
onError: function(msg, backtrace){
response.statusCode = 500;
response.write('ERROR: ' + msg + "\n" + JSON.stringify(backtrace));
response.close();
}
});
casper.start(yourURL, function(){
// TODO: do something
response.statusCode = 200;
response.write('something');
response.close();
}).run(function(){
// this function is necessary to prevent exiting the whole script
});
});
And in PHP you can then use something like file_get_contents() to retrieve the response:
$result = file_get_contents("http://localhost:8080/");
Things to look out for:
Configure your machine in such a way that the port PhantomJS is running on is not accessible from outside.
If you're using a cron job approach, write a pid file to make sure not to start another instance.
The web server module only supports 10 concurrent requests. If your system exceeds those, you will need to create a pool of multiple CasperJS (PhantomJS) processes.
The pages of a single CasperJS (PhantomJS) process all share the same session just like in any normal browser. If you want to isolate them from one another, then you need to run a CasperJS (PhantomJS) process for every request.
You do the usual little square dance.
Run your PHP thing, capturing input and producing a job configuration.
Put that job in your DB; get it out with cron.
Process it with whatever, and put the result in a different DB table or in the filesystem.
Mark the job as 'done', so your user-facing PHP can poll for that status periodically and present the user with the end result when it's done.
If you're doing this because you know what attack vector an exec() exposes in your app, and can't live with that -- that's alright.
But if you're doing this because you're scared of "not even sure what", then don't. You'll make it worse.
Good luck.
:)

Proper way to monitor/control a server remotely over http in realtime

On my client (a phone with a browser) I want to see the stats of the server CPU,RAM & HDD and gather info from various logs.
I'm using ajax polling.
On the client every 5 sec (setInterval) I call a PHP file:
scan a folder containing N logs
read the last line of each log
convert that to JSON
Problems:
Open new connection every 5 sec.
Multiple AJAX calls.
Request headers (they are also data and so consume bandwidth)
Response headers (^)
Use PHP to read files every 5 sec. even if nothing changed.
The final JSON data is less than 5 KB, but I send it every 5 sec, and there are the headers and a new connection every time; so basically, every 5 sec I have to send 5-10 KB of overhead to get 5 KB of data, which makes 10-20 KB.
That's 60 sec / 5 sec = 12 new connections per minute and about 15 MB per hour of traffic if I leave the app open.
Lets say I have 100 users that I let monitor / control my server that would be around 1.5 GB outgoing traffic in one hour.
Not to mention that the PHP server is reading multiple files 100 times every 5 sec.
I need something that on the server reads the last lines of those logs every 5 sec and maybe writes them to a file, then I want to push this data to the client only if it's changed.
SSE (server sent events) with PHP
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
while (true) {
    echo "id: ".time()."\ndata: ".ReadTheLogs()."\n\n";
    ob_flush();
    flush();
    sleep(1);
}
In this case, after the connection is established with the first user,
the connection stays open (PHP is not made for that), and so I save some bandwidth (request headers, response headers). This works on my server, but most servers don't allow keeping the connection open for a long time.
Also, with multiple users I read the log multiple times (slowing down my old server).
And I can't control the server... I would need to use AJAX to send a command...
I need WebSockets!!!
node.js and websockets
Using node.js, from what I understand, I can do all this without consuming a lot
of resources and bandwidth. The connection stays open, so no unnecessary headers; I can receive and send data; and it handles multiple users very well.
And this is where I need your help.
Should the node.js server update and store the logs data in the background every 5 sec if the files are modified? Or should the operating system do that (iwatch, dnotify...)?
The data should be pushed only if it changed.
The reading of the logs should happen only once every 5 sec, so not triggered by each user.
This is the first example I have found... and modified:
var ws = require("nodejs-websocket");
var server = ws.createServer(function (conn) {
    var data = read(whereToStoreTheLogs);
    conn.sendText(data); // send the logs data to the user
                         // on first connection.
    setTimeout(checkLogs, 5000);
    /*
    here i need to continuously check if the logs have changed.
    but if i use setInterval(checkLogs, 5000) or setTimeout,
    every user invokes a new timer, and so we end up with lots of timers on the server.
    can i do that in the background?
    */
    conn.on("text", function (str) {
        doStuff(str); // various commands to control the server.
    });
    conn.on("close", function (code, reason) {
        console.log("Connection closed");
    });
}).listen(8001);
var checkLogs = function () {
    var data = read(whereToStoreTheLogs);
    if (data != oldData) {
        conn.sendText(data);
    }
    setTimeout(checkLogs, 5000);
};
The above script would be the notification server, but I also need to find a solution to store the info from those multiple logs somewhere, and to do that every time something changes, in the background.
How would you keep the bandwidth low, but also the server resources?
How would you do it?
EDIT
Btw, is there a way to stream this data simultaneously to all the clients?
EDIT
About the logs: I also want to be able to scale the time between updates... I mean, if I read the logs of ffmpeg I need an update every second if possible, but when no conversion is active I only need the basic machine info every 5 min maybe... and so on.
GOALS:
1. A performant way to read & store the logs data somewhere (only if clients are connected... [MySQL, a file; is it possible to store this info in RAM (with node.js?)]).
2. A performant way to stream the data to the various clients (simultaneously).
3. Be able to send commands to the server (bidirectional).
4. Use web languages (js, php...) and Linux commands (something that is easy to implement on multiple machines), and free software if needed.
The best approach would be:
read the logs, based on current activity, into system memory, and stream them simultaneously and continuously, over an already open connection, to the various clients with WebSockets.
I don't know anything that could be faster.
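The pattern described in the goals, one shared poller that pushes only on change, can be sketched independently of any websocket library (Python here purely as an illustration; `read_logs` and the client objects are hypothetical):

```python
class Broadcaster:
    """One shared poller: reads the data once per tick and pushes it to
    every connected client, but only when it has changed."""

    def __init__(self, read_logs):
        self.read_logs = read_logs   # callable returning the current log text
        self.clients = []            # anything with a .send(text) method
        self.last = None

    def tick(self):
        """Called by a single timer, no matter how many clients exist.
        Returns True if something was broadcast on this tick."""
        data = self.read_logs()
        # Skip when nothing changed, or when nobody is listening
        # (so `last` stays unset and a new client still triggers a push).
        if data == self.last or not self.clients:
            return False
        self.last = data
        for client in self.clients:
            client.send(data)
        return True
```

The key design point is that the timer belongs to the server, not to the connection handler, so N clients still cost one log read per tick.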
UPDATE
The node.js server is up and running, using the http://einaros.github.io/ws/ WebSocket server implementation, as it appears to be the fastest one.
I wrote, with the help of @HeadCode, the following code to handle the clients properly & to keep the load as low as possible, checking various things inside the broadcast loop. Now the pushing & the client handling are at a good point.
var wss = new (require('ws').Server)({port: 8080}),
    isBusy,
    logs,
    clients,
    i,
    checkLogs = function () {
        if (wss.clients && (clients = wss.clients.length)) {
            isBusy || (logs = readLogs() /*, isBusy = true */);
            if (logs) {
                i = 0;
                while (i < clients) {
                    wss.clients[i++].send(logs);
                }
            }
        }
    };
setInterval(checkLogs, 2000);
But at the moment I'm using a really bad way to parse the logs (nodejs -> httpRequest -> php)... lol. After some googling I found out that I could stream the output of Linux software directly to the nodejs app. I haven't checked it yet, but maybe that would be the best way to do it. node.js also has a filesystem API where I could read the logs, and Linux has its own filesystem API.
The readLogs() function (which can be async) is still something I'm not happy with:
nodejs filesystem?
Linux software -> nodejs output implementation?
Linux filesystem API?
Keep in mind that I need to scan various folders for logs and then somehow parse the outputted data, and this every 2 seconds.
PS: I added isBusy to the server variables in case the log-reading system is async.
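For the folder-scanning part, one possible approach (a Python sketch of the idea; the file extensions and the "last line only" choice are assumptions) is to track each file's mtime between scans and only re-read files that actually changed:

```python
import os

def last_line(path):
    """Return the final line of a file without reading the whole file."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - 4096))   # only read the tail of the file
        lines = f.read().splitlines()
        return lines[-1].decode() if lines else ""

def changed_logs(folder, mtimes):
    """Yield (path, last line) for every log file modified since the
    previous scan. `mtimes` is a dict the caller keeps between scans."""
    for name in sorted(os.listdir(folder)):
        if not name.endswith((".log", ".txt")):
            continue
        path = os.path.join(folder, name)
        mtime = os.path.getmtime(path)
        if mtimes.get(path) != mtime:
            mtimes[path] = mtime
            yield path, last_line(path)
```

Running this from a single 2-second timer means unchanged files cost only a stat call, not a read.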
EDIT
The answer is not complete.
Missing:
A performant way to read, parse and store the logs somewhere (Linux filesystem API, or nodejs API, so that I store directly into system memory).
An explanation of whether it's possible to stream data directly to multiple users; apparently nodejs loops through the clients and so (I think) sends the data multiple times.
Btw, is it possible/worth it to close the node server if there are no clients, and restart it on new connections on the Apache side? (E.g. if I connect to the Apache-hosted HTML file, a script launches the nodejs server again.) Would doing so further reduce the memory leaking?
EDIT
After some experimenting with WebSockets (some videos are in the comments) I learned some new stuff. The Raspberry Pi has the possibility to use some CPU DMA channels to do high-frequency stuff like PWM... I need to somehow understand how that works.
When using sensors and stuff like that, should I store everything inside RAM? Does nodejs already do that (in a variable inside the script)?
WebSocket remains the best choice, as it's basically easily accessible from any device now, simply by using a browser.
I haven't used nodejs-websocket, but it looks like it will accept an http connection and do the upgrade as well as creating the server. If all you care about receiving is text/json then I suppose that would be fine, but it seems to me you might want to serve a web page along with it.
Here is a way to use express and socket.io to achieve what you're asking about:
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.use(express.static(__dirname + '/'));
app.get('/', function(req, res){
    res.sendfile('index.html');
});
io.on('connection', function(socket){
    // This is where we should emit the cached values of everything
    // that has been collected so far so this user doesn't have to
    // wait for a changed value on the monitored host to see
    // what is going on.
    // This code is based on something I wrote for myself so it's not
    // going to do anything for you as is. You'll have to implement
    // your own caching mechanism.
    for (var stat in cache) {
        if (cache.hasOwnProperty(stat)) {
            socket.emit('data', JSON.stringify(cache[stat]));
        }
    }
});
http.listen(3000, function(){
    console.log('listening on *:3000');
});
(function checkLogs(){
    var data = read(whereToStoreTheLogs);
    if (data != oldData) {
        io.emit('data', data); // named event, matching the client's socket.on('data', ...)
    }
    setTimeout(checkLogs, 5000);
})();
Of course, the checkLogs function has to be fleshed out by you. I have only cut and pasted it in here for context. The call to the emit function of the io object will send the message out to all connected users but the checkLogs function will only fire once (and then keep calling itself), not every time someone connects.
In your index.html page you can have something like this. It should be included in the html page at the bottom, just before the closing body tag.
<script src="/path/to/socket.io.js"></script>
<script>
// Set up the websocket for receiving updates from the server
var socket = io();
socket.on('data', function(msg){
// Do something with your message here, such as using javascript
// to display it in an appropriate spot on the page.
document.getElementById("content").innerHTML = msg;
});
</script>
By the way, check out the Nodejs documentation for a variety of built-in methods for checking system resources (https://nodejs.org/api/os.html).
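If the collector ends up on the Python side instead, the standard library has comparable calls (a sketch; note that os.getloadavg() is Unix-only, which is why it is guarded here):

```python
import os
import shutil

def system_stats(path="/"):
    """Collect a few basic machine stats using only the standard library."""
    total, used, free = shutil.disk_usage(path)
    stats = {
        "cpu_count": os.cpu_count(),
        "disk_total_gb": round(total / 1024**3, 1),
        "disk_free_gb": round(free / 1024**3, 1),
    }
    if hasattr(os, "getloadavg"):  # not available on Windows
        stats["load_1m"] = os.getloadavg()[0]
    return stats

if __name__ == "__main__":
    print(system_stats())
```

For temperature and other hardware sensors you would still need platform-specific sources (e.g. files under /sys on Linux).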
Here's also a solution more in keeping with what it appears you want. Use this for your html page:
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<title>WS example</title>
</head>
<body>
<script>
var connection;
window.addEventListener("load", function () {
connection = new WebSocket("ws://"+window.location.hostname+":8001")
connection.onopen = function () {
console.log("Connection opened")
}
connection.onclose = function () {
console.log("Connection closed")
}
connection.onerror = function () {
console.error("Connection error")
}
connection.onmessage = function (event) {
var div = document.createElement("div")
div.textContent = event.data
document.body.appendChild(div)
}
});
</script>
</body>
</html>
And use this as your web socket server code, recently tweaked to use the 'tail' module (as found in this post: How to do `tail -f logfile.txt`-like processing in node.js?), which you will have to install using npm (Note: tail makes use of fs.watch, which is not guaranteed to work the same everywhere):
var ws = require("nodejs-websocket")
var os = require('os');
var Tail = require('tail').Tail;
var tail = new Tail('./testlog.txt');
var server = ws.createServer(function (conn) {
conn.on("text", function (str) {
console.log("Received " + str);
});
conn.on("close", function (code, reason) {
console.log("Connection closed");
});
}).listen(8001);
setInterval(function(){ checkLoad(); }, 5000);
function broadcast(mesg) {
server.connections.forEach(function (conn) {
conn.sendText(mesg)
})
}
var load = '';
function checkLoad(){
    var new_load = os.loadavg().toString();
    if (new_load === load){
        // unchanged since the last check; don't rebroadcast
        return;
    }
    load = new_load;
    broadcast(load);
}
tail.on("line", function(data) {
broadcast(data);
});
Obviously this is very basic and you will have to change it for your needs.
I made a similar implementation recently using Munin. Munin is a wonderful server-monitoring tool, open source too, which also provides a REST API. There are several plugins available for your needs: monitoring the CPU, HDD and RAM usage of your server.
You need to build a push notification server. All clients who are listening, will then get a push notification when new data is updated. See this answer for more information: PHP - Push Notifications
As to how you would update the data, I'd suggest using OS-based tools to trigger a PHP script (command line) that will generate and "push" the JSON file out to any client currently listening. Any new client logging on to "listen" will get served the currently available JSON until it's updated.
This way you're not subject to 100 users on 100 connections consuming however much bandwidth to poll your server every 5 seconds; they only get updated when they need to know there's an update.
How about a service that reads all the log info (via IPMI, Nagios or whatever) and creates the output files on some schedule. Then anyone that wants to connect can just read this output rather than hammering the server logs. Essentially have one hit on the server logs then everyone else just reads a web page.
This could be implemented pretty easily.
BTW: Nagios has a very nice free edition.
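Reduced to a sketch, the idea is: one scheduled job writes a single status file, and every client just reads that file instead of hitting the logs (names are illustrative; the atomic rename avoids readers ever seeing a half-written file):

```python
import json
import os
import time

def write_status(path, collect):
    """The single scheduled hit on the server: collect stats, write one file."""
    snapshot = {"ts": time.time(), "stats": collect()}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(snapshot, f)
    os.replace(tmp, path)  # atomic swap so readers never see a partial file

def read_status(path):
    """What every client (or the web server on their behalf) does: read the prepared file."""
    with open(path) as f:
        return json.load(f)
```

With this split, 100 clients cost 100 cheap file reads, while the expensive log collection happens exactly once per schedule tick.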
Answering just these bits of your question:
performant way to stream the data to the various clients (simultanously).
be able to send commands to the server.. (bidirectional)
using web languages (js, php...), Linux commands (something that is easy to implement on multiple machines).. free software if needed.
I'll recommend the Bayeux protocol as made simple by the CometD project. There are implementations in a variety of languages and it's really easy to use in its simplest form.
Meteor is broadly similar. It's an application development framework rather than a family of libraries, but it solves the same problems.
Some suggestions:
Munin for charts
NetSNMP (used by Munin, but you can also use Bash and Cron to build traps that send SMS texts on alerts)
Pingdom for remote alerts about how well the server is responding to ping and HTTP checks. It can SMS text you or call a phone, as well as have call escalation rules.

Multi-OS text-based database with an engine for Python and JavaScript

This is probably a long shot, but maybe someone knows a good solution for it.
Introduction
I'm currently making an application in Python with the new wxPython 2.9, which has the new html2 library, which will inherit the native browser of each OS (Safari/IE/Firefox/Konquerer/~), which is really great.
Goal/Purpose
What I'm currently aiming for is to process large chunks of data and analyze them super fast with Python (currently about 110,000 entries, turning into about 1,500,000 to 2,250,000 results in a dictionary). This works very fast and is also dynamic, so it will only do that first big fetch once (which takes about 2-4 seconds) and afterwards just keeps listening for new data being created on disk.
So far, so good. Now, with the new wxPython html2 library, I'm making the new GUI. It's mainly made to display pages, so what I have now is a model in an /html/ folder (with HTML/CSS/jQuery) which dynamically looks for JSON files (jQuery fetching); these are practically a complete dump of the massive dictionaries that the Python script builds in the background (daemon) in a parallel thread.
JavaScript doesn't seem to have issues reading a big JSON file, and because everything is (and stays) local, it doesn't really incur slowness. CPU and memory usage are also very low.
Conclusion
But here comes the bottleneck. From the JavaScript point of view, handling the big JSON file is not really a joyride. I have to do a lot of searching and matching to get all the data I need, and I also create a lot of redundant re-looping through the same big chunks of entries.
Question
I'm wondering if there is any kind of "engine", implemented for both Python and JavaScript, that can handle JSON files (or maybe other text-based files) as a database. Meaning you could really have a MySQL-like structure (not to the full extent, of course), where you can at least define a table structure that holds the data, and do reads/writes/updates on it methodically.
The app I'm currently developing is multi-OS (at least Ubuntu, OS X and Windows XP+). I also really don't want to create more clutter than using wxPython (for distribution/dependency sakes), so no external database (like running a MySQL server on localhost); it should stay purely inside my Python distribution's folder. This is also to avoid writing massive checks for whether the user already has servers/databases in use that might collide with what my app would then install.
Final Notes
I'm kind of aiming to build some kind of API myself too for future projects to make this way of development standard for my Python scripts that need a GUI. Now that wxPython can more easily embrace the modern browser technologies; there seems to be no limit anymore to building super fast, dynamic and responsive graphical Python apps.
Why not just stick the data into a SQLite database and then have both Python and Javascript hit that? See also Convert JSON to SQLite in Python - How to map json keys to database columns properly?
SQLite is included in all modern versions of Python. You'll have to check out the SQLite website for its limitations.
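A minimal sketch of that suggestion on the Python side (table and key names are illustrative): the daemon writes values into the SQLite file, and the JavaScript side reads the same data back (e.g. via a local endpoint, as in the accepted approach below).

```python
import sqlite3

def init(db_path):
    """Open (and create, if needed) the shared database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS entries (key TEXT PRIMARY KEY, value TEXT)")
    return conn

def put(conn, key, value):
    """What the Python daemon does: upsert a computed result."""
    conn.execute("INSERT OR REPLACE INTO entries VALUES (?, ?)", (key, value))
    conn.commit()

def get(conn, key):
    """What the JS-facing endpoint does: read the current value."""
    row = conn.execute("SELECT value FROM entries WHERE key=?", (key,)).fetchone()
    return row[0] if row else None
```

Because SQLite handles file locking itself, the daemon and the reader can share one file without a database server.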
Kind of got something figured out, through running a CGI HTTP server and letting Python in there fetch SQLite queries for JavaScript's AJAX calls. Here's a small demo (only tested on OS X):
Folder structure
main.py
cgi/index.py
data/
html/index.html
html/scripts/jquery.js
html/scripts/main.js
html/styles/main.css
Python server (main.py)
### CGI server ###
import os
import threading
import CGIHTTPServer
import BaseHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi']

    # Mute the messages in the shell
    def log_message(self, format, *args):
        return

httpd = BaseHTTPServer.HTTPServer(('', 61350), Handler)
#httpd.serve_forever()
thread = threading.Thread(name='CGIHTTPServer', target=httpd.serve_forever)
thread.setDaemon(True)
thread.start()

#### TEST SQLite ####
# Make the database file if it doesn't exist
if not os.path.exists('data/sqlite.db'):
    db_file = open('data/sqlite.db', 'w')
    db_file.write('')
    db_file.close()

import sqlite3
conn = sqlite3.connect('data/sqlite.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE info(id INTEGER UNIQUE PRIMARY KEY, name VARCHAR(75), folder_name VARCHAR(75))')
cursor.execute('INSERT INTO info VALUES(null, "something1", "something1_name")')
cursor.execute('INSERT INTO info VALUES(null, "something2", "something2_name")')
conn.commit()
Python SQLite processor (cgi/index.py) (demo is purely for SELECT, needs more dynamic)
#!/usr/bin/env python
import cgi
import json
import sqlite3

print 'Content-Type: text/json\n\n'

### Fetch GET data ###
form = cgi.FieldStorage()
obj = {}

### SQLite fetching ###
query = form.getvalue('query', 'ERROR')
output = ''
if query == 'ERROR':
    output = 'WARNING! No query was given!'
else:
    # WARNING: The path probably needs `../data/sqlite.db` if PYTHONPATH is not defined
    conn = sqlite3.connect('data/sqlite.db')
    cursor = conn.cursor()
    cursor.execute(query)
    # TODO: Add functionality to detect whether it's a SELECT or an INSERT/UPDATE (then we need conn.commit())
    result = cursor.fetchall()
    if len(result) > 0:
        output = []
        for row in result:
            buff = []
            for entry in row:
                buff.append(entry)
            output.append(buff)
    else:
        output = 'WARNING! No results found'
    obj = output

### Return the data in JSON (map) format for JavaScript ###
print json.dumps(obj)
JavaScript (html/scripts/main.js)
'use strict';
$(document).ready(function() {
    // JSON data read test
    var query = 'SELECT * FROM info';
    $.ajax({
        url: 'http://127.0.0.1:61350/cgi/index.py?query=' + encodeURIComponent(query),
        success: function(data) {
            lg(data);
        },
        error: function() {
            lg('Something went wrong while fetching the query.');
        }
    });
});
And that wraps it up. The console output in the browser is;
[
[1, "something1", "something1_name"],
[2, "something2", "something2_name"]
]
With this methodology you can let Python and JavaScript read from and write to the same database: Python keeps doing its system tasks (daemon) and updating the database entries, whilst JavaScript keeps checking for new data.
This method could probably also add room for listeners and other means of communication between the both.
The main.py above will instantly stop running, because the server thread is a daemon; in my setup, the wxPython code that follows it keeps the daemon (server) alive until the application stops. If someone else wants to use this code in the future, just make sure the server code runs after the SQLite initialization, and uncomment httpd.serve_forever() to keep it running standalone.

In Node.js, how do I make one server call a function on another server?

Let's say I have 2 web servers. Both of them have just installed Node.js and are running a website (using Express). Pretty basic stuff.
How can Server-A tell Server-B to execute a function? (inside node.js)
Preferably...is there a npm module for this that makes it really easy for me?
How can Server-A tell Server-B to
execute a function?
You can use one of the RPC modules, for example dnode.
Check out Wildcard API, it's an RPC implementation for JavaScript.
It works between the browser and a Node.js server and also works between multiple Node.js processes:
// Node.js process 1
const express = require('express');
const wildcardMiddleware = require('@wildcard-api/server/express');
const {endpoints} = require('@wildcard-api/server');

endpoints.hello = async function() {
    const msg = 'Hello from process 1';
    return msg;
};

const app = express();
app.use(wildcardMiddleware());
app.listen(3000);

// Node.js process 2
const wildcard = require('@wildcard-api/client');
const {endpoints} = require('@wildcard-api/client');

wildcard.serverUrl = 'http://localhost:3000';

(async () => {
    const msg = await endpoints.hello();
    console.log(msg); // Prints "Hello from process 1"
})();
You can browse the code of the example here.
You most likely want something like a JSON-RPC module for Node. After some quick searching, here is a JSON-RPC middleware module for Connect that would be perfect to use with Express.
Also, this one looks promising too.
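Under the hood, a JSON-RPC dispatcher is small; here is a transport-agnostic sketch of the request/response shape these modules implement (Python, with made-up method names purely for illustration):

```python
import json

# The functions one server exposes to the other.
METHODS = {
    "add": lambda a, b: a + b,
    "echo": lambda msg: msg,
}

def handle_rpc(raw):
    """Take a JSON-RPC 2.0 request string, return a response string.
    The transport (HTTP, socket, ...) is up to the caller."""
    req = json.loads(raw)
    method = METHODS.get(req.get("method"))
    if method is None:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "Method not found"}})
    result = method(*req.get("params", []))
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
```

Server-A simply POSTs such a request string to Server-B, and Server-B replies with the matching response; the `id` lets the caller pair responses with requests.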
Update: The library I made & linked below isn't maintained currently. Please check out the other answers on this thread.
What you need is called RPC. It is possible to build your own, but depending on the features you need, it can be time consuming.
Given the amount of time I had to invest, I'd recommend finding a decent library that suits your purpose, instead of hand rolling. My usecase required additional complex features like selective RPC calls, for which I couldn't find anything lightweight enough, so had to roll my own.
Here it is https://github.com/DhavalW/octopus.