I have a program written in C++ that reads values from a board; that part is not important. What I have is data that is constantly changing, and I would like to graph it. I was hoping to use a web browser to display the data, since there are so many open-source graphs and charts out there written in JavaScript. So my problem is how to send data to the browser from my C++ program.
I already investigated, and UDP is not available in browsers yet, so I will have to use TCP. TCP WebSockets are not that fast, so I was thinking about using HTML5 localStorage instead. By that I mean: have my C++ program write to the localStorage database, have JavaScript wait for the value of a variable to appear, and invent some sort of protocol that makes this work. localStorage is really fast, for example:
<script type="text/javascript">
var counter = 0;
window.onload = function () {
    function Test() {
        counter++;
        localStorage.p = counter + ""; // perform write
        var read = localStorage.p;     // perform read
        if (read == "5000")
            alert((new Date() - now)); // shows 45
        else
            Test(); // loop again
    }
    var now = new Date();
    Test();
}
</script>
That script takes 54 milliseconds, and it reads AND writes 5000 times! That means that instead of creating a plug-in for the browser, next time I will just implement some sort of protocol that enables me to exchange information through localStorage. For example, I could have the browser wait for a variable x to exist; once it exists, the browser creates a variable y, notifying the C++ program that it is ready to receive data, and so on. localStorage is just a SQLite database located at C:\Users\[USER]\AppData\Local\Google\Chrome\User Data\Default\Local Storage.
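Roughly, the browser side of such a protocol could look like this (just a sketch of the idea; "x", "y" and "data" are hypothetical key names):
<script type="text/javascript">
// Poll until the C++ side has written the handshake key "x",
// then acknowledge with "y" and start consuming values.
var handshake = setInterval(function () {
    if (localStorage.getItem("x") !== null) {
        clearInterval(handshake);
        localStorage.setItem("y", "ready"); // tell the C++ program we are listening
        setInterval(function () {
            var sample = localStorage.getItem("data"); // latest value written by C++
            if (sample !== null) {
                console.log(sample); // hand it to the charting code here
            }
        }, 10);
    }
}, 10);
</script>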
I haven't seen anyone online using this approach. Maybe it is too dangerous, SQLite cannot handle multiple threads that well, and I would be wasting time creating this program.
So should I start implementing this protocol? Should I use WebSockets? Or should I give https://stackoverflow.com/a/10219977/637142 a try?
I would go with node.js as middleware between your C++ and the browser. Instead of using WebSockets directly (been there, done that), go with http://socket.io/; that will make your life much easier :)
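A minimal sketch of that middleware (my assumptions: the C++ program writes newline-delimited values to a local TCP socket on port 9000; the port numbers and the 'sample' event name are illustrative):
var net = require('net');
var io = require('socket.io')(3000); // browsers connect to port 3000

// Accept the data stream from the C++ program on a local TCP port
// and rebroadcast everything it sends to all connected browsers.
net.createServer(function (cppSocket) {
    cppSocket.on('data', function (chunk) {
        io.emit('sample', chunk.toString()); // push to every browser
    });
}).listen(9000);
On the browser side, io('http://yourhost:3000').on('sample', ...) then feeds each value straight into your charting library.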
Related
Imagine a website where a user can generate content via JS.
For example:
The user clicks a button.
It requests our API (not the user's API).
The API returns an object with specific fields.
We show a select with options generated by the user's code, or some result calculated from the data we sent.
The idea is to give the user the ability to edit the visible content (using our structures; we know beforehand which fields in the returned object do what).
First solution, "developed" in 5 minutes:
The user clicks a button.
It sends all required data as context to our API.
We fetch the user's defined code from the database.
// here is the code that we write (not the user), and we know this code is safe
const APP_CONTEXT = parseInput(); // this can be parameters from the command line
const ourLibrary = require('ourLibrary');
// APP_CONTEXT is a variable which contains data from the frontend. We control the data inside APP_CONTEXT; the user can not write to it
// here is user defined code
const someVar = APP_CONTEXT['fieldDescribedInOurDocumentation'];
const anotherVar = APP_CONTEXT['anotherFieldFromDocumentation'];
ourLibrary.sendToFrontend(someVar + anotherVar);
In this very simple example, once the user clicked the button, we sent a request to our API, the user's code was executed, and we showed the result of the execution. ourLibrary abstracts away how the handling is completed.
The main problem, as I see it, is security. I am thinking about using a restricted node.js process: no network access, no file system access.
Is it possible to deny any import/require in a node.js process? I want to let the user call only built-in JS functions (Math.min, Math.max, new Date(), +, -), declare functions, and so on, so it would work like a sophisticated calculator. We also need the ability to send the result back to the frontend, for example via RabbitMQ + node.js + WebSockets. We can use a simple console.log if the former is a problem.
A possible solution (not secure, of course) using the node.js interpreter; we execute the interpreter every time the action is required:
const APP_CONTEXT = parseInput();
const ourLibrary = require('ourLibrary');
const usersCode = getUsersCode();
eval(usersCode);
Inside usersCode they use ourLibrary.sendToFrontend to produce the result. But this solution allows the user to call any built-in node.js functionality, like const fs = require('fs'). Of course access will be restricted using the Linux system (SELinux or similar), but can I configure node.js to run as a plain JS interpreter? Maybe some other JS interpreter exists which is safe to use? Safe means: only arithmetic, the Date function, Math functions and so on. No filesystem access, no network access.
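For example, something along these lines with Node's built-in vm module is what I have in mind (just a sketch; the vm docs say it is not a real security mechanism, so the OS-level restrictions would still be needed):
const vm = require('vm');

const usersCode = getUsersCode(); // fetched from the database, as above

// Expose only whitelisted globals: no require, no process, no fs.
const sandbox = {
    APP_CONTEXT: parseInput(),
    Math: Math,
    Date: Date,
    ourLibrary: { sendToFrontend: function (result) { console.log(result); } }
};

vm.createContext(sandbox);
vm.runInContext(usersCode, sandbox, { timeout: 100 }); // kill runaway loops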
I want to use a Linux device like a BananaPi with a socketcan-compatible CAN controller to connect to an automotive CAN bus and show its data in real time on a webpage, which should be hosted on the Pi.
The data should be listed as hex values and visualized via graphs (the different signals, for example the current speed).
After some research I discovered node-can, and I managed to show the CAN messages as a list on a webpage. But I noticed that the messages arrive with quite a large delay (~2 secs) when there is a heavy bus load (I sent CAN messages with a 1 ms period). The same delay occurs if I use the following minimalistic example:
var can = require('socketcan');
var channel = can.createRawChannel("can1", true);
channel.addListener("onMessage", function(msg) { console.log(msg); } );
channel.start();
I am absolutely new to this topic, but I suspect that node.js isn't the best choice to realize this project?
Are there any other (better) methods to realize such a system?
I could imagine something like a C backend, for example based on candump (with that program no delay occurs at the same bus load), and a frontend realized with JavaScript, HTML and CSS. But I have no idea how to tie those separate programs together. Could you give me a keyword as a starting point for further research (websockets?!)?
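To make the idea concrete, something like this is what I imagine (an untested sketch; it assumes candump from can-utils and the nodejs-websocket module):
var spawn = require('child_process').spawn;
var ws = require('nodejs-websocket');

var server = ws.createServer(function (conn) {}).listen(8001);

// Let candump do the fast CAN reading and just forward its output lines.
var dump = spawn('candump', ['can1']);
dump.stdout.on('data', function (chunk) {
    server.connections.forEach(function (conn) {
        conn.sendText(chunk.toString());
    });
});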
I also thought about writing the CAN frames to a SQL database and fetching them from the database for the webpage GUI, but I have no idea if/how this works and whether it would be fast enough...
Thanks in advance!
I have a page on my site where you can play a game. When you die in the game, a function called playerIsDead() is called, then the game closes. (The game screens are prompt()s and confirm()s shown one after another, so by "the game closes" I mean the page stops showing popup messages.) The playerIsDead() function:
var playerIsDead = function() {
    confirm(deathMsg);
    confirm(deathMsg2);
    confirm(deathMsg3);
};
I want to make the function increase a variable, totalDeathCount, by one each time, like this:
var playerIsDead = function() {
    confirm(deathMsg);
    confirm(deathMsg2);
    confirm(deathMsg3);
    totalDeathCount++;
};
So, my question is: how can I store totalDeathCount on the server, so I can display it on the page? I don't want it to show how many deaths have happened locally; I want it to show how many deaths have occurred worldwide.
Any help would be much appreciated!
You could use a MySQL database and store the value using PHP. Or you could simply use JavaScript's localStorage to store it locally on the device.
var newDeathCount = (parseInt(localStorage.getItem('totalDeathCount'), 10) || 0) + 1; // getItem returns a string (or null), so parse before adding
localStorage.setItem('totalDeathCount', newDeathCount);
Then you have basically stored how many times they have died. Not on a web server, but this is an alternative solution.
1) Decide how to persist the information and create something to do it, e.g. write it to a file, store it in a database, etc.
2) Write a simple PHP script to store the data (or implement it as part of your existing PHP backend if available).
3) Make an AJAX POST request to send the data to the script (sketched below).
For implementation details use Google and your imagination. ;)
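For step 3, the browser side could be as small as this (a sketch; save_death.php is a placeholder name for the script from step 2):
var xhr = new XMLHttpRequest();
xhr.open('POST', '/save_death.php', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('died=1'); // the PHP script increments and persists the counter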
I'm a Java developer learning JavaScript and Google Apps Script simultaneously. Being a newbie, I learned the syntax of JavaScript, not how it actually works, and I happily hacked away in Google Apps Script, writing code sequentially and synchronously, just like Java. All my code resembles this (grossly simplified to show what I mean):
function doStuff() {
  var url = 'https://myCompany/api/query?term<term&search';
  var json = getJsonFromAPI(url);
  Logger.log(json);
}

function getJsonFromAPI(url) {
  var response = UrlFetchApp.fetch(url);
  var json = JSON.parse(response);
  return json;
}
And it works! It works just fine! If I hadn't kept studying JavaScript, I'd say it works like clockwork. But JavaScript isn't clockwork; it's gloriously asynchronous, and from what I understand this should not work at all. It would "compile", but logging the json variable should log undefined; instead, it logs the JSON with no problem.
NOTE:
The code is written and executed in the Google Sheet's script editor.
Why is this?
While Google Apps Script implements a subset of ECMAScript 5, there's nothing forcing it to be asynchronous.
While it is true that JavaScript's major power is its asynchronous nature, the Google developers appear to have given that up in favor of a simpler, more straightforward API.
UrlFetchApp methods are synchronous. They return an HttpResponse object, and they do not take a callback. That, apparently, is an API decision.
Please note that this hasn't really changed since the introduction of the V8 runtime for Google Apps Script.
While we are on a recent version of ECMAScript, running Promise.all([func1(), func2()]) shows that the code in the second function is not executed until the first one has completed.
Also, there is still no setTimeout() global function to use to interleave the order of execution. Nor do any of the APIs provide callbacks or promise-like results. The going philosophy in GAS seems to be to make everything synchronous.
I'm guessing, from Google's point of view, that processing two tasks in parallel (for example, two that simply call Utilities.sleep(3000)) would require multiple threads on the server CPU, which may not be manageable and may be easy to abuse.
Whereas parallel processing on the client or on another company's server (e.g., Node.js) is up to that developer or user. (If it doesn't scale well, it's not Google's problem.)
However, there are some things that use parallelism:
UrlFetchApp.fetchAll
UrlFetchApp.fetchAll() will asynchronously fetch many URLs. Although this is not what you're truly looking for, fetching URLs is a major reason to seek parallel processing.
I'm guessing Google reasons this is OK since fetchAll uses a web client, and its resources are already protected by quota.
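For example (a sketch; the URLs are placeholders):
function fetchInParallel() {
  var requests = [
    'https://example.com/api/a',
    'https://example.com/api/b'
  ];
  // fetchAll issues the requests together and returns the responses
  // in the same order as the requests array.
  var responses = UrlFetchApp.fetchAll(requests);
  responses.forEach(function (r) {
    Logger.log(r.getContentText());
  });
}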
FirebaseApp getAllData
I have found Firebase to be very fast compared to using a spreadsheet for data storage. You can get many things from the database at once using FirebaseApp's getAllData:
function myFunction() {
  var baseUrl = "https://samplechat.firebaseio-demo.com/";
  var secret = "rl42VVo4jRX8dND7G2xoI";
  var database = FirebaseApp.getDatabaseByUrl(baseUrl, secret);

  // paths of 3 different user profiles
  var path1 = "users/jack";
  var path2 = "users/bob";
  var path3 = "users/jeane";

  Logger.log(database.getAllData([path1, path2, path3]));
}
HtmlService - IFrame mode
HtmlService in IFrame mode allows full multitasking by going out to client-side script, where promises are truly supported, and making parallel calls back into the server. You can initiate this process from the server, but since all the parallel tasks' results are returned to the client, it's unclear how to get them back to the server. You could make another server call and send the results, but I'd think the goal would be to get them back to the script that called HtmlService in the first place, unless you go with a beginRequest/endRequest type of architecture.
tanaikech/RunAll
This is a library for running concurrent processing using only native Google Apps Script (GAS). It claims full support via a RunAll.Do(workers) method.
I'll update my answer if I find any other tricks.
On my client (a phone with a browser) I want to see stats for the server's CPU, RAM & HDD and gather info from various logs.
I'm using ajax polling.
On the client, every 5 sec (setInterval) I call a PHP file that:
scans a folder containing N logs
reads the last line of each log
converts that to JSON
Problems:
It opens a new connection every 5 sec.
Multiple AJAX calls.
Request headers (they are also data and so consume bandwidth).
Response headers (^).
PHP reads the files every 5 sec even if nothing changed.
The final JSON data is less than 5 KB, but I send it every 5 sec, and there are the headers and a new connection every time. So basically, every 5 sec I have to send 5-10 KB to get 5 KB, which is 10-20 KB.
That is 60 sec / 5 sec = 12 new connections per minute, and about 15 MB per hour of traffic if I leave the app open.
Let's say I have 100 users that I let monitor/control my server; that would be around 1.5 GB of outgoing traffic in one hour.
Not to mention that the PHP server is reading multiple files 100 times every 5 sec.
I need something on the server that reads the last lines of those logs every 5 sec (and maybe writes them to a file), and I want to push this data to the client only if it has changed.
SSE (server sent events) with PHP
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
while (true) {
    echo "id: ".time()."\ndata: ".ReadTheLogs()."\n\n";
    ob_flush();
    flush();
    sleep(1);
}
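On the browser side, the matching consumer is just an EventSource (a sketch; '/sse.php' is a placeholder path for the script above):
var source = new EventSource('/sse.php');
source.onmessage = function (event) {
    console.log(event.data); // the latest log lines pushed by the server
};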
In this case, after the connection is established with the first user, the connection stays open (PHP is not made for that), and so I save some bandwidth (request headers, response headers). This works on my server, but most servers don't allow keeping the connection open for a long time.
Also, with multiple users I read the log multiple times (slowing down my old server).
And I can't control the server... I would need to use AJAX to send a command...
I need WebSockets!!!
node.js and websockets
Using node.js, from what I understand, I can do all this without consuming a lot of resources and bandwidth. The connection stays open, so there are no unnecessary headers; I can receive and send data; and it handles multiple users very well.
And this is where I need your help.
The node.js server should update and store the logs data in the background every 5 sec, if the files are modified. OR should the operating system do that (iwatch, dnotify...)?
The data should be pushed only if it has changed.
The reading of the logs should happen only once every 5 sec, so not triggered by each user.
This is the first example I found... and modified:
var ws = require("nodejs-websocket");
var server = ws.createServer(function (conn) {
    var data = read(whereToStoreTheLogs);
    conn.sendText(data); // send the logs data to the user
                         // on first connection.
    setTimeout(checkLogs, 5000);
    /*
    Here I need to continuously check whether the logs have changed,
    but if I use setInterval(checkLogs, 5000) or setTimeout,
    every user starts a new timer, leaving lots of timers on the server.
    Can I do that in the background?
    */
    conn.on("text", function (str) {
        doStuff(str); // various commands to control the server.
    });
    conn.on("close", function (code, reason) {
        console.log("Connection closed");
    });
}).listen(8001);

var checkLogs = function () {
    var data = read(whereToStoreTheLogs); // read() and oldData are placeholders
    if (data != oldData) {
        conn.sendText(data); // problem: conn is not in scope here
    }
    setTimeout(checkLogs, 5000);
};
The above script would be the notification server, but I also need a solution to store the info from those multiple logs somewhere, and to do that in the background every time something changes.
How would you keep both the bandwidth and the server resource usage low?
How would you do it?
EDIT
Btw, is there a way to stream this data simultaneously to all the clients?
EDIT
About the logs: I also want to be able to scale the time between updates... I mean, if I read the logs of ffmpeg I need the update every second if possible... but when no conversion is active I only need the basic machine info every 5 min, maybe... and so on...
GOALS:
1. A performant way to read & store the logs data somewhere, only if clients are connected (MySQL, a file; is it possible to store this info in RAM with node.js?).
2. A performant way to stream the data to the various clients (simultaneously).
3. Be able to send commands to the server (bidirectional).
4. Using web languages (JS, PHP...) and Linux commands, i.e. something that is easy to implement on multiple machines; free software if needed.
The best approach would be: read the logs, based on current activity, into system memory, and stream them simultaneously and continuously, over an already-open connection, to the various clients with WebSockets.
I don't know anything that could be faster.
UPDATE
The node.js server is up and running, using the http://einaros.github.io/ws/ WebSocket server implementation, as it appears to be the fastest one.
With the help of @HeadCode I wrote the following code to handle the client situation properly and to keep the load as low as possible, checking various things inside the broadcast loop. The pushing & the client handling are now at a good point.
var wss = new (require('ws').Server)({ port: 8080 }),
    isBusy,
    logs,
    clients,
    i,
    checkLogs = function () {
        if (wss.clients && (clients = wss.clients.length)) {
            isBusy || (logs = readLogs() /*, isBusy = true */);
            if (logs) {
                i = 0;
                while (i < clients) {
                    wss.clients[i++].send(logs);
                }
            }
        }
    };
setInterval(checkLogs, 2000);
But at the moment I'm using a really bad way to parse the logs (nodejs -> httpRequest -> php)... lol. After some googling I found out that I could totally stream the output of Linux software directly into the node.js app... I haven't checked, but maybe that would be the best way to do it. node.js also has a filesystem API where I could read the logs, and Linux has its own filesystem API.
The readLogs() function (it can be async) is still something I'm not happy with:
node.js filesystem?
Linux software -> node.js output implementation?
Linux filesystem API?
Keep in mind that I need to scan various folders for logs and then parse the outputted data somehow, and this every 2 seconds.
P.S.: I added isBusy to the server variables in case the log-reading system is async.
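For example, an async readLogs() built on Node's fs module might look roughly like this (just a sketch, reading a whole file for simplicity; the log path is a placeholder):
var fs = require('fs');

var readLogs = function (done) {
    fs.readFile('/var/log/mylog.log', 'utf8', function (err, text) {
        if (err) { return done(null); }
        var lines = text.trim().split('\n');
        done(lines[lines.length - 1]); // hand back the last line only
    });
};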
EDIT
Answer is not complete.
Missing:
A performant way to read, parse and store the logs somewhere (Linux filesystem API, or node.js API, so that I store them directly in system memory).
An explanation of whether it's possible to stream data directly to multiple users; apparently node.js loops through the clients and so (I think) sends the data multiple times.
Btw, is it possible/worth it to close the node server if there are no clients, and restart it on new connections from the Apache side? (E.g., if I connect to the Apache-hosted HTML file, a script launches the node.js server again.) Doing so would further reduce the memory leaking... right?
EDIT
After some experimenting with WebSockets (some videos are in the comments) I learned some new stuff. The Raspberry Pi has the possibility to use some CPU DMA channels to do high-frequency stuff like PWM... I need to somehow understand how that works.
When using sensors and the like, I should store everything inside RAM; node.js already does that?? (in a variable inside the script)
WebSockets remain the best choice, as they are now easily accessible from basically any device, simply using a browser.
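(Regarding the RAM question above: keeping readings in memory really is just a variable in the node.js script. A tiny sketch, with an arbitrary cap so the buffer cannot grow forever:)
var samples = [];        // lives in the node.js process memory
var MAX_SAMPLES = 1000;  // arbitrary cap

function storeSample(value) {
    samples.push({ t: Date.now(), v: value });
    if (samples.length > MAX_SAMPLES) { samples.shift(); } // drop the oldest
}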
I haven't used nodejs-websocket, but it looks like it will accept an HTTP connection and do the upgrade as well as create the server. If all you care about receiving is text/JSON then I suppose that would be fine, but it seems to me you might want to serve a web page along with it.
Here is a way to use express and socket.io to achieve what you're asking about:
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.use(express.static(__dirname + '/'));

app.get('/', function (req, res) {
    res.sendFile(__dirname + '/index.html'); // sendfile() is deprecated in newer Express
});

io.on('connection', function (socket) {
    // This is where we should emit the cached values of everything
    // that has been collected so far so this user doesn't have to
    // wait for a changed value on the monitored host to see
    // what is going on.
    // This code is based on something I wrote for myself so it's not
    // going to do anything for you as is. You'll have to implement
    // your own caching mechanism.
    for (var stat in cache) {
        if (cache.hasOwnProperty(stat)) {
            socket.emit('data', JSON.stringify(cache[stat]));
        }
    }
});

http.listen(3000, function () {
    console.log('listening on *:3000');
});

var oldData;
(function checkLogs() {
    var data = read(whereToStoreTheLogs); // read() is your placeholder from the question
    if (data != oldData) {
        io.emit('data', data); // emit needs the event name the client listens for
        oldData = data;
    }
    setTimeout(checkLogs, 5000);
})();
Of course, the checkLogs function has to be fleshed out by you; I have only cut and pasted it in here for context. The call to the emit function of the io object will send the message out to all connected users, but the checkLogs function will only fire once (and then keep calling itself), not every time someone connects.
In your index.html page you can have something like this. It should be included at the bottom of the HTML page, just before the closing body tag.
<script src="/path/to/socket.io.js"></script>
<script>
    // Set up the websocket for receiving updates from the server
    var socket = io();
    socket.on('data', function (msg) {
        // Do something with your message here, such as using javascript
        // to display it in an appropriate spot on the page.
        document.getElementById("content").innerHTML = msg;
    });
</script>
By the way, check out the Node.js documentation for a variety of built-in methods for checking system resources (https://nodejs.org/api/os.html).
Here's also a solution more in keeping with what it appears you want. Use this for your HTML page:
<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8">
    <title>WS example</title>
</head>
<body>
    <script>
        var connection;
        window.addEventListener("load", function () {
            connection = new WebSocket("ws://" + window.location.hostname + ":8001");
            connection.onopen = function () {
                console.log("Connection opened");
            };
            connection.onclose = function () {
                console.log("Connection closed");
            };
            connection.onerror = function () {
                console.error("Connection error");
            };
            connection.onmessage = function (event) {
                var div = document.createElement("div");
                div.textContent = event.data;
                document.body.appendChild(div);
            };
        });
    </script>
</body>
</html>
And use this as your WebSocket server code, recently tweaked to use the 'tail' module (as found in this post: How to do `tail -f logfile.txt`-like processing in node.js?), which you will have to install using npm. (Note: tail makes use of fs.watch, which is not guaranteed to work the same everywhere.)
var ws = require("nodejs-websocket");
var os = require('os');
var Tail = require('tail').Tail;

var tail = new Tail('./testlog.txt');

var server = ws.createServer(function (conn) {
    conn.on("text", function (str) {
        console.log("Received " + str);
    });
    conn.on("close", function (code, reason) {
        console.log("Connection closed");
    });
}).listen(8001);

setInterval(function () { checkLoad(); }, 5000);

function broadcast(mesg) {
    server.connections.forEach(function (conn) {
        conn.sendText(mesg);
    });
}

var load = '';
function checkLoad() {
    var new_load = os.loadavg().toString();
    if (new_load === load) { // only broadcast when the value actually changed
        return;
    }
    load = new_load;
    broadcast(load);
}

tail.on("line", function (data) {
    broadcast(data);
});
Obviously this is very basic and you will have to change it for your needs.
I made a similar implementation recently using Munin. Munin is a wonderful server monitoring tool, open source too, which also provides a REST API. There are several plugins available for your needs, monitoring the CPU, HDD and RAM usage of your server.
You need to build a push notification server. All clients who are listening will then get a push notification when new data is updated. See this answer for more information: PHP - Push Notifications.
As for how you would update the data, I'd suggest using OS-based tools to trigger a PHP script (command line) that will generate and "push" the JSON file out to any client currently listening. Any new client logging on to "listen" will get served the current JSON available, until it's updated.
This way you're not subject to 100 users with 100 connections, using however much bandwidth, polling your server every 5 seconds; they only get updated when there is actually an update to know about.
How about a service that reads all the log info (via IPMI, Nagios or whatever) and creates the output files on some schedule? Then anyone who wants to connect can just read this output rather than hammering the server logs. Essentially, there is one hit on the server logs and everyone else just reads a web page.
This could be implemented pretty easily.
BTW: Nagios has a very nice free edition.
Answering just these bits of your question:
A performant way to stream the data to the various clients (simultaneously).
Be able to send commands to the server (bidirectional).
Using web languages (JS, PHP...) and Linux commands, i.e. something that is easy to implement on multiple machines; free software if needed.
I'll recommend the Bayeux protocol, as made simple by the CometD project. There are implementations in a variety of languages, and it's really easy to use in its simplest form.
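To give a feel for it, a minimal CometD browser client looks roughly like this (a sketch; the URL and channel name are placeholders):
var cometd = new org.cometd.CometD();
cometd.configure({ url: 'http://myserver/cometd' });
cometd.handshake(function (handshake) {
    if (handshake.successful) {
        // Receive pushed stats; cometd.publish() on another channel
        // gives you the bidirectional command path.
        cometd.subscribe('/server/stats', function (message) {
            console.log(message.data);
        });
    }
});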
Meteor is broadly similar. It's an application development framework rather than a family of libraries, but it solves the same problems.
Some suggestions:
Munin for charts
NetSNMP (used by Munin, but you can also use Bash and Cron to build traps that send SMS texts on alerts)
Pingdom for remote alerts about how well the server is responding to ping and HTTP checks. It can SMS text you or call a phone, as well as have call escalation rules.