Show live data from CAN bus on a webpage with visualization - javascript

I want to use a Linux device like a BananaPi with a SocketCAN-compatible CAN controller to connect to an automotive CAN bus and show its data in real time on a webpage hosted on the Pi.
The data should be listed as hex values and visualized with graphs (one per signal, for example the current speed).
After some research I discovered node-can, and I managed to show the CAN messages as a list on a webpage. But I noticed that the messages arrive with quite a large delay (~2 seconds) when there is a heavy bus load (I sent CAN messages at a 1 ms period). The same delay occurs with the following minimal example:
// Open a raw SocketCAN channel on interface can1 and log every received frame
var can = require('socketcan');
var channel = can.createRawChannel("can1", true);
channel.addListener("onMessage", function (msg) { console.log(msg); });
channel.start();
I am absolutely new to this topic, but I suspect that Node.js isn't the best choice for this project?
Are there any other (better) ways to build such a system?
I could imagine something like a C backend, for example based on candump (with that program no delay occurs at the same bus load), and a frontend built with JavaScript, HTML and CSS. But I have no idea how to connect those separate programs. Could you give me a keyword as a starting point for further research (WebSocket?!)?
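If WebSocket is the right keyword, I imagine something like the following bridge. This is an untested minimal sketch; the ws npm package, port 8080 and the can1 interface name are my own assumptions:

// bridge.js: forward each line of candump output to all WebSocket clients.
// Assumes `npm install ws` and can-utils (candump) are available.
var spawn = require('child_process').spawn;
var WebSocketServer = require('ws').Server;

var wss = new WebSocketServer({ port: 8080 });

// candump prints one frame per line, e.g. "can1 123 [8] 11 22 33 44"
var dump = spawn('candump', ['can1']);

dump.stdout.on('data', function (chunk) {
    chunk.toString().split('\n').forEach(function (line) {
        if (!line) return;
        // Broadcast the raw text line to every connected client
        wss.clients.forEach(function (client) {
            if (client.readyState === 1 /* OPEN */) client.send(line);
        });
    });
});

The webpage would then open new WebSocket('ws://<pi-address>:8080'), parse each incoming line in its onmessage handler, and feed the values to the hex list and the graphs.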
I also thought about writing the CAN frames to a SQL database and fetching them from the database for the webpage GUI, but I have no idea if/how this would work and whether it would be fast enough...
Thanks in advance!

Related

How to increase speed of application with lots of RSS feeds

Good Morning all...
I have a web application with lots of RSS feeds. The flow is below:
Fetch the RSS from the respective site (using PHP).
Store it in the DB (MySQL).
Read it and display it in the user's browser.
Fetching is done at a one-hour interval, and the results are stored in the DB.
Then, whenever a user requests the page, these feeds are displayed in their browser (read from the DB and displayed).
The reading process is not that fast. In other words, if a user has 20 feeds on the same page, it loads 5 articles for each feed; that is not fast and currently does not give a good user experience.
I am running on an 8 GB RAM VPS server. Technology: PHP, MySQL, MooTools, JavaScript.
Then, to make it faster, I tried using flat files: read the feeds from the respective sites and write them to a feed file (a separate file for each feed).
Then read the feed file and display it in the user's browser. In this scenario it was slower than reading from the DB.
So now I am out of options and have no clue what I can do to increase the speed of my website.
If any expert has any suggestions, please let me know.
Regards,
Mona
I would not consider myself an expert, but I can at least give you some things to try:
First, it's a good thing you're serving the RSS feeds from your own database; this protects you if any of the RSS sources fails due to problems at the provider.
Nonetheless, I suggest you move the fetching of the RSS feeds into a separate script that runs server-side (make it a cron job). This ensures a user is never kept waiting while your data source is rebuilt. The cron job can then be called each hour to refresh your database.
The next step would be to find out where the process slows down the most: are there slow queries? Or is there just some sluggish code in your script?
To narrow down the causes, I really suggest you install the XDebug extension (there are ready-made DLLs for Windows here: http://pecl.php.net/package/Xdebug) and add the following lines to your php.ini:
[XDebug]
zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
xdebug.profiler_append = 0
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "C:\xdebug"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 0
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
xdebug.trace_output_dir = "C:\xdebug"
After installing, appending ?XDEBUG_PROFILE to your URL (see: http://www.xdebug.org/docs/profiler) will generate a file which you can examine with WinCacheGrind (http://sourceforge.net/projects/wincachegrind/). With this program you can narrow down the execution time per function call.
I hope this helps you out :)
PS: Make sure to disable (or even better, not install) XDebug on your production environment, since XDebug slows down your scripts...

Meteor.js, realtime latency with complex publish/subscribe rules?

I'm playing with realtime whiteboards in Meteor. My first attempt worked very well: if you open 2 browsers and draw in one of them, the other one updates within a few milliseconds ( http://pen.meteor.com/stackoverflow ).
Now, my second project is an infinite realtime whiteboard. The main change is that all lines are grouped by zones, and the viewer only subscribes to the lines in the visible zones. And now there is a delay of 5 seconds (!) between doing something in one browser and seeing it happen in the other ( http://carve.meteor.com/love ).
I've tried adding indexes in the Mongo database for the fields determining the zones.
I've tried updating the Collection only for a full line (and not each time I push a new point, like in my first project).
I've tried adding a timeout so as not to subscribe too often when scrolling or zooming the board.
Nothing changes; there is always a 5-second delay.
I don't have this delay when working locally.
Here is the piece of code responsible for subscribing to the lines in the visible area:
subscribeTimeout = false;

Deps.autorun(function () {
    // Read these session vars so the computation reruns whenever they change
    var vT = Session.get("visible_tiles");
    var board_key = Session.get("board_key");
    // Debounce: wait 500 ms before (re)subscribing while scrolling/zooming
    if (subscribeTimeout) Meteor.clearTimeout(subscribeTimeout);
    subscribeTimeout = Meteor.setTimeout(subscribeLines, 500);
});

function subscribeLines() {
    subscribeTimeout = false;
    var vT = Session.get("visible_tiles");
    console.log("SUBSCRIBE");
    Meteor.subscribe("board_lines", Session.get("board_key"),
                     vT.left, vT.right, vT.top, vT.bottom, function () {
        console.log("subscribe board_lines " + Session.get("board_key"));
    });
}
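As an aside, the index attempt mentioned above could look like the following on the server. This is only a sketch: the collection and field names are assumptions, since the real schema isn't shown.

// Sketch (server side): compound index on the zone-determining fields.
// "Lines", "board_key", "tile_x" and "tile_y" are assumed names.
Meteor.startup(function () {
    Lines._ensureIndex({ board_key: 1, tile_x: 1, tile_y: 1 });
});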
I've been a SysAdmin for 15 years. Without running the code, it sounds like an imposed limitation of the meteor.com server. They probably put delays on the resources so everyone gets a fair share. I'd publish to another server like Heroku for an easy deploy, or manually to another server like Linode or my favorite, Joyent. Alternatively, you could try contacting meteor.com directly and asking if/how they limit resource usage.
Since the code runs fast/instantly locally, you should see sub-second response times from a good server over a good network.

WebRTC, JS, Node.js app working locally through a remote server, but not across different connections

I have recently started to look at inter-browser communications, and got particularly interested in WebRTC. At the moment I am trying to build a file transfer through a Data Channel with the beginner-oriented library provided here:
https://github.com/muaz-khan/WebRTC-Experiment/tree/master/DataChannel .
My application is visible here : https://shirase-ttt.jit.su/Dropzone.html
It does a basic file transfer on drop of a file. The problem is, it works between 2 tabs of your own browser (Chrome tested only, but from different locations). But as soon as you try it between 2 different internet connections/locations, it stops working. The channel is established, but the file isn't sent. I have no idea where I should start looking; the code seems fine, as it works locally (test it yourself, see the steps below), but I can't swear I haven't made a mistake. Would anybody help?
Steps for testing:
Open https://shirase-ttt.jit.su/Dropzone.html on 2 tabs of your browser / 2 browsers.
Tab1 creates a channel name and clicks Connect.
Tab2 enters the same channel name and clicks join.
After a few seconds, you should see all the channel info in the console. From this point on, you can drop a small file in the box of either client and watch it download through the console of the other client. I use a ~100 kB image, which takes about 15 seconds to download. It is quite impressive.
I then tested with a friend remotely. After establishing the channel, you see the file being sent, but nothing is received.
The code:
Client : https://github.com/xShirase/RTC-Exploring/blob/master/Dropzone.html
Server : https://github.com/xShirase/RTC-Exploring/blob/master/ttt.js
On the server, only lines 1-34 are relevant; the rest is for other work. Yes, I have tried stripping it down to the bare minimum. No, it doesn't change anything.
Any ideas are welcome. Thanks. I'm thinking it may be an issue with the hosting, maybe an HTTPS redirect messing things up? I don't know, to be honest, which is why I'm writing here.
Also, I have another request. The web is at the moment undergoing a revolution in many ways. We have the chance to find ourselves at the very beginning of the curve, where everything is still to be done, but enough is done to have some fun. So I'd like to put together a team of people, not professionals, but people eager to learn as much as they can and do as much as they can to push in the right direction. The point I'm personally at is: a good understanding of sockets, good scripting skills, not pro enough I guess, and lots of ideas. I want to explore WebRTC, understand it properly as it evolves, and participate in this evolution. I'm sure I'm not the only one, so to anyone interested and with similar motivations: let's learn faster by working in groups. Contact me.
Disclaimer: The second part of this post may not be in the right place, I'm not sure. But it's where it can be seen, and that's what I'm looking for. Nothing professional or any obligations of any kind, just code and testing ideas, that sort of thing. If anyone has issues with that part: edit, flag, downvote, or maybe talk first ;-)
Thanks.
It is a NAT traversal issue. DataChannel.js used "only" STUN. Now it is fixed, because it uses two TURN servers, too. If it is still failing for you, try using "your own" TURN server.
STUN = {
    url: !moz ? 'stun:stun.l.google.com:19302' : 'stun:23.21.150.121'
};
TURN1 = {
    url: 'turn:73922577-1368147610@108.59.80.54',
    credential: 'b3f7d809d443a34b715945977907f80a'
};
TURN2 = {
    url: 'turn:webrtc%40live.com@numb.viagenie.ca',
    credential: 'muazkh'
};
iceServers = {
    iceServers: options.iceServers || [STUN]
};
if (!moz && !options.iceServers) {
    iceServers.iceServers[1] = TURN1;
    iceServers.iceServers[2] = TURN2;
}
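For reference, here is roughly how a custom TURN server could be handed to a raw RTCPeerConnection. This is only a sketch: the server URL, username and credential are placeholders, and DataChannel.js may expose its own option for this.

// Sketch: custom STUN/TURN configuration for a plain RTCPeerConnection.
// 'turn.example.com', 'demo' and 'secret' are placeholder values.
// Note: newer browsers use 'urls' where this era's API used 'url'.
var config = {
    iceServers: [
        { url: 'stun:stun.l.google.com:19302' },
        { url: 'turn:turn.example.com:3478', username: 'demo', credential: 'secret' }
    ]
};
var PeerConnection = window.RTCPeerConnection ||
                     window.webkitRTCPeerConnection ||
                     window.mozRTCPeerConnection;
var pc = new PeerConnection(config);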

Publish data from browser app without writing my own server

I need users to be able to post data from a single page browser application (SPA) to me, but I can't put server-side code on the host.
Is there a web service that I can use for this? I looked at Amazon SQS (simple queue service) but I can't call their REST APIs from within the browser due to cross origin policy.
I favour ease of development over robustness right now, so even just receiving an email would be fine. I'm not sure that the site is even going to catch on. If it does, then I'll develop a server-side component and move hosts.
Not only are there web services, but nowadays there are robust systems that let you run server-side logic for your applications. They are called BaaS, or Backend as a Service, providers, and they usually provide a backbone for your front-end applications.
Although they have multiple uses, I'm going to list the most common, in my opinion:
For mobile applications - Instead of having to learn an API for each device you code for, you can use a standard platform to store logic and data for your application.
For prototyping - If you want to create a slick application but don't want to code all the backend logic for the data (let alone deal with all the operations and system administration that represents), then with a BaaS provider you only need good front-end skills to code the simplest CRUD applications you can imagine. Some BaaS providers even allow you to bind Reduce algorithms to the calls you perform against their API.
For web applications - When PaaS (Platform as a Service) came to town to ease the job of backend developers and spare them the hassle of system administration and operations, it was only logical that the same was going to happen to the backend itself. There are many clones that showcase the real power of this strategy.
All of this is amazing, but I have yet to name any of them. I'm going to list the ones I know best and have actually used in projects. There are probably many more, but as far as I know, these have satisfied most of my needs for any of the uses mentioned above.
Parse.com
Parse's most outstanding features target mobile devices; however, nowadays Parse contains an incredible number of APIs that allow you to use it as a full-featured backend service for JavaScript, Android and even Windows 8 applications (the Windows 8 SDK was introduced a few months ago this year).
How does Parse code look in JavaScript?
Parse works through classes and objects (ain't that beautiful?), so you first create a specific class (this can be done through JavaScript, REST or even the Data Browser manager) and then add objects to those classes.
First, add Parse as a script tag:
<script type="text/javascript" src="http://www.parsecdn.com/js/parse-1.1.15.min.js"></script>
Then, with your Application ID and JavaScript Key, initialize Parse:
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
From there, it's all object manipulation:
var Person = Parse.Object.extend("Person"); // Person is a class *cof* uppercase *cof*
var personObject = new Person();
personObject.save({ name: "John" }, {
    success: function (object) {
        console.log("The object with the data " + JSON.stringify(object) + " was saved successfully.");
    },
    error: function (model, error) {
        console.log("There was an error! The following model and error object were provided by the Server");
        console.log(model);
        console.log(error);
    }
});
What about authentication and security?
Parse has a User-based authentication system, which pretty much allows you to store a base of users that can manipulate the data. If you tie the data to User information, you can ensure that only a given user can manipulate specific data. Plus, in the settings of your Parse application, you can specify that no clients are allowed to create classes, to ensure no unnecessary calls are performed.
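For illustration, a minimal sketch of tying a new object to the logged-in user with Parse's ACL API; the "Note" class and its field are made-up examples:

// Sketch: only the creating user may read/write this object.
// Assumes a user has already logged in via Parse.User.logIn(...).
var Note = Parse.Object.extend("Note");
var note = new Note();
note.setACL(new Parse.ACL(Parse.User.current())); // owner-only permissions
note.save({ text: "only I can see this" });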
Did you REALLY use it in a web application?
Yes, it was my tool of choice for a medium fidelity prototype.
Firebase.com
Firebase's main feature is the ability to add real time to your application without all the hassle. You don't need a MeteorJS server to bring push notifications to your software. If you know JavaScript, you are halfway to bringing real-time magic to your users.
How does Firebase code look in JavaScript?
Firebase works in a REST fashion, and I think they do an amazing job structuring the Glory of REST. As a good example, look at the following resource structure in Firebase:
https://SampleChat.firebaseIO-demo.com/users/fred/name/first
You don't need to be a rocket scientist to see that this retrieves the first name of the user "Fred", given there is at least one (usually there would be a UUID instead of a name, but hey, it's an example, give me a break).
To start using Firebase, as with Parse, add their CDN JavaScript:
<script type='text/javascript' src='https://cdn.firebase.com/v0/firebase.js'></script>
Now, create a reference object that will allow you to consume the Firebase API:
var myRootRef = new Firebase('https://myprojectname.firebaseIO-demo.com/');
From there, you can create a bunch of neat applications.
var USERS_LOCATION = 'https://SampleChat.firebaseIO-demo.com/users';
var userId = "Fred"; // Username

var usersRef = new Firebase(USERS_LOCATION);
usersRef.child(userId).once('value', function (snapshot) {
    var exists = (snapshot.val() !== null);
    if (exists) {
        console.log("Username " + userId + " is part of our database");
    } else {
        console.log("We have no register of the username " + userId);
    }
});
What about authentication and security?
You are in luck! Firebase released their Security API about two weeks ago! I have yet to explore it, but I'm sure it fills most of the gaps that allowed random people to use your reference for their own purposes.
Did you REALLY use it in a web application?
Eeehm... ok, no. I used it in a Chrome extension! It's still in progress, but it's going to be a realtime chat inside a Chrome extension. Ain't that cool? Fine. I find it cool. Anyway, you can browse more awesome examples for Firebase on their examples page.
What's the magic of these services? If you've read up on Dependency Injection and Mock Object Testing, you'll see that at some point you could completely replace either of these services with your own REST web service.
Since these services were created to be used inside any application, they are CORS ready. As stated before, I have successfully used both of them from multiple domains without any issue (I'm even trying to use Firebase in a Chrome Extension, and I'm sure I will succeed soon).
Both Parse and Firebase have Data Browser managers, which means that you can see the data you are manipulating through a simple web browser. As a final disclaimer, I have no relationship with either of those services other than the fact that James Tamplin (Firebase co-founder) was amazing enough to lend me some beta access to Firebase.
You actually CAN use SQS from the browser, even without CORS, as long as you only need the browser to send messages, not receive them. Warning: this is a kludge that would make my CS professors cry.
When you perform a GET request via javascript, the browser will always perform the request, however, you'll only get access to the response if it was from the same origin (protocol, host, port). This is your ticket to ride, since messages can be posted to an SQS queue with just a GET, and who really cares about the response anyways?
Assuming you're using jQuery, your queue is https://sqs.us-east-1.amazonaws.com/71717171/myqueue, and it allows anyone to post a message, the following will post a message with the body "HITHERE" to the queue:
$.ajax({
    url: 'https://sqs.us-east-1.amazonaws.com/71717171/myqueue' +
         '?Action=SendMessage' +
         '&Version=2012-11-05' +
         '&MessageBody=HITHERE'
});
There'll be an error in the console saying that the request failed, but the message will show up in the queue anyways.
Have you considered JSONP? That is one way of calling cross-domain scripts from JavaScript without running into the same-origin policy. You're going to have to set up some script somewhere to send you the data, though. JavaScript alone just isn't up to the task.
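To make the mechanism concrete, here is a hedged sketch of the JSONP pattern; the endpoint URL and parameter names are placeholders, and the remote service must wrap its reply in the named callback:

// Sketch: send data cross-domain by loading a script tag (JSONP).
// The endpoint must respond with: handleReply({...});
function handleReply(data) {
    console.log('server said:', data);
}
var script = document.createElement('script');
script.src = 'https://api.example.com/submit?value=42&callback=handleReply';
document.head.appendChild(script);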
Depending on what kind of data you want to send and what you're going to do with it, one way of solving it would be to post the data to a Google Spreadsheet using Ajax. It's a bit tricky to accomplish, though. Here is another Stack Overflow question about it.
If presentation isn't that important you can just have an embedded Google Spreadsheet Form.
What about mailto:youremail@goeshere.com? ihihi
Meanwhile, you can spin up some free hosting like Altervista or Heroku or something similar, so you can connect to their servers. If I remember correctly, these free services allow you to create a sort of personal web service and push Ajax requests to it. Obviously their servers are slow for free accounts, but I think that's enough if you don't have much user traffic; otherwise you should move to a better VPS, hosting or cloud solution.
Maybe CouchDB can provide what you're after. IrisCouch provides free CouchDB instances. Lock it down so that users can't view documents and have a sensible validation function and you've got yourself an easy RESTful place to stick your data in.

Best practice for a practical real-time app with .NET MVC4

Hello people and bots,
I'm developing the front-end web application for a system and have hit a little problem.
The whole application has to be on one page, meaning no refreshes or page changes during the flow of the major areas of the application.
It also has to work in all web browsers including IE7 and be deploy-able as a HTML5 application for tablets and mobile phones.
It's based around a user logging in, which is regulated by WebForms authentication; I then need to poll or long-poll the server for updates. This would be simple if I could make a request for each part of the system; however, if the system gets too big, session blocking becomes a problem and it could also end up DDoSing itself. So what I need is a way to send one request and build one response from it.
My first idea was to build a response model with JSON or XML and let JavaScript decide what needs to be updated. For example, when a new comment is made on a topic, all the clients see the update near-instantly. My idea was to send something like this:
{
    "d": {
        "addComment": [
            { "topicId": "topic1", "description": "haha" },
            { "topicId": "topic1", "description": "lol" }
        ],
        "addTopics": ["topic2", "topic708"]
    }
}
JavaScript would then parse this and add "haha" and "lol" to the element with the id "topic1". In theory it seems quite simple to achieve, but as the system gets bigger it constantly threatens to turn into a big mess.
My second idea was to simply have a set of client-side functions for adding and removing stuff in the DOM, and then use a JSONP technique to call those functions.
var addComment = function (commentModel) {
    var topic = document.getElementById(commentModel.topicId),
        // Note: getElementsByClassName (plural "Elements"), not getElementByClassName
        comments = topic.getElementsByClassName('comments')[0],
        comment = document.createElement('div');
    comment.innerHTML = commentModel.description;
    comments.appendChild(comment);
    updateUnreadComments();
};
Response:
(function (addComment) {
    addComment({ topicId: 'topic1', description: 'haha' });
    addComment({ topicId: 'topic1', description: 'lol' });
})(addComment);
Personally, I find both methods rather annoying workarounds for what is nothing more than simple broadcasting.
I was wondering if anyone else has ever had the same problems and if they have ever come up with a more creative solution.
I would also like to know what kind of security issues I could expect with JSONP.
All help is appreciated!
You may take a look at SignalR. Scott Hanselman also blogged about it.
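For context, a minimal sketch of what the receiving side could look like with SignalR's jQuery client; the hub name "topicHub" and method "addComment" are assumptions, not from the original post:

// Sketch: SignalR jQuery client with a generated hub proxy.
// Assumes jquery.signalR and the generated /signalr/hubs script are loaded.
var hub = $.connection.topicHub;

// Runs on every connected client when the server broadcasts a comment,
// e.g. via Clients.All.addComment(...) on the server side
hub.client.addComment = function (commentModel) {
    addComment(commentModel); // reuse the DOM helper from above
};

$.connection.hub.start().done(function () {
    console.log('connected, id = ' + $.connection.hub.id);
});

With this approach, the server pushes one small message per event, which sidesteps the combined-response-model juggling described above.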
