Smoothie Charts real-time high speed data - javascript

I've got high-speed data output from a C++ program: an int is produced every 800 microseconds (~1.25 kHz). I've played around with running this through websocketd to stream it live to a Smoothie Charts chart on a web server on the same machine. I've found the best way to make the chart as smooth as possible is to output an array of 100 data points to the websocket at a time. Inside my Smoothie JavaScript, I split the array and append the values with X values spread evenly between the arrival of the previous array and this one:
conn.onmessage = function (evt) {
    $("#log").text("Connected");
    var arrZ = evt.data.split(',');
    var newTime = new Date().getTime();
    // spread the 100 samples evenly across the interval since the last batch
    var timeInterval = (newTime - oldTime) * 0.01;
    for (var i = 0; i < 100; i++) {
        timeSeries.append(oldTime, parseFloat(arrZ[i]));
        oldTime += timeInterval;
    }
    oldTime = new Date().getTime();
};
The data plot is not fantastic, but it works. Is there any other (faster) way, architecture-wise, to get this data onto SmoothieCharts?
Thanks,

The problem is that underneath you're using TCP/IP packets, and with just a few bytes of data on every update those packets aren't immediately full. Your OS has the reasonable expectation that waiting a bit will improve bandwidth, since it can send the same amount of raw data with fewer packet headers.
However, this backfires for you: you care about latency, not throughput. The easiest solution is to stuff dummy data into your packets so that each message, once formatted and padded, is roughly one packet (~1500 bytes).
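Whether you pad in the C++ producer or in a small relay in front of websocketd, the idea is the same. A minimal JavaScript sketch (the conn object and the CSV message format are assumed from the question):

// Pad each outgoing message toward one Ethernet MTU (~1500 bytes) so the OS
// flushes the packet immediately instead of waiting to coalesce small writes.
function padToMtu(message, size) {
    size = size || 1400;                         // leave headroom for frame/packet headers
    if (message.length < size) {
        message += ' '.repeat(size - message.length);
    }
    return message;
}

// e.g. before writing a batch of 100 values to the websocket:
// conn.send(padToMtu(values.join(',')));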

Related

Do we get all the data with createMediaStreamSource in webaudio?

I am using webaudio with javascript, and this simple example (to be used with google-chrome),
https://www-fourier.ujf-grenoble.fr/~faure/enseignement/javascript/code/web_audio/ex_microphone_to_array/
the data are collected from the microphone into an array, in real time.
Then we compare the true time (t1) with the time represented by the data (t2), and they differ by a fixed ratio t2/t1 = 1.4.
Remark: here, true time t1 means the elapsed time measured by the clock, i.e. obtained via Date().getTime(), whereas
time t2 = N*Dt, where N is the number of samples obtained from the microphone and Dt = 1/(sample rate) = 1/44100 s is the time between two samples.
My question is: does it mean that the sample data rate is not 44100Hz but 30700Hz*2 (i.e. with two channels)?
Or are there some repetitions within the data?
Another related question please: is there a way to check that during such a real time acquisition process, we have not lost any data?
From a quick glance at your test code, you are using an AnalyserNode to determine t2, and you call the function F3() via rAF. This happens about every 16.6 ms or 732 samples (at 44.1 kHz). But you increment t2 by N = 1024 frames each time. Hence your t2 value is about 1.4 times larger than the actual number of frames. (Which is what you're actually getting!)
If you really want to measure how many samples you've received, you have to do it synchronously in the audio graph, so use either a ScriptProcessorNode or an AudioWorklet to count how many samples of data have been processed. You can then increment t2 by the correct amount, which should match your t1 values more closely. But note that the clock that drives the t1 value is very likely different from the audio clock that drives the audio system. They will drift over time, although the drift is probably pretty small as long as you don't run this for days at a time.
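A minimal sketch of the ScriptProcessorNode approach (the context and source names are assumptions matching the question's setup):

// Count frames synchronously in the audio graph so t2 reflects what was actually delivered.
var processor = context.createScriptProcessor(1024, 1, 1);
var framesProcessed = 0;

processor.onaudioprocess = function (e) {
    framesProcessed += e.inputBuffer.length;          // frames received in this callback
    var t2 = framesProcessed / context.sampleRate;    // seconds of audio processed so far
};

source.connect(processor);
processor.connect(context.destination);   // keep the node pulled; some browsers require this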

Recorder.js calculate and offset recording for latency

I'm using Recorder.js to record audio from Google Chrome desktop and mobile browsers. In my specific use case I need to record exactly 3 seconds of audio, starting and ending at a specific time.
Now I know that when recording audio, your soundcard cannot work in realtime due to hardware delays, so there is always a memory buffer which allows you to keep up recording without hearing jumps/stutters.
Recorder.js allows you to configure the bufferLen variable exactly for this, while sampleRate is taken automatically from the audio context object. Here is a simplified version of how it works:
var context = new AudioContext();
var recorder;

navigator.getUserMedia({ audio: true }, function (stream) {
    recorder = new Recorder(context.createMediaStreamSource(stream), {
        bufferLen: 4096
    });
});

function recordLoop() {
    recorder.record();
    window.setTimeout(function () {
        recorder.stop();
    }, 3000);
}
The issue I'm facing is that record() does not offset for the buffer latency, and neither does stop(). So instead of getting a three-second sound, it's 2.97 seconds and the start is cut off.
This means my recordings don't start in the same place, and when I loop them, the loops are different lengths depending on your device latency!
There are two potential solutions I see here:
Adjust Recorder.js code to offset the buffer automatically against your start/stop times (maybe add new startSync/stopSync functions)
Calculate the latency and create two offset timers to start and stop Recorder.js at the correct points in time.
I'm trying solution 2, because solution 1 requires knowledge of buffer arrays which I don't have :( I believe the calculation for latency is:
var bufferSize = 4096;
var sampleRate = 44100;
var latency = (bufferSize / sampleRate) * 2; // 0.18575963718820862 secs
However when I run these calculations in a real test I get:
var duration = 2.972154195011338 secs
var latency = 0.18575963718820862 secs
var total = duration + latency // 3.1579138321995464 secs
Something isn't right, it doesn't make 3 seconds and it's beginning to confuse me now! I've created a working fork of Recorder.js demo with a log:
http://kmturley.github.io/Recorderjs/
Any help would be greatly appreciated. Thanks!
I'm a bit confused by your concern for the latency. Yes, it's true that the minimum possible latency is going to be related to the length of the buffer, but there are many other latencies involved. In any case, the latency has nothing to do with the recording duration, which seems to me to be what your question is about.
If you want to record an exactly 3 second long buffer at 44100 Hz, that is 44100*3 = 132,300 samples. The buffer size is 4096 samples and the system is only going to record a whole number of buffers. Given that, the closest you are going to get is to record either 32 or 33 complete buffers, which gives either 131,072 (2.97 seconds) or 135,168 (3.065 seconds) samples.
You have a couple options here.
Choose a buffer length that evenly divides the sample rate. e.g. 11025. You can then record exactly 12 buffers.
Record slightly longer than the 3.0 seconds you need and then throw the extra 2868 samples away, as sketched below.
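A minimal sketch of option 2, assuming you end up with a mono Float32Array covering at least 33 buffers (for example from Recorder.js's getBuffer() callback):

// Keep exactly 3 s of audio at 44.1 kHz and drop the trailing samples.
function trimToExactSeconds(samples, sampleRate, seconds) {
    var wanted = Math.round(sampleRate * seconds);   // 132300 samples for 3 s at 44100 Hz
    return samples.subarray(0, wanted);              // drops the extra 2868 samples
}

// var exact = trimToExactSeconds(channelData, 44100, 3);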

best way to sync data with html5 video

I am building an application that takes data from an Android app and replays it in a browser. The Android app allows the user to record a video, and while it is recording it logs data every 100ms, such as GPS position, speed and accelerometer readings, to a database. I want the user to be able to play the video back in their browser and have charts, a Google map, etc. show a real-time representation of the data synced to the video. I have already achieved this functionality, but it's far from perfect and I can't help thinking there must be a better way. What I am doing at the moment is getting all of the data from the database ordered by datetime ascending and outputting it as a JSON-encoded array. Here is the process I am doing in pseudo code:
Use video event listener to find the current datetime of video
do a while loop from the current item in the data array
For each iteration check whether the datetime for that row is less than the current datetime from the video
If it is then update the dials from the data
Increment array key
Here is my code:
var points = <?php echo json_encode($gps); ?>;
var start_time = <?php echo $gps[0]->milli; ?>;
var current_time = start_time;

$(document).ready(function () {
    top_speed = 240;
    min_angle = -210;
    max_angle = 30;
    total_angle = 0 - ((0 - max_angle) + min_angle);
    multiplier = top_speed / total_angle;
    speed_i = 0;

    video.addEventListener('timeupdate', function () {
        current_time = start_time + parseInt((video.currentTime * 1000).toFixed(0));
        while (typeof points[speed_i] !== 'undefined' && current_time > points[speed_i].milli) {
            newpos = new google.maps.LatLng(points[speed_i].latitude, points[speed_i].longitude);
            marker.setPosition(newpos);
            map.setCenter(newpos);
            angle = min_angle + (points[speed_i].speed * multiplier);
            $("#needle").rotate({
                animateTo: angle,
                center: [13, 11],
                duration: 100
            });
            speed_i++;
        }
    });
});
Here are the issues I seem to have encountered:
- Have to load thousands of rows into a JSON array, which can't be very good for performance
- Have to do while loop on every video call back - again can't be very good for performance
- Playback is always a bit behind
Can anyone think of any ways this can be improved or a better way completely to do it?
There are a few reasons why this may be running slowly. First, the timeupdate event only runs about every 250ms. So, if you're going to refresh at that rate, dandavis is right and you don't need that much data. But if you want animation that's that smooth, I suggest using requestAnimationFrame to update every 16ms or so.
As it is, if you update every 250ms, you're cycling through 2 or 3 data points and updating the map and needle three times in a row, which is unnecessary.
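If you go the requestAnimationFrame route, a rough sketch might look like this (updateDials is a hypothetical stand-in for the map/marker/needle updates, and this simple version does not handle seeking backwards):

var point_i = 0;

function tick() {
    var now = start_time + Math.round(video.currentTime * 1000);
    // advance to the most recent point at or before the current video time
    while (point_i < points.length && points[point_i].milli <= now) {
        point_i++;
    }
    if (point_i > 0) {
        updateDials(points[point_i - 1]);   // redraw map marker and needle once per frame
    }
    if (!video.paused && !video.ended) {
        requestAnimationFrame(tick);
    }
}

video.addEventListener('play', function () { requestAnimationFrame(tick); });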
I recommend looking into Popcorn.js, which is built exactly for this kind of thing and will take care of this for you. It will also handle seeking or playing backwards. You'll want to pre-process the data so each point has a start time and an end time in the video.
There are also some things you can do to make the data transfer more efficient. Take out any extra properties that you don't need on every point. You can store each data point as an array, so the property names don't have to be included in your JSON blob, and then you can clean that up with a few lines of JS code on the client side.
Finally, separate your data file from your script. Save it as a static JSON file (maybe even gzipped if your server configuration can handle it) and fetch it with XMLHttpRequest. That way, you can at least display the page sooner while waiting for the code to download. Better yet, look into using a JSON streaming tool like Oboe.js to start displaying data points even before the whole file is loaded.
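As a sketch of that last point (the file path is hypothetical), fetching a static JSON file instead of inlining it with PHP could look like:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/data/points.json', true);
xhr.onload = function () {
    points = JSON.parse(xhr.responseText);   // same array the PHP echo produced
    start_time = points[0].milli;
    // set up the charts and map here; the page itself has already rendered
};
xhr.send();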

Which format is returned from the fft with WebAudioAPI

I visualized an audio file with the Web Audio API and with Dancer.js. Everything works, but the two visualizations look very different. Can anybody help me find out why they look so different?
The Web-Audio-API code (fft.php, fft.js)
The dancer code (plugins/dancer.fft.js, js/playerFFT.js, fft.php)
The visualization for WebAudioAPI is on:
http://multimediatechnology.at/~fhs32640/sem6/WebAudio/fft.html
For Dancer is on
http://multimediatechnology.at/~fhs32640/sem6/Dancer/fft.php
The difference is in how the volumes at the frequencies are 'found'. Your code uses the AnalyserNode, which takes the values and also applies some smoothing, so your graph looks nice. Dancer uses a ScriptProcessorNode. The script processor fires a callback every time a certain sample length has gone through, and it passes that sample to e.inputBuffer. Then it just draws that 'raw' data, with no smoothing applied.
var
    buffers = [],
    channels = e.inputBuffer.numberOfChannels,
    resolution = SAMPLE_SIZE / channels,
    sum = function (prev, curr) {
        return prev[i] + curr[i];
    }, i;

for (i = channels; i--;) {
    buffers.push(e.inputBuffer.getChannelData(i));
}

for (i = 0; i < resolution; i++) {
    this.signal[i] = channels > 1 ? buffers.reduce(sum) / channels : buffers[0][i];
}

this.fft.forward(this.signal);
this.dancer.trigger('update');
This is the code that Dancer uses to get the sound strength at the frequencies.
(this can be found in adapterWebAudio.js).
One is simply using the native frequency data provided by the Web Audio API via analyser.getByteFrequencyData().
The other is doing its own calculation: it uses a ScriptProcessorNode, and when that node's onaudioprocess event fires, it takes the channel data from the input buffer and converts it to a frequency-domain spectrum by performing a forward transform, i.e. computing the Discrete Fourier Transform of the signal with the Fast Fourier Transform algorithm.
idbehold's answer is partially correct (smoothing is getting applied), but a bigger issue is that the Web Audio code is using getByteFrequencyData instead of getFloatFrequencyData. The "byte" version does processing to maximize the byte's range - it spreads minDb to maxDb across the 0-255 byte range.
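For comparison, a small sketch of the two analyser read-outs (variable names are assumptions):

var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

var byteData = new Uint8Array(analyser.frequencyBinCount);
var floatData = new Float32Array(analyser.frequencyBinCount);

analyser.getByteFrequencyData(byteData);    // 0-255, scaled between minDecibels and maxDecibels, smoothed
analyser.getFloatFrequencyData(floatData);  // raw dB values, much closer to a bare FFT magnitude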

Load large dataset into crossfilter/dc.js

I built a crossfilter with several dimensions and groups to display the data visually using dc.js. The data visualized is bike trip data, and each trip is loaded in. Right now there are over 750,000 rows. The JSON file I'm using is 70 MB and will only grow as I receive more data in the months to come.
So my question is: how can I make the data leaner so it scales well? Right now it takes approximately 15 seconds to load on my internet connection, but I'm worried it will take too long once I have much more data. I've also tried, unsuccessfully, to get a progress bar/spinner to display while the data loads.
The columns I need for the data are start_date, start_time, usertype, gender, tripduration, meters, age. I have shortened these fields in my JSON to start_date, start_time, u, g, dur, m, age so the file is smaller. On the crossfilter there is a line chart at the top showing the total # of trips per day. Below that there are row charts for the day of week (calculated from the data), month (also calculated), and pie charts for usertype, gender, and age. Below that there are two bar charts for the start_time (rounded down to the hour) and tripduration (rounded up to the minute).
The project is on GitHub: https://github.com/shaunjacobsen/divvy_explorer (the dataset is in data2.json). I tried to create a jsfiddle but it is not working (likely due to the data, even gathering only 1,000 rows and loading it into the HTML with <pre> tags): http://jsfiddle.net/QLCS2/
Ideally it would function so that only the data for the top chart would load in first: this would load quickly since it's just a count of data by day. However, once it gets down into the other charts it needs progressively more data to drill down into finer details. Any ideas on how to get this to function?
I'd recommend shortening all of your field names in the JSON to 1 character (including "start_date" and "start_time"). That should help a little bit. Also, make sure that compression is turned on on your server. That way the data sent to the browser will be automatically compressed in transit, which should speed things up a ton if it's not already turned on.
For better responsiveness, I'd also recommend first setting up your Crossfilter (empty), all your dimensions and groups, and all your dc.js charts, then using Crossfilter.add() to add more data into your Crossfilter in chunks. The easiest way to do this is to divide your data up into bite-sized chunks (a few MBs each) and load them serially. So if you are using d3.json, then start the next file load in the callback of the previous file load. This results in a bunch of nested callbacks, which is a bit nasty, but should allow the user interface to be responsive while the data is loading.
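A rough sketch of that chunked loading (file names and chunk count are hypothetical; d3 v3-style callbacks assumed):

var cf = crossfilter([]);   // create dimensions, groups, and dc.js charts against this first
var chunks = ['data/trips-0.json', 'data/trips-1.json', 'data/trips-2.json'];

function loadChunk(n) {
    if (n >= chunks.length) { return; }
    d3.json(chunks[n], function (error, rows) {
        if (error) { return console.error(error); }
        cf.add(rows);         // append the new records to the existing crossfilter
        dc.redrawAll();       // refresh the charts so the user sees progress
        loadChunk(n + 1);     // only start the next request once this chunk is in
    });
}
loadChunk(0);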
Lastly, with this much data I believe you will start running into performance issues in the browser, not just while loading the data. I suspect you are already seeing this and that the 15 second pause you are seeing is at least partially in the browser. You can check by profiling in your browser's developer tools. To address this, you'll want to profile and identify performance bottlenecks, then try to optimize those. Also - be sure to test on slower computers if they are in your audience.
Consider my class design. It doesn't match yours but it illustrates my points.
public class MyDataModel
{
    public List<MyDatum> Data { get; set; }
}

public class MyDatum
{
    public long StartDate { get; set; }
    public long EndDate { get; set; }
    public int Duration { get; set; }
    public string Title { get; set; }
}
The start and end dates are Unix timestamps and the duration is in seconds.
Serializes to:
"{"Data":
[{"StartDate":1441256019,"EndDate":1441257181,
"Duration":451,"Title":"Rad is a cool word."}, ...]}"
One row of datum is 92 chars.
Let's start compressing!
Convert dates and times to base 60 strings.
Store everything in an array of an array of strings.
public class MyDataModel
{
    public List<List<string>> Data { get; set; }
}
Serializes to:
"{"Data":[["1pCSrd","1pCTD1","7V","Rad is a cool word."],...]}"
One row of datum is now 47 chars.
moment.js is a good library for working with dates and time. It has functions built in to unpack the base 60 format.
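If you'd rather roll the conversion yourself, here is a small JavaScript sketch; the digit alphabet is an assumption, but it reproduces the "1pCSrd" value shown above for the timestamp 1441256019:

var DIGITS = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwx';  // 60 symbols

function toBase60(n) {
    var s = '';
    do {
        s = DIGITS[n % 60] + s;
        n = Math.floor(n / 60);
    } while (n > 0);
    return s;
}

function fromBase60(s) {
    var n = 0;
    for (var i = 0; i < s.length; i++) {
        n = n * 60 + DIGITS.indexOf(s[i]);
    }
    return n;
}

// toBase60(1441256019) === "1pCSrd"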
Working with an array of arrays will make your code less readable so add comments to document the code.
Load just the most recent 90 days, zoomed to 30 days. When the user drags the brush on the range chart to the left, start fetching more data in chunks of 90 days until the user stops dragging. Add the data to the existing crossfilter using the add method.
As you add more and more data you will notice that your charts get less and less responsive. That is because you have rendered hundreds or even thousands of elements in your svg. The browser is getting crushed. Use the d3 quantize function to group data points into buckets. Reduce the displayed data to 50 buckets.
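A sketch of the quantizing idea using d3 v3's quantize scale (cf is your crossfilter instance; the field name dur and the 120-minute cap are assumptions based on the question's data):

var maxDuration = 120;   // minutes; pick a sensible cap from your data
var bucket = d3.scale.quantize()
    .domain([0, maxDuration])
    .range(d3.range(50));          // bucket indices 0..49

var durationDim = cf.dimension(function (d) { return bucket(d.dur); });
var durationGroup = durationDim.group();   // at most 50 group entries, regardless of row count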
Quantizing is worth the effort and is the only way you can create a scalable graph with a continuously growing dataset.
Your other option is to abandon the range chart and group the data month over month, day over day, and hour over hour. Then add a date range picker. Since your data would be grouped by month, day, and hour you'll find that even if you rode your bike every hour of the day you'd never have a result set larger than 8766 rows.
I've observed similar issues with large datasets (working at an enterprise company), and I found a couple of ideas worth trying.
your data have a regular structure, so you can put the keys in the first row and only the data in the following rows, imitating CSV (header first, data next)
date/time can be changed to an epoch number (and you can move the start of the epoch to 01/01/2015 and convert back when received)
use oboe.js when getting the response from the server (http://oboejs.com/); as the data set will be large, consider using oboe.drop during load
update the visualization with a JavaScript timer
timer sample:
var datacnt = 0;
var timerId = setInterval(function () {
    d3.select("#count-data-current").text(datacnt);
    // update of the visualization should go here, something like dc.redrawAll()...
}, 300);

oboe("relative-or-absolute path to your data (ajax)")
    .node('CNT', function (count, path) {
        d3.select("#count-data-all").text("Expecting " + count + " records");
        return oboe.drop;
    })
    .node('data.*', function (record, path) {
        datacnt++;
        return oboe.drop;
    })
    .node('done', function (item, path) {
        d3.select("#progress-data").text("all data loaded");
        clearInterval(timerId);   // the timer was created with setInterval, so clear it with clearInterval
        d3.select("#count-data-current").text(datacnt);
    });
data sample
{"CNT":107498,
"keys": "DATACENTER","FQDN","VALUE","CONSISTENCY_RESULT","FIRST_REC_DATE","LAST_REC_DATE","ACTIVE","OBJECT_ID","OBJECT_TYPE","CONSISTENCY_MESSAGE","ID_PARAMETER"],
"data": [[22,202,"4.9.416.2",0,1449655898,1453867824,-1,"","",0,45],[22,570,"4.9.416.2",0,1449655912,1453867884,-1,"","",0,45],[14,377,"2.102.453.0",-1,1449654863,1468208273,-1,"","",0,45],[14,406,"2.102.453.0",-1,1449654943,1468208477,-1,"","",0,45],[22,202,"10.2.293.0",0,1449655898,1453867824,-1,"","",0,8],[22,381,"10.2.293.0",0,1449655906,1453867875,-1,"","",0,8],[22,570,"10.2.293.0",0,1449655912,1453867884,-1,"","",0,8],[22,381,"1.80",0,1449655906,1453867875,-1,"","",0,41],[22,570,"1.80",0,1449655912,1453867885,-1,"","",0,41],[22,202,"4",0,1449655898,1453867824,-1,"","",0,60],[22,381,"4",0,1449655906,1453867875,-1,"","",0,60],[22,570,"4",0,1449655913,1453867885,-1,"","",0,60],[22,202,"A20",0,1449655898,1453867824,-1,"","",0,52],[22,381,"A20",0,1449655906,1453867875,-1,"","",0,52],[22,570,"A20",0,1449655912,1453867884,-1,"","",0,52],[22,202,"20140201",2,1449655898,1453867824,-1,"","",0,40],[22,381,"20140201",2,1449655906,1453867875,-1,"","",0,40],[22,570,"20140201",2,1449655912,1453867884,-1,"","",0,40],[22,202,"16",-4,1449655898,1453867824,-1,"","",0,58],[22,381,"16",-4,1449655906,1453867875,-1,"","",0,58],[22,570,"16",-4,1449655913,1453867885,-1,"","",0,58],[22,202,"512",0,1449655898,1453867824,-1,"","",0,57],[22,381,"512",0,1449655906,1453867875,-1,"","",0,57],[22,570,"512",0,1449655913,1453867885,-1,"","",0,57],[22,930,"I32",0,1449656143,1461122271,-1,"","",0,66],[22,930,"20140803",-4,1449656143,1461122271,-1,"","",0,64],[14,1359,"10.2.340.19",0,1449655203,1468209257,-1,"","",0,131],[14,567,"10.2.340.19",0,1449655185,1468209111,-1,"","",0,131],[22,930,"4.9.416.0",-1,1449656143,1461122271,-1,"","",0,131],[14,1359,"10.2.293.0",0,1449655203,1468209258,-1,"","",0,13],[14,567,"10.2.293.0",0,1449655185,1468209112,-1,"","",0,13],[22,930,"4.9.288.0",-1,1449656143,1461122271,-1,"","",0,13],[22,930,"4",0,1449656143,1461122271,-1,"","",0,76],[22,930,"96",0,1449656143,1461122271,-1,"","",0,77],[22,930,"4",0,1449656143,1461122271,-1,"","",0,74],[22,930,"VMware ESXi 5.1.0 build-2323236",0,1449656143,1461122271,-1,"","",0,17],[21,616,"A20",0,1449073850,1449073850,-1,"","",0,135],[21,616,"4",0,1449073850,1449073850,-1,"","",0,139],[21,616,"12",0,1449073850,1449073850,-1,"","",0,138],[21,616,"4",0,1449073850,1449073850,-1,"","",0,140],[21,616,"2",0,1449073850,1449073850,-1,"","",0,136],[21,616,"512",0,1449073850,1449073850,-1,"","",0,141],[21,616,"Microsoft Windows Server 2012 R2 Datacenter",0,1449073850,1449073850,-1,"","",0,109],[21,616,"4.4.5.100",0,1449073850,1449073850,-1,"","",0,97],[21,616,"3.2.7895.0",-1,1449073850,1449073850,-1,"","",0,56],[9,2029,"10.7.220.6",-4,1470362743,1478315637,1,"vmnic0","",1,8],[9,1918,"10.7.220.6",-4,1470362728,1478315616,1,"vmnic3","",1,8],[9,1918,"10.7.220.6",-4,1470362727,1478315616,1,"vmnic2","",1,8],[9,1918,"10.7.220.6",-4,1470362727,1478315615,1,"vmnic1","",1,8],[9,1918,"10.7.220.6",-4,1470362727,1478315615,1,"vmnic0","",1,8],[14,205,"934.5.45.0-1vmw",-50,1465996556,1468209226,-1,"","",0,47],[14,1155,"934.5.45.0-1vmw",-50,1465996090,1468208653,-1,"","",0,14],[14,963,"934.5.45.0-1vmw",-50,1465995972,1468208526,-1,"","",0,14],
"done" : true}
sample of converting the keys-first data into a full array of objects
// function to convert the main data to an array of objects
function convertToArrayOfObjects(data) {
    var keys = data.shift(),
        i = 0, k = 0,
        obj = null,
        output = [];

    for (i = 0; i < data.length; i++) {
        obj = {};
        for (k = 0; k < keys.length; k++) {
            obj[keys[k]] = data[i][k];
        }
        output.push(obj);
    }
    return output;
}
The function above works with a slightly modified version of the data, sample here:
[["ID1","ID2","TEXT1","STATE1","DATE1","DATE2","STATE2","TEXT2","TEXT3","ID3"],
[14,377,"2.102.453.0",-1,1449654863,1468208273,-1,"","",0,45],
[14,406,"2.102.453.0",-1,1449654943,1468208477,-1,"","",0,45],
[22,202,"10.2.293.0",0,1449655898,1453867824,-1,"","",0,8],
[22,381,"10.2.293.0",0,1449655906,1453867875,-1,"","",0,8],
[22,570,"10.2.293.0",0,1449655912,1453867884,-1,"","",0,8],
[22,381,"1.80",0,1449655906,1453867875,-1,"","",0,41],
[22,570,"1.80",0,1449655912,1453867885,-1,"","",0,41],
[22,202,"4",0,1449655898,1453867824,-1,"","",0,60],
[22,381,"4",0,1449655906,1453867875,-1,"","",0,60],
[22,570,"4",0,1449655913,1453867885,-1,"","",0,60],
[22,202,"A20",0,1449655898,1453867824,-1,"","",0,52]]
Also consider using memcached (https://memcached.org/) or Redis (https://redis.io/) to cache data on the server side; depending on your data size, Redis might get you further.
