I have an Arduino kit and a Web Serial API setup that receives the data into an HTML div with the id of target.
At the moment it is logging all the data into one stream (there are a few dials and switches on the Arduino).
The data looks like this...
RADI 61 RADI 62 RADI 63 RADI 64 WIND 1 WIND 0 WIND 1
...RADI being a dial value and WIND being an on/off switch.
Is there a way to separate this information into RADI and WIND chunks, ideally into separate HTML text boxes, so I can manipulate that data?
Any help would be appreciated
Here is my current javascript code...
document.getElementById('connectButton').addEventListener('click', () => {
    if (navigator.serial) {
        connectSerial();
    } else {
        alert('Web Serial API not supported.');
    }
});
async function connectSerial() {
    const log = document.getElementById('target');
    try {
        const port = await navigator.serial.requestPort();
        await port.open({ baudRate: 9600 });
        const decoder = new TextDecoderStream();
        port.readable.pipeTo(decoder.writable);
        const inputStream = decoder.readable;
        const reader = inputStream.getReader();
        while (true) {
            const { value, done } = await reader.read();
            if (value) {
                log.textContent += value + '\n';
            }
            if (done) {
                console.log('[readLoop] DONE', done);
                reader.releaseLock();
                break;
            }
        }
    } catch (error) {
        log.innerHTML = error;
    }
}
Assuming you have a couple of textboxes:
<input id='textbox1'>
<input id='textbox2'>
You can update your log references to the following:
const log1 = document.getElementById('textbox1');
const log2 = document.getElementById('textbox2');
Then in your loop:
if (value) {
    let parse = String(value).trim().split(' ');
    // Note: == binds tighter than ??, so `parse[0] ?? '' == 'WIND'` would never
    // compare the label; the ?? expression needs its own parentheses.
    if ((parse[0] ?? '') === 'WIND') log1.value = parse[1];
    if ((parse[0] ?? '') === 'RADI') log2.value = parse[1];
}
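One caveat: serial reads are not guaranteed to align with whole "LABEL value" pairs, so a chunk can end mid-token. A sketch of a more robust parser (the `pending`/`latest`/`consume` names are made up for illustration) buffers the text and only consumes complete pairs, keeping the newest value per label:

```javascript
// Buffer incoming serial text and consume complete "LABEL value" pairs,
// remembering the latest value seen for each label.
let pending = '';
const latest = {};

function consume(chunk) {
    pending += chunk;
    const tokens = pending.split(' ');
    pending = tokens.pop(); // the last token may be cut off mid-number
    for (let i = 0; i + 1 < tokens.length; i += 2) {
        latest[tokens[i]] = tokens[i + 1];
    }
    if (tokens.length % 2 === 1) {
        // a label arrived without its value yet; re-queue it for the next chunk
        pending = tokens[tokens.length - 1] + ' ' + pending;
    }
    return latest;
}

console.log(consume('RADI 61 WI')); // { RADI: '61' }
console.log(consume('ND 1 RADI 6')); // { RADI: '61', WIND: '1' }
```

Inside the read loop you would then call `consume(value)` on each chunk and copy `latest['WIND']` / `latest['RADI']` into the text boxes.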
I am testing MySQL Document Store. In order to properly compare with our relational tables, I am attempting to transform our current table into a collection. There are approximately 320K records in the table that I wish to export and add to a new collection. I am attempting to use the Connector/Node.js to do this. To avoid blowing it up, I am attempting to add 10K records at a time, but only the first 10K records are inserted. I have confirmed that it is the first 10K records; it is not overwriting each iteration. And those 10K records are correctly structured.
const mysqlx = require('@mysql/xdevapi');
const config = {
    password: 'notMyPassword',
    user: 'notMyUser',
    host: 'notMyHost',
    port: 33060,
    schema: 'sample'
};
var mySchema;
var myCollection;
var recCollection = [];
mysqlx.getSession(config).then(session => {
    mySchema = session.getSchema('sample');
    mySchema.dropCollection('sample_test');
    mySchema.createCollection('sample_test');
    myCollection = mySchema.getCollection('sample_test');
    var myTable = mySchema.getTable('sampledata');
    return myTable.select('FormDataId', 'FormId', 'DateAdded', 'DateUpdated', 'Version', 'JSON').orderBy('FormDataId').execute();
}).then(result => {
    console.log('we have a result to analyze...');
    var tmp = result.fetchOne();
    while (tmp !== null && tmp !== '' && tmp !== undefined) {
        var r = tmp;
        var myRecord = {
            'dateAdded': r[2],
            'dateUpdated': r[3],
            'version': r[4],
            'formId': r[1],
            'dataId': r[0],
            'data': r[5]
        };
        recCollection.push(myRecord);
        if (recCollection.length >= 10000) {
            console.log('inserting 10000');
            try {
                myCollection.add(recCollection).execute();
            } catch (ex) {
                console.log('error: ' + ex);
            }
            recCollection.length = 0;
        }
        tmp = result.fetchOne();
    }
});
It looks like an issue with how you handle asynchronous execution. The execute() method of a CollectionAdd statement is asynchronous and returns a Promise.
In a while loop, unless you "await" each execution, the construct cannot handle it for you, even though the first call to the asynchronous method always goes through. That's why only the first 10k documents are added.
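The difference is easy to demonstrate with a stand-in for the asynchronous .execute() call (the `fakeInsert` helper below is hypothetical): awaiting inside the loop keeps the batches strictly in order, even when a later batch would otherwise finish sooner.

```javascript
// Completion order of awaited batches, despite deliberately skewed delays.
const order = [];

function fakeInsert(batch, delayMs) {
    // Stand-in for an asynchronous database insert returning a Promise.
    return new Promise((resolve) => setTimeout(() => {
        order.push(batch); // record completion order
        resolve();
    }, delayMs));
}

async function insertSequentially(batches) {
    for (let i = 0; i < batches.length; i++) {
        // Earlier batches get *longer* delays; without the await, 'batch3'
        // would land first. With it, completion order stays batch1..batch3.
        await fakeInsert(batches[i], (batches.length - i) * 10);
    }
    console.log(order); // [ 'batch1', 'batch2', 'batch3' ]
}

insertSequentially(['batch1', 'batch2', 'batch3']);
```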
You also need to be careful with APIs like createCollection() and dropCollection(), because they are also asynchronous and similarly return a Promise.
Using your own example (without looking into the specifics), it can be something like the following:
const mysqlx = require('@mysql/xdevapi');
const config = {
    password: 'notMyPassword',
    user: 'notMyUser',
    host: 'notMyHost',
    port: 33060,
    schema: 'sample'
};
// without top-level await
const main = async () => {
    const session = await mysqlx.getSession(config);
    const mySchema = session.getSchema('sample');
    await mySchema.dropCollection('sample_test');
    const myCollection = await mySchema.createCollection('sample_test');
    const myTable = mySchema.getTable('sampledata');
    const result = await myTable.select('FormDataId', 'FormId', 'DateAdded', 'DateUpdated', 'Version', 'JSON')
        .orderBy('FormDataId')
        .execute();
    const recCollection = [];
    // "let", not "const": tmp is reassigned at the end of each iteration
    let tmp = result.fetchOne();
    while (tmp !== null && tmp !== '' && tmp !== undefined) {
        let r = tmp;
        let myRecord = {
            'dateAdded': r[2],
            'dateUpdated': r[3],
            'version': r[4],
            'formId': r[1],
            'dataId': r[0],
            'data': r[5]
        };
        recCollection.push(myRecord);
        if (recCollection.length >= 10000) {
            console.log('inserting 10000');
            try {
                await myCollection.add(recCollection).execute();
            } catch (ex) {
                console.log('error: ' + ex);
            }
            recCollection.length = 0;
        }
        tmp = result.fetchOne();
    }
    // insert any remaining records (fewer than 10000) after the loop
    if (recCollection.length > 0) {
        await myCollection.add(recCollection).execute();
    }
};
main();
Disclaimer: I'm the lead developer of the MySQL X DevAPI connector for Node.js
I am trying to create threads in JS: the first thread reads a file and pushes words into an array, and the second thread reads the array and console.logs each word.
I haven't found any solution other than using worker_threads.
This is my main file:
const { Worker } = require('worker_threads');

var data = [];
var i = 0;
var finish = false;
function runWorker() {
    const worker = new Worker('./worker.js', { workerData: { i, data } });
    const worker2 = new Worker('./worker2.js', { workerData: { i, data, finish } });
    worker.on('message', function(x) {
        data.push(x.data);
        if (x.finish != undefined) {
            finish = true;
        }
    });
    worker.on('error', function(error) {
        console.log(error);
    });
    worker2.on('message', function(x) {
        console.log(x);
    });
    worker2.on('error', function(error) {
        console.log(error);
    });
    worker.on('exit', code => {
        if (code !== 0) console.log(new Error(`Worker stopped with exit code ${code}`));
    });
}
runWorker();
This is my worker.js file:
const { workerData, parentPort } = require('worker_threads');
var fs = require('fs');
const chalk = require('chalk');
fs.readFile('./input.txt', async function read(err, data) {
    if (err) {
        console.log(' |-> ' + chalk.bold.red("unable to read the file"));
    }
    else {
        console.log(' |-> ' + chalk.blue("starting to read the file"));
        data = data.toString().split(" ");
        for (var i = 0; i < data.length; i++) {
            parentPort.postMessage({ data: data[i] });
        }
        parentPort.postMessage({ finish: true });
    }
})
Here I read a file and send each word to the main program, which puts it into an array.
This is my worker2.js file:
const { workerData, parentPort } = require('worker_threads');
let finish = false;
while (!finish) {
    let data = workerData.data;
    let i = workerData.i;
    finish = workerData.finish;
    console.log(data[i]);
}
Here I have an infinite loop that prints whatever is in the array.
I have a file of name and date-of-birth-info. For each line in the file, I need to submit the data to a web form and see what result I get. I'm using Node and Puppeteer (headless), as well as readline to read the file.
The code works fine for small files, but when I run it on the full 5000 names, or even a few hundred, I end up with hundreds of headless instances of Chromium, bringing my machine to its knees and possibly creating confounding timeout errors.
I'd prefer to wait for each form submission to complete, or otherwise throttle the processing so that no more than x names are in process at once. I've tried several approaches, but none does what I want. I'm not a JS whiz at all, so there's probably questionable design going on.
Any thoughts?
const puppeteer = require('puppeteer');
const fs = require('fs');
const readline = require('readline');
const BALLOT_TRACK_URL = 'https://www.example.com/ballottracking.aspx';
const VOTER_FILE = 'MailBallotsTT.tab';
const VOTER_FILE_SMALL = 'MailBallotsTTSmall.tab';
const COUNTY = 'Example County';
const checkBallot = async (fName, lName, dob, county) => {
    /* Initiate the Puppeteer browser */
    const browser = await puppeteer.launch({ headless: true });
    const page = await browser.newPage();
    await page.goto(BALLOT_TRACK_URL, { waitUntil: 'networkidle0' });
    // fill out the form
    await page.type('#ctl00_ContentPlaceHolder1_FirstNameText', fName);
    await page.type('#ctl00_ContentPlaceHolder1_LastNameText', lName);
    await page.type('#ctl00_ContentPlaceHolder1_DateOfBirthText', dob);
    await page.type('#ctl00_ContentPlaceHolder1_CountyDropDown', county);
    let pageData = await page.content();
    // Extract the results from the page
    try {
        const submitSelector = 'input[name="ctl00$ContentPlaceHolder1$RetrieveButton"]';
        const tableSelector = '#ctl00_ContentPlaceHolder1_ResultPanel > div > div > div > table > tbody > tr:nth-child(3) > td:nth-child(7) > div';
        const foundSubmitSelector = await page.waitForSelector(submitSelector, { timeout: 5000 });
        const clickResult = await page.click(submitSelector);
        const foundTable = await page.waitForSelector(tableSelector, { timeout: 5000 });
        let data = await page.evaluate((theSelector) => {
            let text = document.querySelector(theSelector).innerHTML.replaceAll('<br>', '').trim();
            /* Returning an object filled with the scraped data */
            return {
                text
            };
        }, tableSelector);
        return data;
    } catch (error) {
        return {
            text: error.message
        };
    } finally {
        browser.close();
    }
};
const mainFunction = () => {
    const readInterface = readline.createInterface({
        input: fs.createReadStream(VOTER_FILE_SMALL),
        output: null,
        console: false
    });
    readInterface.on('line', async (line) => {
        const split = line.split('\t');
        const fName = split[0];
        const lName = split[1];
        const dob = split[2];
        const checkResult = await checkBallot(fName, lName, dob, COUNTY);
        console.log(line + '\t' + checkResult.text);
        const to = await new Promise(resolve => setTimeout(resolve, 5000));
    });
};
mainFunction();
Here is some code that implements my suggestion in the comment. I have used a setTimeout to represent the async code that you have, but in principle the approach should be easily adaptable:
// Source file testReadFileSync.js
const fs = require("fs");
const data = fs.readFileSync("./inputfile.txt", { encoding: "utf-8" });
const lines = data.split("\n");
console.log(`There are ${lines.length} lines to process`);
var currentLine = 0;

function main() {
    // Check if we have processed all the lines
    if (currentLine == lines.length) return;
    // get the current line number, then increment it
    let lineNum = currentLine++;
    // Process the line
    asyncProcess(lineNum, lines[lineNum]);
}

function asyncProcess(lineNum, line) {
    console.log(`Start processing line[${lineNum}]`);
    let delayMS = getRandomInt(100) * 10;
    setTimeout(function() {
        // ------------------------------------------------
        // this function represents all the async processing
        // for one line.
        // After async has finished, we call main again
        // ------------------------------------------------
        console.log(`Finished processing line[${lineNum}]`);
        main();
    }, delayMS);
}

function getRandomInt(max) {
    return Math.floor(Math.random() * Math.floor(max));
}

// Start four parallel async processes, each of which will process one line at a time
main();
main();
main();
main();
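The same idea translates directly to async/await. In the sketch below, `processLine` and `runPool` are made-up names, and `processLine` is a stand-in for the per-line Puppeteer work; at most `poolSize` lines are in flight at once because each worker loop pulls the next unprocessed index only after its previous line finishes.

```javascript
// Hypothetical stand-in for the asynchronous per-line work.
async function processLine(lineNum, line) {
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 50));
    return `${lineNum}:${line}`;
}

// Run at most `poolSize` lines concurrently.
async function runPool(lines, poolSize) {
    let next = 0;
    const results = new Array(lines.length);
    async function workerLoop() {
        while (next < lines.length) {
            const i = next++; // safe: no await between the check and the increment
            results[i] = await processLine(i, lines[i]);
        }
    }
    await Promise.all(Array.from({ length: poolSize }, workerLoop));
    return results;
}

runPool(['a', 'b', 'c', 'd', 'e'], 4).then((out) => console.log(out));
// [ '0:a', '1:b', '2:c', '3:d', '4:e' ]
```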
I've been trying this for ages now and I'm not making any progress.
I found this gist on Google: https://gist.github.com/Elements-/cf063254730cd754599e
It runs fine on its own, but when I put it in a function and try to use it with my code, it doesn't work.
Code:
fs.readdir(`${__dirname}/data`, (err, files) => {
    if (err) return console.error(`[ERROR] ${err}`);
    files.forEach(file => {
        if (file.endsWith(".mp4")) {
            // getVideoDuration(`${__dirname}/data/${file}`)
            group = new Group(file.split(".")[0], file, null, getVideoDuration(`${__dirname}/data/${file}`), 0);
            groups.push(group);
        }
    });
    console.log(groups);
});

function getVideoDuration(video) {
    var buff = new Buffer.alloc(100);
    fs.open(video, 'r', function (err, fd) {
        fs.read(fd, buff, 0, 100, 0, function (err, bytesRead, buffer) {
            var start = buffer.indexOf(new Buffer.from('mvhd')) + 17;
            var timeScale = buffer.readUInt32BE(start, 4);
            var duration = buffer.readUInt32BE(start + 4, 4);
            var movieLength = Math.floor(duration / timeScale);
            console.log('time scale: ' + timeScale);
            console.log('duration: ' + duration);
            console.log('movie length: ' + movieLength + ' seconds');
            return movieLength;
        });
    });
}
Output:
[
    Group {
        _name: 'vid',
        _video: 'vid.mp4',
        _master: null,
        _maxTime: undefined,
        _currentTime: 0
    },
    Group {
        _name: 'vid2',
        _video: 'vid2.mp4',
        _master: null,
        _maxTime: undefined,
        _currentTime: 0
    }
]
time scale: 153600
duration: 4636416
movie length: 30 seconds
time scale: 153600
duration: 4636416
movie length: 30 seconds
It's logging the information correctly, but getVideoDuration is returning undefined.
This seems like a lot of extra work for little benefit, so I'm going to point you to get-video-duration (https://www.npmjs.com/package/get-video-duration), which does a great job of getting the duration of any video file in seconds, minutes, and hours.
Copying the last comment of the gist you sent, I came up with this:
const fs = require("fs").promises;

class Group {
    constructor(name, video, master, maxTime, currentTime) {
        this._name = name;
        this._video = video;
        this._master = master;
        this._maxTime = maxTime;
        this._currentTime = currentTime;
    }

    setMaster(master) {
        if (this._master != null) {
            this._master.emit('master');
        }
        this._master = master;
        this._master.emit('master');
    }
};

const asyncForEach = async (array, callback) => {
    for (let index = 0; index < array.length; index++) {
        await callback(array[index], index, array);
    }
};

async function loadGroups() {
    const files = await fs.readdir(`${__dirname}/data`);
    const groups = [];
    await asyncForEach(files, async file => {
        if (file.endsWith(".mp4")) {
            const duration = await getVideoDuration(`${__dirname}/data/${file}`);
            const group = new Group(file.split(".")[0], file, null, duration, 0);
            groups.push(group);
        }
    });
    console.log(groups);
}

async function getVideoDuration(video) {
    const buff = Buffer.alloc(100);
    const header = Buffer.from("mvhd");
    const file = await fs.open(video, "r");
    const { buffer } = await file.read(buff, 0, 100, 0);
    await file.close();
    const start = buffer.indexOf(header) + 17;
    const timeScale = buffer.readUInt32BE(start);
    const duration = buffer.readUInt32BE(start + 4);
    const movieLength = Math.floor((duration / timeScale) * 1000) / 1000;
    return movieLength;
}

loadGroups();
As to why your original code wasn't working: my guess is that returning inside the callback to fs.open or fs.read doesn't return from getVideoDuration. I couldn't easily figure out from the fs docs how to get the callback's value out, so I just switched over to promises and async/await, which essentially lets the code run sequentially. This way you can save the output of fs.open and fs.read and use them to return a value in the scope of getVideoDuration.
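A minimal illustration of that guess (the `getValueBroken`/`getValueFixed` names are made up): a return inside an async callback goes to the callback's caller, not to the outer function, which has long since returned undefined.

```javascript
// The outer function returns undefined before the timer ever fires.
function getValueBroken() {
    setTimeout(() => {
        return 42; // this value goes nowhere
    }, 10);
    // implicit `return undefined` happens here, immediately
}

// Wrapping the callback in a Promise lets the caller await the value.
function getValueFixed() {
    return new Promise((resolve) => {
        setTimeout(() => resolve(42), 10);
    });
}

console.log(getValueBroken()); // undefined
getValueFixed().then((v) => console.log(v)); // 42
```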
I've figured out a work-around for this problem.
const { exec } = require('child_process');

async function test() {
    const duration = await getDuration(`${__dirname}/data/vid.mp4`);
    console.log(duration);
}
test();

function getDuration(file) {
    return new Promise((resolve, reject) => {
        exec(`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 ${file}`, (err, stdout, stderr) => {
            // reject instead of just logging, so callers can catch the error
            if (err) return reject(err);
            resolve(stdout ? stdout : stderr);
        });
    });
}
I only tested it on Linux, so I don't know if it'll work on Windows.
I have audio data streaming from the server to the client. It starts as a Node.js Buffer (which is a Uint8Array) and is then sent to the AudioWorkletProcessor via port.postMessage(), where it is converted into a Float32Array and stored in this.data. I have spent hours trying to set the output to the audio data contained in the Float32Array. Logging the Float32Array pre-processing shows accurate data, but logging it during processing shows that it is not changing when the new message is posted. This is probably a gap in my low-level audio-programming knowledge.
When data arrives in the client, the following function is called:
process = (data) => {
    this.node.port.postMessage(data);
}
As an aside (and you can let me know), maybe I should be using parameter descriptors instead of postMessage? Anyway, here's my AudioWorkletProcessor:
class BypassProcessor extends AudioWorkletProcessor {

    constructor() {
        super();
        this.isPlaying = true;
        this.port.onmessage = this.onmessage.bind(this);
    }

    static get parameterDescriptors() {
        return [{ // Maybe we should use parameters. This is not utilized at present.
            name: 'stream',
            defaultValue: 0.707
        }];
    }

    convertBlock = (incomingData) => { // incoming data is a UInt8Array
        let i, l = incomingData.length;
        let outputData = new Float32Array(incomingData.length);
        for (i = 0; i < l; i++) {
            outputData[i] = (incomingData[i] - 128) / 128.0;
        }
        return outputData;
    }

    onmessage(event) {
        const { data } = event;
        let ui8 = new Uint8Array(data);
        this.data = this.convertBlock(ui8);
    }

    process(inputs, outputs) {
        const input = inputs[0];
        const output = outputs[0];
        if (this.data) {
            for (let channel = 0; channel < output.length; ++channel) {
                const inputChannel = input[channel];
                const outputChannel = output[channel];
                for (let i = 0; i < inputChannel.length; ++i) {
                    outputChannel[i] = this.data[i];
                }
            }
        }
        return true;
    }
}

registerProcessor('bypass-processor', BypassProcessor);
How can I simply set the output of the AudioWorkletProcessor to the data coming through?
The AudioWorkletProcessor processes audio in fixed render quanta of 128 frames, so you need to manage your own buffering to bridge your incoming chunk size to what the AudioWorklet expects, probably by adding a FIFO.
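The re-chunking idea can be sketched in plain JS, independent of any WebAssembly (the `Fifo` class below is a made-up minimal example, not a real library): push arbitrarily sized Float32Arrays in, pull fixed 128-frame blocks out.

```javascript
// Minimal FIFO that re-chunks arbitrarily sized sample buffers into
// fixed-size blocks (e.g. the 128-frame render quantum of an AudioWorklet).
class Fifo {
    constructor() {
        this.chunks = [];
        this.length = 0; // total buffered samples
    }
    push(samples) {
        this.chunks.push(samples);
        this.length += samples.length;
    }
    // Fills `out` if enough samples are buffered; otherwise returns false
    // and leaves `out` untouched (the worklet would output silence).
    pull(out) {
        if (this.length < out.length) return false;
        let offset = 0;
        while (offset < out.length) {
            const head = this.chunks[0];
            const needed = out.length - offset;
            if (head.length <= needed) {
                out.set(head, offset);
                offset += head.length;
                this.chunks.shift();
            } else {
                out.set(head.subarray(0, needed), offset);
                this.chunks[0] = head.subarray(needed);
                offset += needed;
            }
            this.length -= Math.min(head.length, needed);
        }
        return true;
    }
}

const fifo = new Fifo();
fifo.push(new Float32Array(160)); // e.g. one 160-sample network packet
const block = new Float32Array(128);
console.log(fifo.pull(block)); // true: one full 128-frame block available
console.log(fifo.length); // 32 samples carried over to the next pull
```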
I resolved something like this using a RingBuffer (FIFO) implemented in WebAssembly; in my case I was receiving buffers of 160 bytes.
Here is my AudioWorkletProcessor implementation:
import Module from './buffer-kernel.wasmodule.js';
import { HeapAudioBuffer, RingBuffer, ALAW_TO_LINEAR } from './audio-helper.js';

class SpeakerWorkletProcessor extends AudioWorkletProcessor {
    constructor(options) {
        super();
        this.payload = null;
        this.bufferSize = options.processorOptions.bufferSize; // Getting buffer size from options
        this.channelCount = options.processorOptions.channelCount;
        this.inputRingBuffer = new RingBuffer(this.bufferSize, this.channelCount);
        this.outputRingBuffer = new RingBuffer(this.bufferSize, this.channelCount);
        this.heapInputBuffer = new HeapAudioBuffer(Module, this.bufferSize, this.channelCount);
        this.heapOutputBuffer = new HeapAudioBuffer(Module, this.bufferSize, this.channelCount);
        this.kernel = new Module.VariableBufferKernel(this.bufferSize);
        this.port.onmessage = this.onmessage.bind(this);
    }

    alawToLinear(incomingData) {
        const outputData = new Float32Array(incomingData.length);
        for (let i = 0; i < incomingData.length; i++) {
            outputData[i] = (ALAW_TO_LINEAR[incomingData[i]] * 1.0) / 32768;
        }
        return outputData;
    }

    onmessage(event) {
        const { data } = event;
        if (data) {
            this.payload = this.alawToLinear(new Uint8Array(data)); // Receiving data from my socket listener; in my case, converting PCM A-law to linear
        } else {
            this.payload = null;
        }
    }

    process(inputs, outputs) {
        const output = outputs[0];
        if (this.payload) {
            this.inputRingBuffer.push([this.payload]); // Pushing data from my socket
            if (this.inputRingBuffer.framesAvailable >= this.bufferSize) { // once the buffered input reaches the buffer size, it can be output
                this.inputRingBuffer.pull(this.heapInputBuffer.getChannelData());
                this.kernel.process(
                    this.heapInputBuffer.getHeapAddress(),
                    this.heapOutputBuffer.getHeapAddress(),
                    this.channelCount,
                );
                this.outputRingBuffer.push(this.heapOutputBuffer.getChannelData());
            }
            this.outputRingBuffer.pull(output); // Retrieving data from the FIFO and filling the output
        }
        return true;
    }
}
registerProcessor(`speaker-worklet-processor`, SpeakerWorkletProcessor);
Here is how the AudioContext and AudioWorkletNode instances are set up:
this.audioContext = new AudioContext({
    latencyHint: 'interactive',
    sampleRate: this.sampleRate,
    sinkId: audioinput || "default"
});
this.audioBuffer = this.audioContext.createBuffer(1, this.audioSize, this.sampleRate);
this.audioSource = this.audioContext.createBufferSource();
this.audioSource.buffer = this.audioBuffer;
this.audioSource.loop = true;
this.audioContext.audioWorklet
    .addModule('workers/speaker-worklet-processor.js')
    .then(() => {
        this.speakerWorklet = new AudioWorkletNode(
            this.audioContext,
            'speaker-worklet-processor',
            {
                channelCount: 1,
                processorOptions: {
                    bufferSize: 160, // Here I'm passing the size of my output; I'm just telling the RingBuffer what size I need
                    channelCount: 1,
                },
            },
        );
        this.audioSource.connect(this.speakerWorklet).connect(this.audioContext.destination);
    }).catch((err) => {
        console.log("Receiver ", err);
    });
Here is how I receive the data from the socket and send it to the AudioWorklet:
protected onMessage(e: any): void { // My socket message listener
    const { data: serverData } = e;
    const socketId = e.socketId;
    if (this.audioWalking && this.ws && !this.ws.isPaused() && this.ws.info.socketId === socketId) {
        const buffer = arrayBufferToBuffer(serverData);
        const rtp = RTPParser.parseRtpPacket(buffer);
        const sharedPayload = new Uint8Array(new SharedArrayBuffer(rtp.payload.length)); // sharing buffer memory between the main thread and the worklet thread
        sharedPayload.set(rtp.payload, 0);
        this.speakerWorklet.port.postMessage(sharedPayload); // Sending data to the worklet
    }
}
To help people, I put the important piece of this solution on GitHub:
audio-worklet-processor-wasm-example
I followed this example; it has a full explanation of how the RingBuffer works:
wasm-ring-buffer