I'm trying to connect the Muse headband in Chrome so I can stream live brainwave data.
I can connect and stream data fine in Python using ports, but I'm having trouble streaming the data via Web Bluetooth (BLE).
With the code below, I can connect to the device and call startNotifications(), but the event listener for "characteristicvaluechanged" is never triggered. I've also tried connecting with other Bluetooth devices, and no values are read, so I don't think this is a Muse problem.
<h1>Test HTML</h1>
<button id="read" onclick="onButtonClick(event)">Connect with the BLE device</button>
<script>
  let myCharacteristic;

  function onButtonClick(event) {
    connectBLE();
  }

  async function connectBLE() {
    let serviceUuid = '5052494d-2dab-0341-6972-6f6861424c45';
    let characteristicUuid = '43484152-2dab-3141-6972-6f6861424c45';
    try {
      console.log('Requesting Bluetooth Device...');
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ services: [serviceUuid] }]
      });
      console.log('Connecting to GATT Server...');
      const server = await device.gatt.connect();
      console.log('Getting Service...');
      const service = await server.getPrimaryService(serviceUuid);
      console.log('Getting Characteristic...');
      myCharacteristic = await service.getCharacteristic(characteristicUuid);
      await myCharacteristic.startNotifications();
      console.log(myCharacteristic);
      console.log('> Notifications started');
      myCharacteristic.addEventListener('characteristicvaluechanged',
        handleNotifications);
    } catch (error) {
      console.log('Argh! ' + error);
    }
  }

  function handleNotifications(event) {
    console.log('YEAH!');
    let value = event.target.value;
    let a = [];
    // Convert raw data bytes to hex values just for the sake of showing something.
    // In the "real" world, you'd use data.getUint8, data.getUint16 or even
    // TextDecoder to process raw data bytes.
    for (let i = 0; i < value.byteLength; i++) {
      a.push('0x' + ('00' + value.getUint8(i).toString(16)).slice(-2));
    }
    console.log('> ' + a.join(' '));
  }
</script>
Anyone ever had a similar issue? I can't find any solutions online so this is kinda my last resort.
PS: I'm running Ubuntu 18.04 and Chromium 83.0.4103.61
To start the stream you have to write a three-byte command to the characteristic:
first is the length (2),
second is the command - to start streaming it is the letter "d" in ASCII (0x64),
third is "LF" (0x0a) - I don't know whether JS wants CRLF or something else, but 0x0a works.
Here's an example (which actually starts sending the raw data you've subscribed to) in Swift:
let array: [UInt8] = [2, 0x64, 0x0a]
let data: Data = Data(array)
peripheral.writeValue(data, for: characteristic, type: .withoutResponse)
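In Web Bluetooth the same write might look like the sketch below; it assumes myCharacteristic from the question is the characteristic the Muse accepts commands on (Chromium 83 has writeValue(); newer builds also offer writeValueWithoutResponse()):
const startCommand = new Uint8Array([2, 0x64, 0x0a]); // [length, 'd', LF]
// Write the start command after startNotifications(); the
// "characteristicvaluechanged" events should then begin to fire.
await myCharacteristic.writeValue(startCommand);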
I am attempting to use a Zebra DS9208 scanner to capture barcode data into a web page. For some reason, when I first select the serial port and connect to the scanner, the very first scan causes the barcode reader to crash (it locks into read mode, refuses to read additional scans, then disconnects from the computer and reboots itself). When I establish the connection a second time, the scanner continuously reads scans without issues. Anyone able to spot anything in my code that could cause this?
Connect Script:
async function connect() {
  // Request a port and open a connection.
  port = await navigator.serial.requestPort();
  // Wait for the port to open.
  await port.open({ baudRate: 9600 });
  // Send a bunch of Bell characters so we can hear that the scanner understands us.
  await quadBeep();
  // Read the stream.
  let textDecoder = new TextDecoderStream();
  inputDone = port.readable.pipeTo(textDecoder.writable);
  reader = textDecoder.readable.getReader();
  readScans();
}
Beep Script (used to make the scanner beep by sending a BELL character)
async function quadBeep() {
  console.log('Quad Beep Requested');
  // Write four BELL (0x07) characters to the output stream.
  const writer = port.writable.getWriter();
  const data = new Uint8Array([0x07, 0x07, 0x07, 0x07]);
  await writer.write(data);
  // Allow the serial port to be closed later.
  writer.releaseLock();
}
Read Loop to source data from the scanner:
async function readScans() {
  // Listen to data coming from the serial device.
  while (true) {
    try {
      const { value, done } = await reader.read();
      await saveScan(value); // process the scan
      console.log('readScans Barcode: ' + value);
      document.getElementById('scan').innerHTML += value;
      if (done) {
        // Allow the serial port to be closed later.
        console.log('readScans done value: ' + done);
        reader.releaseLock();
        break;
      }
    } catch (error) {
      console.log('readScans Error: ' + error);
      break;
    }
  }
}
Save Function, to write the data (eventually will submit the barcode via AJAX)
function saveScan(barcode) {
  var session = document.getElementById('sessionID').getAttribute('data-value');
  if (barcode == previousBarcode) {
    // Duplicate scan.
    console.log('saveScan Duplicate');
    return;
  } else {
    // Submit the scan.
    previousBarcode = barcode; // store the barcode so it doesn't get rescanned
    console.log('saveScan Previous set to: ' + barcode);
    // future AJAX function goes here
    return;
  }
}
I have created a real-time voice chat application for a game I am making. I got it to work completely fine using the audioContext.createScriptProcessor() method.
Here's the code; I left out the parts that aren't relevant:
// establish websocket connection
const audioData = [];

// websocket connection.onMessage (data) =>
audioData.push(decodeBase64(data)); // push audio data coming from another player into the array

// on getUserMedia (stream) =>
const audioCtx = new AudioContext({ latencyHint: "interactive", sampleRate: 22050 });
const inputNode = audioCtx.createMediaStreamSource(stream);
var processor = audioCtx.createScriptProcessor(2048, 1, 1);
var outputNode = audioCtx.destination;
inputNode.connect(processor); // feed the microphone into the processor
processor.connect(outputNode);

processor.onaudioprocess = function (e) {
  var input = e.inputBuffer.getChannelData(0);
  webSocketSend(input); // send microphone input to other sockets via a function set up in a different file; all it does is base64-encode then send
  // if there is data from the server, play it, else, play nothing
  var output;
  if (audioData.length > 0) {
    output = audioData[0];
    audioData.splice(0, 1);
  } else output = new Array(2048).fill(0);
  // (copying `output` into e.outputBuffer is one of the parts left out)
};
The only issue is that the createScriptProcessor() method is deprecated. As recommended, I attempted to do this using an AudioWorkletNode. However, I quickly ran into a problem: I can't access the user's microphone input, or set the output, from the main file where the WebSocket connection is.
Here is my code for main.js:
document.getElementById('btn').onclick = () => { createVoiceChatSession(); };

// establish websocket connection
const audioData = [];

// webSocket connection.onMessage (data) =>
audioData.push(data); // how do I get this data to the worklet node???

var voiceChatContext;

function createVoiceChatSession() {
  voiceChatContext = new AudioContext();
  navigator.mediaDevices.getUserMedia({ audio: true }).then(async stream => {
    await voiceChatContext.audioWorklet.addModule('module.js');
    const microphone = voiceChatContext.createMediaStreamSource(stream);
    const processor = new AudioWorkletNode(voiceChatContext, 'processor');
    microphone.connect(processor).connect(voiceChatContext.destination);
  }).catch(err => console.log(err));
}
Here is my code for module.js:
class processor extends AudioWorkletProcessor {
  constructor() {
    super();
  }

  // copies the input to the output
  process(inputList, outputList) { // how do I get the input list data (the data from my microphone) to the main file so I can send it via websocket???
    for (var i = 0; i < inputList[0][0].length; i++) {
      outputList[0][0][i] = inputList[0][0][i];
      outputList[0][1][i] = inputList[0][1][i];
    }
    return true;
  }
}

registerProcessor("processor", processor);
So I can record and process the input, but I can't send the input via WebSocket, or pass data coming from the server to the worklet node, because I can't access the input list or output list from the main file where the WebSocket connection is. Does anyone know a way around this? Or is there a better solution that doesn't use Audio Worklet nodes?
Thank you to all who can help!
I figured it out: all I needed to do was use the port.onmessage method to exchange data between the worklet and the main file.
processor.port.onmessage = (e) => { /* do something with e.data */ };
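A rough sketch of both sides, assuming 128-sample blocks (one render quantum) and with webSocketSend() and decodedSamples standing in for the networking code from the question:
// module.js - forward mic input to the main thread, play queued server audio
class Processor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = []; // audio blocks received from the main thread
    this.port.onmessage = (e) => this.queue.push(e.data);
  }

  process(inputList, outputList) {
    // post a copy of this block of microphone samples to the main thread
    this.port.postMessage(inputList[0][0].slice());
    // play the next queued block from the server, or silence
    if (this.queue.length > 0) outputList[0][0].set(this.queue.shift());
    return true;
  }
}
registerProcessor("processor", Processor);

// main.js - bridge the worklet port and the WebSocket
processor.port.onmessage = (e) => webSocketSend(e.data); // mic samples out
// and when decoded audio arrives from the server:
// processor.port.postMessage(decodedSamples);           // server audio in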
I am trying to get my laptop's speaker level shown in my application. I am new to WebRTC and the Web Audio API, so I just wanted to confirm whether this feature is possible. The application is an Electron application and has a calling feature, so when the user at the other end of the call speaks, the application should display an output level that varies with the sound. I have tried using WebRTC and the Web Audio API, and have even seen a sample. I am able to log values, but they change when I speak into the microphone, while I need only the values of the speaker, not the microphone.
export class OutputLevelsComponent implements OnInit {

  constructor() { }

  ngOnInit(): void {
    this.getAudioLevel();
  }

  getAudioLevel() {
    try {
      navigator.mediaDevices.enumerateDevices().then(devices => {
        console.log("device:", devices);
        let constraints = {
          audio: {
            deviceId: devices[3].deviceId
          }
        };
        navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
          console.log("stream test: ", stream);
          this.handleSuccess(stream);
        });
      });
    } catch (e) {
      console.log("error getting media devices: ", e);
    }
  }

  handleSuccess(stream: any) {
    console.log("stream: ", stream);
    var context = new AudioContext();
    var analyser = context.createScriptProcessor(1024, 1, 1);
    var source = context.createMediaStreamSource(stream);
    source.connect(analyser);
    // source.connect(context.destination);
    analyser.connect(context.destination);
    opacify();

    function opacify() {
      analyser.onaudioprocess = function (e) {
        // no need to get the output buffer anymore
        var int = e.inputBuffer.getChannelData(0);
        var max = 0;
        for (var i = 0; i < int.length; i++) {
          max = int[i] > max ? int[i] : max;
        }
        if (max > 0.01) {
          console.log("max: ", max);
        }
      };
    }
  }
}
I have tried the above code, where I use enumerateDevices() and getUserMedia(), which give a set of devices; for demo purposes I am taking the last device which has 'audiooutput' as the value of its kind property and accessing that device's stream.
Please let me know if this is even possible with the Web Audio API. If not, is there any other tool that can help me implement this feature?
Thanks in advance.
You would need to use your handleSuccess() function with the stream that you get from the remote end. That stream usually gets exposed as part of the track event.
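With an RTCPeerConnection, wiring that up might look something like this sketch (pc stands in for your existing peer connection):
// Feed the remote stream, not the microphone, into the level meter.
pc.addEventListener('track', (event) => {
  const [remoteStream] = event.streams; // the stream from the far end
  this.handleSuccess(remoteStream);
});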
The problem is likely linked to the machine you are running on. On macOS, there is no way to capture the system audio output from browser APIs, as that requires a signed kernel extension. Potential workarounds are virtual audio devices such as BlackHole or Soundflower. On Windows, the code should work fine, though.
I am struggling through learning jQuery/JavaScript and have a web application using Chrome's "experimental" Web Serial API. When I enter a command and get a response back, the string is broken into 2 pieces at a random place, usually in the first third:
<p0><iDCC-EX V-0.2.1 / MEGA / STANDARD_MOTOR_SHIELD G-9db6d36>
All the other return messages are shorter and are also wrapped in "<" and ">" brackets.
In the code below, the log window only ever shows the second chunk, even though the ChunkTransformer() routine simultaneously displays it properly in the DevTools console log.
How can I get all my return messages to appear as one string? It is OK if the chunks are split into separate return values by the brackets, as long as they display in the log. I think the <p0> is not displaying because the log window treats it as a special character; it would not even display here until I wrapped it in a code tag. So I think I have at least two issues.
async function connectServer() {
  try {
    port = await navigator.serial.requestPort(); // prompt user to select device connected to a com port
    await port.open({ baudRate: 115200 }); // open the port at the proper supported baud rate
    // create a text encoder output stream and pipe the stream to port.writable
    const encoder = new TextEncoderStream();
    outputDone = encoder.readable.pipeTo(port.writable);
    outputStream = encoder.writable;
    // send a CTRL-C and turn off the echo
    writeToStream('\x03', 'echo(false);');
    let decoder = new TextDecoderStream();
    inputDone = port.readable.pipeTo(decoder.writable);
    inputStream = decoder.readable
      // test why only getting the second chunk in the log
      .pipeThrough(new TransformStream(new ChunkTransformer()));
    // get a reader and start the non-blocking asynchronous read loop to read data from the stream.
    reader = inputStream.getReader();
    readLoop();
    return true;
  } catch (err) {
    console.log("User didn't select a port to connect to");
    return false;
  }
}
async function readLoop() {
  while (true) {
    const { value, done } = await reader.read();
    if (value) {
      displayLog(value);
    }
    if (done) {
      console.log('[readLoop] DONE ' + done.toString());
      displayLog('[readLoop] DONE ' + done.toString());
      reader.releaseLock();
      break;
    }
  }
}

class ChunkTransformer {
  transform(chunk, controller) {
    displayLog(chunk.toString()); // only shows last chunk!
    console.log('dumping the raw chunk', chunk); // shows all chunks
    controller.enqueue(chunk);
  }
}

function displayLog(data) {
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").animate({ scrollTop: $("#log-box").prop("scrollHeight"), duration: 1 }, "fast");
}
First Step:
Modify the displayLog() function in one of the following ways.
With animate:
function displayLog(data) {
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").animate({ scrollTop: $("#log-box").prop("scrollHeight") }, "fast");
}
Without animate:
function displayLog(data) {
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").scrollTop($("#log-box").prop("scrollHeight"));
}
Or, just for your understanding:
function displayLog(data) {
  $("#log-box").append("<br>" + data + "<br>");
  scrollHeight = $("#log-box").prop("scrollHeight");
  $("#log-box").scrollTop(scrollHeight);
}
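For the other two issues, here is a sketch of one possible approach (untested against DCC-EX; it assumes, as your examples suggest, that every message is framed by "<" and ">"). Buffer incoming text in the transformer and only enqueue complete <...> messages, then append the text as text rather than HTML so <p0> is not parsed as a tag:
class MessageTransformer { // hypothetical replacement for ChunkTransformer
  constructor() {
    this.buffer = '';
  }
  transform(chunk, controller) {
    this.buffer += chunk;
    // emit every complete <...> message; keep any partial tail buffered
    let end;
    while ((end = this.buffer.indexOf('>')) !== -1) {
      controller.enqueue(this.buffer.slice(0, end + 1));
      this.buffer = this.buffer.slice(end + 1);
    }
  }
}

function displayLog(data) {
  // $("<div>").text(data) escapes the string, so "<p0>" is shown literally
  $("#log-box").append($("<div>").text(data));
  $("#log-box").scrollTop($("#log-box").prop("scrollHeight"));
}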
I am running a Node.js server with Express. I'd also like the server to accept an Icecast audio stream.
I could use another port, sure, but not all hostings (like Heroku) allow that. Icecast's stream request looks like this:
SOURCE /mountpoint ICE/1.0\n
content-type: audio/mpeg\n
Authorization: Basic USER+PASS base64encoded\n
ice-name: This is my server name\n
ice-url: http://www.oddsock.org\n
ice-genre: Rock\n
ice-bitrate: 128\n
ice-private: 0\n
ice-public: 1\n
ice-description: This is my server description\n
ice-audio-info: ice-samplerate=44100;ice-bitrate=128;ice-channels=2\n
\n
After that, audio stream follows. I wrote a separate server that handles this on another port and it works fine.
const net = require("net");
const fs = require("fs");

const CR_NUMBER = 13; // '\r' - skipped when assembling the header string

var headers = "";
var headersEnd = false;
var mp3;

const audioServer = net.createServer(function (socket) {
  if (mp3) {
    socket.write("HTTP/1.0 403 Client already connected\r\n\r\n");
    socket.end();
    socket.on("error", (e) => {});
    return;
  }
  mp3 = fs.createWriteStream("test.mp3", { encoding: null, flags: "a" });
  socket.on("data", (data) => {
    if (!headersEnd) {
      var tmp = "";
      for (let i = 0, l = data.byteLength; i < l; ++i) {
        const item = data[i];
        if (item == CR_NUMBER)
          continue;
        const character = String.fromCharCode(item);
        tmp += character;
        headers += character;
        if (headers.endsWith("\n\n")) {
          headersEnd = true;
          console.log("ICE CAST HEADERS: \n", headers.replace(/\n/g, "\\n\n").replace(/\r/g, "\\r"));
          break;
        }
      }
    }
    else {
      mp3.write(data);
    }
  });
  socket.on("close", () => {
    console.log("ICE CAST: END");
    if (mp3) {
      mp3.close();
      mp3 = null;
    }
  });
  socket.on("error", (e) => {
    console.log("ICE CAST: ERROR " + e.message);
    socket.end();
  });
});

audioServer.listen(11666);
What I'd like is to somehow bootstrap Node's HTTP server so that I can stream over the same port.
I tried to access the req connection info; that doesn't really work, because the server does not even let the SOURCE /mountpoint ICE/1.0 request line through.
const server = http.createServer(function (req, res) {
  // does not happen, the server closes the connection from Icecast
  if (handleAudioStream(req, res)) {
    return;
  }
  else {
    return expressApp(req, res);
  }
});
So I'd need to go deeper. I tried to inspect the net and http code, but didn't find anything useful.
How can I do this? I really need to use the same port, and since Icecast DOES send HTTP-like headers, it should be possible.
This isn't trivial, but it is possible. You can do some duck punching/monkey patching. See this answer: https://stackoverflow.com/a/24298059/362536
Also, it may be possible to get official support some day, but we're a ways off from that. The first blocker was the non-standard SOURCE method. I sponsored a bounty on that, and Ben Noordhuis was kind enough to implement it last week: https://github.com/nodejs/http-parser/issues/405 It should land in Node.js eventually.
The next issue is the ICE/1.0 protocol version. I've opened an issue for that here: https://github.com/nodejs/http-parser/issues/410 There hasn't been any objection to adding it to the parser yet, but if you want to add a pull request, that might help its chance of approval.
You'll find other compatibility issues as well as you continue down this road, but all the ones I've hit I've been able to overcome with various solutions. The trick is maintaining strict compatibility with Node.js core as it is updated.
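In the meantime, one workaround that sidesteps Node's HTTP parser entirely is to accept raw TCP connections yourself, peek at the first bytes, and route SOURCE requests to your existing raw-socket handler while handing everything else to the HTTP server. A sketch (untested; it assumes the audioServer and expressApp from your question):
const net = require("net");
const http = require("http");

const httpServer = http.createServer(expressApp);

// Single-port multiplexer: peek at the first chunk to decide where it goes.
const mux = net.createServer((socket) => {
  socket.once("data", (chunk) => {
    socket.pause();
    socket.unshift(chunk); // put the peeked bytes back on the stream
    if (chunk.toString("ascii", 0, 7) === "SOURCE ") {
      audioServer.emit("connection", socket); // Icecast source client
    } else {
      httpServer.emit("connection", socket); // plain HTTP: let Node parse it
    }
    socket.resume();
  });
});

mux.listen(process.env.PORT || 8080);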