I want a small short beep noise to get the user's attention occasionally in my Vaadin Framework 8 app.
Example usage: Beep when password entered into a user-authentication page fails to authenticate successfully. User clicks "Sign-in" button, beep happens immediately after the password check flunks.
Another example: When an automatically updated chart trends upward significantly, play 3 beeps in ascending frequency (notes). If chart trends downward, play descending frequency (notes).
I would like to avoid downloading a sound file to the web client, unless that has clear advantages. Seems simpler, lightweight, and hopefully higher performance to use JavaScript or HTML5 to generate a beep on the client itself locally.
I found what looks like a modern JavaScript solution in this Answer by Houshalter. And this sibling Answer by CaptainWiz contains a live demo that seems to work well.
Is there a way to trigger the client-side execution of this JavaScript code from my server-side Vaadin app’s Java code?
Will it be performant? I mean the beep needs to happen very quickly in the context of the user's current action, without an annoying/confusing time delay.
Alternatively, this Answer talks about HTML5 having a new Audio object for playing a sound file. Might that have advantages over invoking a chunk of JavaScript code for sound synthesis?
And another alternative: the W3C Web Audio API, as shown in this Answer.
One way to achieve this is via an AbstractJavaScriptComponent in Vaadin. This
gives a rather direct way to write JavaScript components or make JS libraries
accessible without spending too much time getting to grips with GWT.
The callFunction method of an AbstractJavaScriptComponent calls directly into
the JS code in the browser.
Create a Beeper class:
package app.ui

import com.vaadin.annotations.JavaScript
import com.vaadin.ui.AbstractJavaScriptComponent

@JavaScript("js/beeper_connector.js")
class Beeper extends AbstractJavaScriptComponent {
    void beep(Integer duration, Integer frequency) {
        callFunction('beep', duration, frequency)
    }
}
Note the annotation: also create that file in the same package (app.ui),
on that path and with that name (js/beeper_connector.js). The file needs at
least to contain a "class" named app_ui_Beeper (the fully qualified name of the
Java class with the dots replaced by underscores). Add your beep function with
parameters of types that can be transported via JSON:
window.app_ui_Beeper = function() {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext || window.audioContext);
    this.beep = function(duration, frequency) {
        var oscillator = audioCtx.createOscillator();
        var gainNode = audioCtx.createGain();
        oscillator.connect(gainNode);
        gainNode.connect(audioCtx.destination);
        if (frequency) { oscillator.frequency.value = frequency; }
        oscillator.start();
        setTimeout(function() { oscillator.stop(); }, (duration ? duration : 500));
    };
};
This code is lifted from the answer referenced by the OP: How do I make Javascript beep?
Now make sure that you add a Beeper instance somewhere in your main UI
component tree, so it can be accessed from everywhere.
A working example can be found here: https://github.com/christoph-frick/vaadin-webaudio-beep
I have read and implemented in the past something similar to what Chris Wilson described in his article "A Tale of Two Clocks": https://web.dev/audio-scheduling/
I recently found WAAClock which in theory implements the same principle:
https://github.com/sebpiq/WAAClock
I'm working on a MIDI web app and I want to send MIDI Clock messages, which requires precise scheduling. I wrote this post in the WebMidiJS forum (an amazing library I'm using in my project):
https://github.com/djipco/webmidi/discussions/333
Essentially this is my code:
const PULSES_PER_QUARTER_NOTE = 24;
const BPM_ONE_SECOND = 60;

let context = new AudioContext();
let clock = new WAAClock(context);

const calculateClockDelay = bpm => BPM_ONE_SECOND / bpm / PULSES_PER_QUARTER_NOTE;

const startMidiClock = bpm => {
    clock.start();
    clock.callbackAtTime(function () {
        WebMidi.outputs.forEach(outputPort => outputPort.sendClock({}));
    }, 0).repeat(calculateClockDelay(bpm));
};

const stopMidiClock = () => clock.stop();
As described in that post, I CANNOT get the event to happen with high precision: I see the BPM meter slightly DRIFT. I tried sending MIDI Clock from a DAW and the timing is perfect.
I'm using ZERO as tolerance in the clock.callbackAtTime function.
Why do I see this drift / slight scheduling error?
Is there any other way to schedule a precise repeating MIDI event with WAAClock?
Is WAAClock capable of precise scheduling as with Chris Wilson's technique?
Thanks a lot!
Danny Bullo
Eh. I'm not deeply familiar with that library - but at a quick glance, I am deeply suspicious that it can do what it claims to. From its code, this snippet makes me VERY suspicious:
this._clockNode.onaudioprocess = function () {
    setTimeout(function() { self._tick() }, 0)
}
If I understand it, this is trying to use a ScriptProcessorNode to get a high-stability, low-latency clock. That's not, unfortunately, something that ScriptProcessorNode can do. You COULD do something closer to this with AudioWorklets, except you wouldn't be able to call back out to the main thread to fire the MIDI calls.
I'm not sure you've fully grasped how to apply the TOTC approach to MIDI - but the key is that Web MIDI has a scheduler built in, too: instead of trying to call your JavaScript code and outputPort.sendClock() at PRECISELY the right time, you need to schedule the call ahead of time with (essentially) outputPort.sendClock( time ) - that is, some small amount of time before the clock message needs to be sent, you call MIDIOutput.send() with the timestamp parameter set to a schedule time for precisely when it needs to be sent.
This is gone over in more detail (for audio, though, not MIDI) in https://web.dev/audio-scheduling/#obtaining-rock-solid-timing-by-looking-ahead.
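As a rough sketch of that lookahead pattern applied to a MIDI clock - this uses the raw Web MIDI API's MIDIOutput.send() timestamp rather than the WebMidiJS wrapper, 0xF8 is the MIDI timing-clock status byte, and the lookahead/interval constants are arbitrary example values:

const PULSES_PER_QUARTER_NOTE = 24;
const LOOKAHEAD_MS = 100;          // how far ahead messages get scheduled
const SCHEDULER_INTERVAL_MS = 25;  // how often the (jittery) JS timer wakes up

function startMidiClock(output, bpm) {
    const tickMs = 60000 / bpm / PULSES_PER_QUARTER_NOTE;
    let nextTickTime = performance.now();

    const timerId = setInterval(() => {
        // schedule every clock tick that falls inside the lookahead window;
        // the MIDI subsystem then sends each message at exactly its timestamp
        while (nextTickTime < performance.now() + LOOKAHEAD_MS) {
            output.send([0xF8], nextTickTime);
            nextTickTime += tickMs;
        }
    }, SCHEDULER_INTERVAL_MS);

    return () => clearInterval(timerId);  // call the returned function to stop the clock
}

The setInterval callback only needs to be roughly on time; the precision comes from the timestamps handed to send().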
I've been working on a Tone.js synthesizer project for some time now.
For the record I will include the links:
Repo
Deployment
(it is still under development as I am stuck with this issue)
I have encountered a serious issue that I couldn't manage to solve. I use the PolySynth with an FMSynth, but I've had the same issue with all the other synth types that I tried.
Whenever I try to tweak the parameters of the synth live (e.g. detune, modulation index, amplitude envelope, etc.) I get unexpected behaviour:
Most of the time, when I change the value of a parameter, it works at first: the sound gets modified according to the change I made, but then it stays stuck at that first modified value, even if I keep changing the parameter.
Then sometimes I get the modified sound every second time I play a note on the synth. One time I get the modified sound and then next time the original sound without any modification, then the modified sound again and so on.
Sometimes it works, but I am still stuck on the first modification.
Sometimes it works randomly, after playing some notes first.
Sometimes it works at once, but then some specific notes produce the unmodified original sound, regardless of my modification (and still the synth stops responding to any further parameter changes).
This is happening with every parameter but volume: volume works as intended every time.
Let's use modulation index as an example (the same happens with detune, harmonicity and attack - those are the parameters I've implemented for the time being). Originally I use NexusUI components, but here I will be using a regular HTML slider (to prove that NexusUI is not the problem). It can be found in the deployment website I provided and in the repo. This is my code:
In the main JavaScript file, I create the synth and send it to destination:
const synth = new Tone.PolySynth(Tone.FMSynth).toDestination();
In the HTML file I create a simple slider:
<input type="range" min="0" max="300" value="0" class="slider" id="mod-index">
And every time the slider moves I change the modulation index value accordingly:
let modulationIndexSlider = document.getElementById("mod-index");
modulationIndexSlider.oninput = function() {
    synth.options.modulationIndex = this.value
}
As you can see I am setting the new modulation index value using:
synth.options.modulationIndex = this.value
I follow the exact same approach for the other parameters, e.g.
synth.options.harmonicity = this.value
synth.options.envelope.attack = this.value
and so on.
I am using Tone.js 14.8.37 using CDN, Chrome 98-99 on Ubuntu.
Thanks a lot to anyone who might offer some help :-)
P.S. I have also opened an issue on this matter, which can be found here:
https://github.com/Tonejs/Tone.js/issues/1045
With the Tone.js instruments, use the .set method for changing properties. You can change multiple properties at the same time, like this:
synth.set({
    harmonicity: 10,
    envelope: {
        attack: 0.001,
    },
});
In the case of volume, it is a Param, so it can be changed directly via synth.volume.value.
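For the slider from the question, a minimal sketch of the same idea (modulationIndex is an FMSynth option, and the element id matches the question's markup):

let modulationIndexSlider = document.getElementById("mod-index");
modulationIndexSlider.oninput = function () {
    // .set() propagates the option to every voice of the PolySynth
    synth.set({ modulationIndex: Number(this.value) });
};

// volume is a Param, so assign its value (in decibels) directly
synth.volume.value = -6;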
I'm new to the Web Audio API. Using a demo, I managed to separate the two channels of the PC audio input source and drive a dual analyser demo view.
I've realized that most functions work by automatically handling the whole stream of audio data, but my question is this:
how can I get a single sample from each channel of the stereo input source when I want it (programmatically), put it in a variable, and then get another one at a different time?
var input = audioContext.createMediaStreamSource(stream);
var options = {
    numberOfOutputs: 2
};
var splitter = new ChannelSplitterNode(audioContext, options);

input.connect(splitter);
splitter.connect(analyser1, 0, 0);
splitter.connect(analyser2, 1, 0);
If you're not too concerned about latency, the MediaRecorder interface can be used to capture audio data from a streaming input source (like the MediaStream you pass to AudioContext.createMediaStreamSource). Use the dataavailable event or the MediaRecorder.requestData() method to access the captured data as a Blob (note it arrives encoded, e.g. as audio/webm, rather than as raw PCM). There's a fairly straightforward example of this in samdutton/simpl on GitHub. (Also there's a related Q&A on StackOverflow here.)
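A minimal sketch, assuming the stream variable from your snippet:

const recorder = new MediaRecorder(stream);
recorder.ondataavailable = (e) => {
    // e.data is a Blob containing the audio captured since the last event
    console.log("chunk:", e.data.size, "bytes,", e.data.type);
};
recorder.start(1000);  // request a dataavailable event roughly every second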
More generally, if you can get the audio you want to analyze into an AudioBuffer, the AudioBuffer.getChannelData method can be used to extract the raw PCM sample data for an audio channel.
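For example, to pull individual samples out of a decoded file (a sketch; the URL is a placeholder):

async function readSamples(url) {
    const response = await fetch(url);
    const audioBuffer = await audioContext.decodeAudioData(await response.arrayBuffer());
    const left = audioBuffer.getChannelData(0);   // Float32Array of PCM samples
    const right = audioBuffer.numberOfChannels > 1 ? audioBuffer.getChannelData(1) : left;
    const frame = 44100;                          // whichever frame index you want
    return { left: left[frame], right: right[frame] };
}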
However if you're doing this in "real-time" - i.e., if you're trying to process the live input audio for "live" playback or visualization - then you'll probably want to look at the AudioWorklet API. Specifically, you'll want to create an AudioWorkletProcessor that examines the individual samples as part of the process handler.
E.g., something like this:
// For this example we'll compute the average level (PCM value) for the frames
// in the audio sample we're processing.
class ExampleAudioWorkletProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    // the running total
    let sum = 0
    // grab samples from the first channel of the first input connected to this node
    const pcmData = inputs[0][0]
    // process each individual sample (frame)
    for (let i = 0; i < pcmData.length; i++) {
      sum += pcmData[i]
    }
    // write something to the log just to show it's working at all
    console.log("AVG:", (sum / pcmData.length).toFixed(5))
    // be sure to return `true` to keep the worklet running
    return true
  }
}

// register the processor under a name the main thread can refer to (the name is arbitrary)
registerProcessor("example-audio-worklet-processor", ExampleAudioWorkletProcessor)
but that's neither bullet-proof nor particularly efficient code. (Notably you don't really want to use console.log here. You'll probably either (a) write something to the outputs array to send audio data to the next AudioNode in the chain or (b) use postMessage to send data back to the main (non-audio) thread.)
Note that since the AudioWorkletProcessor is executing within the "audio thread" rather than the main "UI thread" like the rest of your code, there are some hoops you must jump through to set up the worklet and communicate with the primary execution context. That's probably outside the scope of what's reasonable to describe here, but here is a complete example from MDN, and there are a large number of tutorials that can walk you through the steps if you search for keywords like "AudioWorkletProcessor", "AudioWorkletNode" or just "AudioWorklet". Here's one example.
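For completeness, the main-thread side might look roughly like this (a sketch; it assumes the processor above is saved as example-processor.js and registered under the name "example-audio-worklet-processor", as in the snippet):

// inside an async function (or chain the promises with .then())
await audioContext.audioWorklet.addModule("example-processor.js");
const workletNode = new AudioWorkletNode(audioContext, "example-audio-worklet-processor");

// 'input' is the MediaStreamAudioSourceNode from the question
input.connect(workletNode);

// receive whatever the processor sends back via port.postMessage(...)
workletNode.port.onmessage = (e) => console.log("from worklet:", e.data);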
I'm using MediaCapture in JavaScript to capture my camera.
I have a Camera class with an initCamera function. The problem is, if I try to re-init my camera in a short time period I will get this error: Hardware MFT failed to start streaming due to lack of hardware resources.
Now I get that this means my camera is still in use. The thing I want to know is:
How do I properly close my camera?
How do I check if my camera is in use or unavailable?
Here is a piece of code:
function Camera() {
    var that = this;

    this.mediaCaptureElement = null;

    this.initCamera = function() {
        if (!that.mediaCaptureElement) {
            that.mediaCaptureElement = new Windows.Media.Capture.MediaCapture();

            that.mediaCaptureElement.addEventListener("failed", function (e) {
                console.warn("The camera has stopped working");
            });

            that.mediaCaptureElement.initializeAsync().then(function() {
                that.mediaCaptureElement.videoDeviceController.primaryUse = Windows.Media.Devices.CaptureUse.photo;
                that.getCameraResolution();
                that.orientationChanged();
                that.startCamera();
            });
        }
    };
}
The way I re-open my camera currently is by overwriting the camera instance with a new instance of the Camera class.
Thanks in advance.
I had the same problem using MediaCapture in C#.
I had to call Dispose() after StopPreviewAsync in order to correct it:
await cameraControler.MediaCaptureInstance.StopPreviewAsync();
cameraControler.MediaCaptureInstance.Dispose();
Have you seen the Camera Starter Kit UWP sample? It comes in a JS flavor too!
If you want to be able to reliably access the camera shortly after being done using it, you need to make sure you're cleaning up all resources properly. From the code that you've shared, it seems like you're letting the system take care of this, which means your app might be coming back before the system is done closing out all resources.
You should take care of:
Stop any recordings that may be in progress
Stop the preview
Close the MediaCapture
Have a look at the cleanupCameraAsync() method from the sample I linked above for an example on how to implement this.
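In the JS projection, that cleanup might look roughly like this (a sketch, not the sample's exact code; the isRecording and isPreviewing flags are hypothetical state your Camera class would need to track):

function cleanupCameraAsync(camera) {
    var promises = [];
    if (camera.isRecording) {                  // hypothetical flag your app maintains
        promises.push(camera.mediaCaptureElement.stopRecordAsync());
    }
    if (camera.isPreviewing) {                 // hypothetical flag as well
        promises.push(camera.mediaCaptureElement.stopPreviewAsync());
    }
    return WinJS.Promise.join(promises).then(function () {
        camera.mediaCaptureElement.close();    // releases the camera hardware
        camera.mediaCaptureElement = null;     // allow a clean re-init later
    });
}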
We have a long piece of video, up to 1 hour long.
We want to show users small 30 second chunks of this video.
It's imperative that the video does not stutter at any point.
The user can't then jump around the rest of the video, they only see the 30 second chunk.
An example would be a football match: the whole match is on video, but clicking a button on another page would load the full video and play just a goal.
Is this possible with HTML5 Video?
Would it have anything to do with TimeRanges?
Does the video have to served over a pure streaming protocol?
Can we buffer the full 30 second chunk before playing it?
The goal is to cut down on the workflow required to cut out all the little clips (and the time spent transcoding them to all the different HTML5 video formats); instead, we can just upload one transcoded piece of footage and send the user to a section of it.
Your thoughts and input are most welcome, thanks!
At this point in time HTML5 video is a real PITA -- we have no real API to control the browser's buffering, so playback tends to stutter on slower connections, as browsers try to buffer intelligently but usually do quite the opposite.
Additionally, if you only want your users to view a particular 30-second chunk of a video (I assume that would be your way of forcing users to register to view the full videos), HTML5 is not the right choice -- it would be incredibly simple to abuse your system.
What you really need in this case is a decent Flash Player and a Media Server in the backend -- this is when you have full control.
You could do some of this, but then you'd be subject to the browser's own buffering. (You also can't stop it from buffering beyond X seconds.)
Put simply, you could easily have a custom seek control to restrict the ranges and stop the video when it hits the end of the 30-second chunk.
Also, buffering is not something you can control other than to tell the browser not to do it. The rest is automatic, and support for forcing a full buffer has been removed from the specs.
Anyway, just letting you know this is terrible practice; it could be done, but you'll potentially run into many issues. You could always use a service like Zencoder to help handle transcoding too. Another alternative would be to have ffmpeg or other software on the server handle clipping and transcoding.
You can set the time using javascript (the video's currentTime property).
In case you want a custom seekbar you can do something like this:
<input type="range" step="any" id="seekbar">
var seekbar = document.getElementById('seekbar');

function setupSeekbar() {
    seekbar.max = video.duration;
}
video.ondurationchange = setupSeekbar;

function seekVideo() {
    video.currentTime = seekbar.value;
}

function updateUI() {
    seekbar.value = video.currentTime;
}

seekbar.onchange = seekVideo;
video.ontimeupdate = updateUI;
function setupSeekbar() {
    seekbar.min = video.startTime;
    seekbar.max = video.startTime + video.duration;
}
If the video is streaming you will need to "calculate" the "end" time.
var lastBuffered = video.buffered.end(video.buffered.length - 1);

function updateUI() {
    var lastBuffered = video.buffered.end(video.buffered.length - 1);
    seekbar.min = video.startTime;
    seekbar.max = lastBuffered;
    seekbar.value = video.currentTime;
}