Tone.js - Transport.stop() does not work with scheduled events - javascript

I am using Tone.js for a project, and I am using Transport.scheduleOnce to schedule events with the Sampler. Here is what I have so far, along with a fiddle of it (you may need to click Run a couple of times to hear the audio come through when the fiddle initially loads).
My code:
const sound = 'https://archive.org/download/testmp3testfile/mpthreetest.mp3';
let samplerBuffer;
const sampler = new Promise((resolve, reject) => {
  samplerBuffer = new Tone.Sampler(
    {
      A1: sound
    },
    {
      onload: () => {
        resolve()
      }
    }
  ).toMaster();
})
sampler.then(() => {
  Tone.Transport.scheduleOnce(() => {
    samplerBuffer.triggerAttack(`A1`, `0:0`)
  });
  Tone.Transport.start();
  setTimeout(() => {
    console.log('Now should be stopping');
    Tone.Transport.stop();
  }, 1000)
})
I am trying to stop the audio from playing after 1 second using the Transport.stop() method; however, it does not seem to work. I think I have followed the docs as I should, so where am I going wrong?

Tone.Transport is triggering your sample.
What you want to do is either use Tone.Player, if you only want to play sounds like a "jukebox", or, if you really need a sampler, look into Envelopes, because the Sampler uses one.
In short: Tone.Transport is like the maestro in a concert. The Transport only sets the time (only the BPM, not the playback speed). Tone.Transport.start() triggers all registered instruments (in your case the Sampler) to start doing whatever you programmed them to do; stopping the Transport does not silence a note that is already sounding. If you want to stop the Sampler from playing in this mode, you can call samplerBuffer.releaseAll().
const sound = 'https://archive.org/download/testmp3testfile/mpthreetest.mp3';
let samplerBuffer;
const sampler = new Promise((resolve, reject) => {
  samplerBuffer = new Tone.Sampler(
    {
      A1: sound
    },
    {
      onload: () => {
        resolve()
      }
    }
  ).toMaster();
})
sampler.then(() => {
  Tone.Transport.scheduleOnce(() => {
    samplerBuffer.triggerAttack(`A1`, `0:0`)
  });
  Tone.Transport.start();
  setTimeout(function() {
    console.log('Now should be stopping');
    samplerBuffer.releaseAll();
    // samplerBuffer.disconnect();
  }, 1000)
})
https://jsfiddle.net/9zns7jym/6/
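An alternative sketch, if you want the note itself to end at a known time rather than silencing the whole sampler: give the note an explicit duration with triggerAttackRelease so it releases on its own. This assumes a Tone.js build like the one in the question where Sampler exposes triggerAttackRelease(note, duration, time); the function name and one-second values here are illustrative only.

```javascript
// Sketch only: assumes Sampler has triggerAttackRelease(note, duration, time).
// Nothing runs until you call this with real Tone and sampler objects in a browser.
function playForOneSecond(Tone, sampler) {
  Tone.Transport.scheduleOnce((time) => {
    // attack at the scheduled transport time, release one second later
    sampler.triggerAttackRelease('A1', 1, time)
  }, 0)
  Tone.Transport.start()
}
```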

Related

Overriding navigator.mediaDevices.getDisplayMedia for Screenshare inside electron

I came across capture screen with electron when rendering a web site when I needed a solution for enabling screen share inside my Electron app.
However, the desktopCapturer is always undefined on my side, and the only way I can access sources is inside the main process.
I would like to know if there is a way to have sources defined when I do something like this:
let all_sources = undefined
ipcRenderer.on('SET_SOURCES', (ev, sources) => {
  all_sources = sources
  console.log("The sources are : ", all_sources)
})
const wait_function = function() {
  return new Promise(resolve => {
    setTimeout(function() {
      resolve(all_sources);
    }, 4000);
  });
};
contextBridge.exposeInMainWorld("myCustomGetDisplayMedia", async () => {
  await ipcRenderer.send('GET_SOURCES')
  await wait_function(); // want to make sure all_sources is defined
  const selectedSource = all_sources[0]; // take the Entire Screen just for testing purposes
  return selectedSource;
});
This is inside the preload JS script.
Thanks.

Telegram bot which repeatedly sends a message at an interval?

clearInterval doesn't work, or it works but I am making a mistake somewhere. When I use /stop, it continues to write 'Sending'. How can I resolve this problem?
bot.hears(/\/send|\/stop/, ctx => {
  let sending = setInterval(() => {
    if (/\/send/.test(ctx.update.message.text)) {
      ctx.reply('Sending:');
    } else if (/\/stop/.test(ctx.update.message.text)) {
      ctx.reply('stopping!');
      clearInterval(sending);
    }
  }, 10000);
});
The main problem is that you create a new interval every time you send /send or /stop, so multiple intervals end up running in parallel, and clearing the newest one leaves the older ones alive.
Something like this should work:
let sendInterval;
bot.hears(/\/send|\/stop/, ctx => {
  if (sendInterval) {
    clearInterval(sendInterval);
  }
  if (/\/send/.test(ctx.update.message.text)) {
    sendInterval = setInterval(() => {
      ctx.reply('Sending');
    }, 10000);
  } else if (/\/stop/.test(ctx.update.message.text)) {
    ctx.reply('stopping!');
  }
});
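The fix above boils down to keeping exactly one interval handle and clearing it before starting a new one. A minimal sketch of that pattern in plain JavaScript, with no bot involved; the createRepeater helper and the injectable timers parameter are mine, added so the logic is self-contained:

```javascript
// Hypothetical helper: one interval handle, cleared before each restart,
// so repeated "start" calls can never stack a second interval.
// `timers` defaults to the real setInterval/clearInterval but is injectable.
function createRepeater(sendFn, intervalMs, timers = { set: setInterval, clear: clearInterval }) {
  let handle = null
  return {
    start() {
      if (handle !== null) timers.clear(handle) // never stack intervals
      handle = timers.set(sendFn, intervalMs)
    },
    stop() {
      if (handle !== null) {
        timers.clear(handle)
        handle = null
      }
    },
    isRunning() {
      return handle !== null
    }
  }
}
```

With a real bot, the /send branch would call repeater.start() and the /stop branch repeater.stop().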

How to play multiple audio sequentially with ionic Media plugin

I am trying to play multiple audio files with ionic media plugin : https://ionicframework.com/docs/native/media. but I am having a hard time making it work as a playlist without using a timeout function.
Here is what I have tried out
playOne(track: AudioFile): Promise<any> {
  return new Promise(async resolve => {
    const AudFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
    await resolve(AudFile.play())
  });
}
Then to play all, I have this:
async playAll(tracks: AudioFile[]): Promise<any> {
  let player = (acc, track: AudioFile) => acc.then(() =>
    this.playOne(track)
  );
  tracks.reduce(player, Promise.resolve());
}
This way they are all playing at the same time.
But if the playOne method is wrapped in a timeout function, the millisecond interval set on the timeout applies between items in the playlist, but one track does not necessarily finish before the next starts, and sometimes there is a long wait before the subsequent file is played.
The timeout implementation looks like this :
playOne(track: AudioFile): Promise<any> {
  return new Promise(async resolve => {
    setTimeout(async () => {
      const AudFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
      await resolve(AudFile.play())
    }, 3000)
  });
}
Digging into ionic wrapper of the plugin, the create method looks like this :
/**
 * Open a media file
 * @param src {string} A URI containing the audio content.
 * @return {MediaObject}
 */
Media.prototype.create = function (src) {
  var instance;
  if (checkAvailability(Media.getPluginRef(), null, Media.getPluginName()) === true) {
    // Creates a new media object
    instance = new (Media.getPlugin())(src);
  }
  return new MediaObject(instance);
};
Media.pluginName = "Media";
Media.repo = "https://github.com/apache/cordova-plugin-media";
Media.plugin = "cordova-plugin-media";
Media.pluginRef = "Media";
Media.platforms = ["Android", "Browser", "iOS", "Windows"];
Media = __decorate([
  Injectable()
], Media);
return Media;
}(IonicNativePlugin));
Any suggestion would be appreciated
You may get it working by looping over your tracks and awaiting playOne on each track.
async playAll(tracks: AudioFile[]): Promise<any> {
  for (const track of tracks) {
    await this.playOne(track);
  }
}
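Note that the loop only sequences playback if each playOne promise resolves when a track finishes, not when it starts. A minimal mock demonstrating that principle, with playback simulated by a timer; nothing here touches the real Media plugin, and the track objects are made up:

```javascript
// Mock only: "playing" a track just waits out its duration, standing in
// for a playOne that resolves when playback actually finishes.
const log = []

function playOne(track) {
  return new Promise(resolve => {
    log.push(`start ${track.name}`)
    setTimeout(() => {            // stands in for "playback finished"
      log.push(`end ${track.name}`)
      resolve()
    }, track.ms)
  })
}

async function playAll(tracks) {
  for (const track of tracks) {
    await playOne(track)          // next track starts only after this one ends
  }
}

const done = playAll([{ name: 'a', ms: 10 }, { name: 'b', ms: 10 }])
```

If playOne instead resolved as soon as play() was called (as in the question's version), every chained step would fire immediately and all tracks would overlap.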
If I'm not mistaken, the play function doesn't block until playback of the audio file is finished. It doesn't return a promise either. A workaround would be to use a setTimeout for the duration of the track:
playOne(track: AudioFile): Promise<any> {
  return new Promise(async (resolve, reject) => {
    const audFile = await this.media.create(this.file.externalDataDirectory + track.trackUrl);
    const duration = audFile.getDuration(); // duration in seconds
    audFile.play();
    setTimeout(
      () => resolve(),
      duration * 1000 // setTimeout expects milliseconds
    );
  });
}
I eventually got it to work with a recursive function. This works as expected.
PlayAllList(i, tracks: AudioFile[]) {
  var self = this;
  this.Audiofile = this.media.create(this.file.externalDataDirectory + tracks[i].trackUrl);
  this.Audiofile.play()
  this.Audiofile.onSuccess.subscribe(() => {
    if ((i + 1) == tracks.length) {
      // do nothing
    } else {
      self.PlayAllList(i + 1, tracks)
    }
  })
}
Then
this.PlayAllList(0,tracks)
If there is any improvement on this, I will appreciate.
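One possible improvement on the recursive version above (a sketch only, not tested against the real plugin): wrap the onSuccess notification in a promise so a flat async loop can replace the recursion. The names playOnePromise and playAllSequential are hypothetical, and media.create / onSuccess.subscribe are assumed to behave like the Cordova Media wrapper shown earlier.

```javascript
// Sketch: resolve when the plugin reports playback finished (onSuccess),
// so an ordinary for..of loop can await each track in turn.
function playOnePromise(media, path) {
  return new Promise(resolve => {
    const file = media.create(path)
    file.onSuccess.subscribe(() => resolve()) // fires when playback ends
    file.play()
  })
}

async function playAllSequential(media, paths) {
  for (const path of paths) {
    await playOnePromise(media, path)
  }
}
```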
I think you will be better off with the Web Audio API. I have used it before, and the possibilities are endless.
Apparently it can be used in Ionic without issues:
https://www.airpair.com/ionic-framework/posts/using-web-audio-api-for-precision-audio-in-ionic
I have used it on http://chordoracle.com to play multiple audio samples at the same time (up to 6 simultaneous samples, one for each string of the guitar). In this case I also alter their pitch to get different notes.
In order to play multiple samples, you just need to create multiple bufferSources:
https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/createBufferSource
Some links to get you started:
https://www.w3.org/TR/webaudio/
https://medium.com/better-programming/all-you-need-to-know-about-the-web-audio-api-3df170559378
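A sketch of the multiple-bufferSource approach (browser-only; the function name and parameters are mine): each AudioBufferSourceNode is single-use, so you create one per decoded buffer per playback and start them at the same context time.

```javascript
// Browser-only sketch: one AudioBufferSourceNode per sample per playback.
// `ctx` is an AudioContext and `buffers` an array of decoded AudioBuffers;
// nothing runs until you call this from a real page.
function playTogether(ctx, buffers, when = 0) {
  for (const buffer of buffers) {
    const source = ctx.createBufferSource() // source nodes are single-use
    source.buffer = buffer
    source.connect(ctx.destination)
    source.start(ctx.currentTime + when)    // same start time => simultaneous
  }
}
```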

Play audio with React

Whatever I do, I get an error message while trying to play a sound:
Uncaught (in promise) DOMException.
After searching on Google, I found that it should appear if I autoplay the audio before any user action on the page, but that's not the case for me. I even did this:
componentDidMount() {
  let audio = new Audio('sounds/beep.wav');
  audio.load();
  audio.muted = true;
  document.addEventListener('click', () => {
    audio.muted = false;
    audio.play();
  });
}
But the message still appears and the sound doesn't play. What should I do?
The audio is an HTMLMediaElement, and calling play() returns a Promise, so it needs to be handled. Depending on the size of the file, it's usually ready to go, but if it is not loaded (e.g. a pending promise), it will throw the "AbortError" DOMException.
You can check whether it's loaded first, then catch the error to turn off the message. For example:
componentDidMount() {
  this.audio = new Audio('sounds/beep.wav')
  this.audio.load()
  this.playAudio()
}

playAudio() {
  const audioPromise = this.audio.play()
  if (audioPromise !== undefined) {
    audioPromise
      .then(_ => {
        // autoplay started
      })
      .catch(err => {
        // catch dom exception
        console.info(err)
      })
  }
}
Another pattern that has worked well without showing that error is creating the component as an HTML audio element with the autoPlay attribute and then rendering it as a component where needed. For example:
const Sound = ({ soundFileName, ...rest }) => (
  <audio autoPlay src={`sounds/${soundFileName}`} {...rest} />
)

const ComponentToAutoPlaySoundIn = () => (
  <>
    ...
    <Sound soundFileName="beep.wav" />
    ...
  </>
)
Simple error tone
If you want something as simple as playing an error tone (for non-visual feedback in a barcode-scanner environment, for instance) and don't want to install dependencies, it can be pretty simple. Just link to your audio file:
import ErrorAudio from './error.mp3'
And in the code, reference it, and play it:
var AudioPlay = new Audio(ErrorAudio);
AudioPlay.play();
Only discovered this after messing around with more complicated options.
I think it would be better to use this component (https://github.com/justinmc/react-audio-player) instead of direct DOM manipulation.
It is very straightforward indeed:
const [, setMuted] = useState(true)

useEffect(() => {
  const player = new Audio('./sound.mp3');
  const playPromise = player.play();
  if (playPromise !== undefined)
    return playPromise.then(() => setMuted(false)).catch(() => setMuted(false));
}, [])
I hope it works now :)

Change playout delay in WebRTC stream

I'm trying to cast a live MediaStream (eventually from the camera) from peerA to peerB, and I want peerB to receive the live stream in real time and then replay it with an added delay. Unfortunately, it isn't possible to simply pause the stream and resume with play, since it jumps forward to the live moment.
So I have figured out that I can use MediaRecorder + SourceBuffer to rewatch the live stream: record the stream, append the buffers to MSE (SourceBuffer), and play it 5 seconds later.
This works great on the local device (stream). But when I try to use MediaRecorder on the receiver's MediaStream (from pc.onaddstream), it looks like it gets some data and is able to append the buffer to the sourceBuffer. However, it does not replay; sometimes I get just one frame.
const [pc1, pc2] = localPeerConnectionLoop()
const canvasStream = canvas.captureStream(200)
videoA.srcObject = canvasStream
videoA.play()

// Note: using two MediaRecorders at the same time seems problematic
// But this one works
// stream2mediaSorce(canvasStream, videoB)
// setTimeout(videoB.play.bind(videoB), 5000)

pc1.addTransceiver(canvasStream.getTracks()[0], {
  streams: [ canvasStream ]
})

pc2.onaddstream = (evt) => {
  videoC.srcObject = evt.stream
  videoC.play()

  // Note: using two MediaRecorders at the same time seems problematic
  // THIS DOES NOT WORK
  stream2mediaSorce(evt.stream, videoD)
  setTimeout(() => videoD.play(), 2000)
}

/**
 * Turn a MediaStream into a SourceBuffer
 *
 * @param {MediaStream} stream Live stream to record
 * @param {HTMLVideoElement} videoElm Video element to play the recorded video in
 * @return {undefined}
 */
function stream2mediaSorce (stream, videoElm) {
  const RECORDER_MIME_TYPE = 'video/webm;codecs=vp9'
  const recorder = new MediaRecorder(stream, { mimeType: RECORDER_MIME_TYPE })

  const mediaSource = new MediaSource()
  videoElm.src = URL.createObjectURL(mediaSource)
  mediaSource.onsourceopen = (e) => {
    const sourceBuffer = mediaSource.addSourceBuffer(RECORDER_MIME_TYPE);

    const fr = new FileReader()
    fr.onerror = console.log
    fr.onload = ({ target }) => {
      console.log(target.result)
      sourceBuffer.appendBuffer(target.result)
    }
    recorder.ondataavailable = ({ data }) => {
      console.log(data)
      fr.readAsArrayBuffer(data)
    }
    setInterval(recorder.requestData.bind(recorder), 1000)
  }

  console.log('Recorder created')
  recorder.start()
}
Do you know why it won't play the video?
I have created a fiddle with all the necessary code to try it out; the JavaScript tab is the same code as above (the HTML is mostly irrelevant and does not need to be changed).
Some people try to reduce the latency, but I actually want to increase it to ~10 seconds, to rewatch something you did wrong in a golf swing for example, and if possible avoid MediaRecorder altogether.
EDIT:
I found something called "playout-delay" in some RTC extension
that allows the sender to control the minimum and maximum latency from capture to render time
https://webrtc.org/experiments/rtp-hdrext/playout-delay/
How can I use it?
Will it be of any help to me?
Update: there is a new feature that will enable this, called playoutDelayHint.
We want to provide means for javascript applications to set their preferences on how fast they want to render audio or video data. As fast as possible might be beneficial for applications which concentrates on real time experience. For others additional data buffering may provide smother experience in case of network issues.
Refs:
https://discourse.wicg.io/t/hint-attribute-in-webrtc-to-influence-underlying-audio-video-buffering/4038
https://bugs.chromium.org/p/webrtc/issues/detail?id=10287
Demo: https://jsfiddle.net/rvekxns5/
Though I was only able to set a max of 10s in my browser, it's more up to the UA vendor to do the best it can with the resources available.
import('https://jimmy.warting.se/packages/dummycontent/canvas-clock.js')
.then(({AnalogClock}) => {
  const {canvas} = new AnalogClock(100)
  document.querySelector('canvas').replaceWith(canvas)

  const [pc1, pc2] = localPeerConnectionLoop()
  const canvasStream = canvas.captureStream(200)

  videoA.srcObject = canvasStream
  videoA.play()

  pc1.addTransceiver(canvasStream.getTracks()[0], {
    streams: [ canvasStream ]
  })

  pc2.onaddstream = (evt) => {
    videoC.srcObject = evt.stream
    videoC.play()
  }

  $dur.onchange = () => {
    pc2.getReceivers()[0].playoutDelayHint = $dur.valueAsNumber
  }
})
<!-- all the irrelevant part, that you don't need to know anything about -->
<h3 style="border-bottom: 1px solid">Original canvas</h3>
<canvas id="canvas" width="100" height="100"></canvas>
<script>
function localPeerConnectionLoop(cfg = {sdpSemantics: 'unified-plan'}) {
  const setD = (d, a, b) => Promise.all([a.setLocalDescription(d), b.setRemoteDescription(d)]);
  return [0, 1].map(() => new RTCPeerConnection(cfg)).map((pc, i, pcs) => Object.assign(pc, {
    onicecandidate: e => e.candidate && pcs[i ^ 1].addIceCandidate(e.candidate),
    onnegotiationneeded: async e => {
      try {
        await setD(await pc.createOffer(), pc, pcs[i ^ 1]);
        await setD(await pcs[i ^ 1].createAnswer(), pcs[i ^ 1], pc);
      } catch (e) {
        console.log(e);
      }
    }
  }));
}
</script>
<h3 style="border-bottom: 1px solid">Local peer (PC1)</h3>
<video id="videoA" muted width="100" height="100"></video>
<h3 style="border-bottom: 1px solid">Remote peer (PC2)</h3>
<video id="videoC" muted width="100" height="100"></video>
<label> Change playoutDelayHint
  <input type="number" value="1" id="$dur">
</label>
