I know this has been asked before, but I am new to JavaScript, and after reading other answers I still can't understand why my approach isn't working. The first track that plays is random, but when the song ends, the same track repeats over and over again instead of a different random track being chosen. If audio.play chooses a random track the first time, why doesn't it choose a random track again when the song ends, instead of looping the same track? Help appreciated:
var audio_files = [
"TRACKS/1.mp3",
"TRACKS/2.mp3",
"TRACKS/3.mp3"
]
var random_file = audio_files[Math.floor(Math.random() * audio_files.length)];
var audio = new Audio(random_file);
audio.play();
audio.addEventListener('ended', function(){
    audio.play();
});
It would be easier to shuffle the array and simply iterate over it to get each file. That way you will not get the same file twice, which can happen with a purely random pick.
Once you reach the end, do the same thing: shuffle the array and iterate again. The order changes each pass, giving the impression of selecting a different file at random (while really just iterating).
Here is some pseudo code for it:
function readFiles(){
    array = shuffle(array);
    for(var i = 0; i < array.length; ++i){
        play(array[i]);
    }
    readFiles(); // to read indefinitely
}
Here and here you will find great threads on shuffling an array. I will not repeat them; just follow the answers there (a minimal sketch is also included below for reference).
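For completeness, a common in-place Fisher-Yates shuffle looks roughly like this (a minimal sketch of my own, not taken from the linked threads):
function shuffle(array) {
    // Fisher-Yates: walk the array backwards and swap each element
    // with a randomly chosen element at or before it.
    for (let i = array.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [array[i], array[j]] = [array[j], array[i]];
    }
    return array;
}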
In your case, you don't want a loop; instead, use a counter to get the next file and shuffle the array again once you reach the end:
function getNextFile(){
    if(currentFile >= array.length){ // reached the end
        shuffle(array);
        currentFile = 0;
    }
    return array[currentFile++];
}
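Wired into the original player, this could look like the following sketch; the shuffle function above and the currentFile counter are my additions, not part of the original question:
var audio_files = [
    "TRACKS/1.mp3",
    "TRACKS/2.mp3",
    "TRACKS/3.mp3"
];
var currentFile = 0;
shuffle(audio_files);

function getNextFile(){
    if(currentFile >= audio_files.length){
        shuffle(audio_files);
        currentFile = 0;
    }
    return audio_files[currentFile++];
}

var audio = new Audio(getNextFile());
audio.play();
audio.addEventListener('ended', function(){
    // point the same player at the next shuffled file
    audio.src = getNextFile();
    audio.play();
});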
Your code would need to look like this:
var audio_files = [
"TRACKS/1.mp3",
"TRACKS/2.mp3",
"TRACKS/3.mp3"
]
function random_file(){
return audio_files[Math.floor(Math.random() * audio_files.length)];
}
var audio = new Audio( random_file() );
audio.play();
audio.addEventListener('ended', function(){
    audio.play( random_file() );
});
Your listener may look like this if the player has another way to specify a file for an existing player (a standard HTMLAudioElement does, via its src property):
audio.addEventListener('ended', function(){
    audio.src = random_file();
    audio.play();
});
or, if your player has no such API method, the only way is to create a new player each time:
function listener() {
    audio = new Audio( random_file() );
    audio.play();
    audio.addEventListener('ended', listener);
}
audio.addEventListener('ended', listener);
Just install the well-known underscore.js module (on the client, or on the server with Node.js) and invoke its sample function:
var random_file = _.sample(audio_files);
To play the files in the playlist infinitely with the help of the underscore module, you can do the following.
If you are a Node.js developer, you can install the module using the following command:
npm i underscore
To use it in Node.js:
const _ = require("underscore")
To use it on the frontend, you can put the following script just before the closing </body> tag:
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.13.2/underscore-min.js"></script>
Now let's get started with the logic.
Here is my HTML file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>SHUFFLED PLAYLIST</title>
</head>
<body>
<button id="playsong">play playlist</button>
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.13.2/underscore-min.js"></script>
</body>
</html>
Here is my JavaScript file:
let played_files = [];
let playsong = document.querySelector("#playsong");
let audio = new Audio();
let audio_files = [
"songs/song_num1.mp3",
"songs/song_num2.mp3",
"songs/song_num3.m4a",
"songs/song_num4.mp3",
];
function random_file() {
let file = _.sample(audio_files);
let allSongsPlayed = _.size(played_files) === _.size(audio_files);
if (_.contains(played_files, file) || allSongsPlayed) {
if (allSongsPlayed) {
played_files = [];
}
return random_file();
} else {
played_files.push(file);
return file;
}
}
function playSong() {
let file = random_file();
audio.src = file;
audio.play();
}
playsong.addEventListener("click", () => {
playSong();
});
audio.addEventListener("ended", playSong);
So I'm trying to make a website that plays music notes from A to G at random, and your goal is to guess them. I plan on storing the note mp3s in a separate folder, but at the moment I'm just trying to get it to pick one file from that folder, and it's not working. It can play mp3s from the folder if the path is entered manually, but not automatically.
Doesn't work:
const fs = require(`fs`);
let tones = fs.readdirSync(`./tones/`).filter((f) => { return f.endsWith(`.mp3`); });
let i = Math.floor(Math.random() * tones.length); //This is always equal to zero because tones only has one file
function start() {
let audio = new Audio(`./tones/${tones[i]}`);
audio.play();
};
Does work:
const fs = require(`fs`);
let tones = fs.readdirSync(`./tones/`).filter((f) => { return f.endsWith(`.mp3`); });
let i = Math.floor(Math.random() * tones.length);
function start() {
let audio = new Audio(`./tones/Sample.mp3`);
audio.play();
};
Any ideas?
I want to create an audio editor where you can connect nodes together to create custom audio components. Every time the nodes change, they get compiled into javascript and then will be run by a new Function() to get better performance. I just read up that there is the possibility to create an AudioWorklet, which runs on a separate thread. Now I am wondering if there is a possibility of combining both ideas in a way where my algorithm gets passed to the AudioWorklet as a string of javascript code, where it then gets put into a function using new Function(codeString) inside of the constructor. Then the audioworklet's process() function will call the custom function somehow.
Is this possible in some way, or am I asking for too much? I would like to get a "yes, that's possible" or a "no, sorry" before I spend hours trying to get it to work...
Thanks for your help,
dogefromage
With the help of @AKX's comment, I crafted together this solution. The code inside the string will later be replaced by a compiler.
function generateProcessor()
{
return (`
class TestProcessor extends AudioWorkletProcessor
{
process(inputs, outputs)
{
const input = inputs[0];
const output = outputs[0];
for (let channel = 0; channel < output.length; ++channel) {
for (let i = 0; i < output[channel].length; i++) {
output[channel][i] = 0.01 * Math.acos(input[channel][i]);
}
}
return true;
}
}
registerProcessor('test-processor', TestProcessor);
`);
}
const button = document.querySelector('#button');
button.addEventListener('click', async (e) =>
{
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule(
URL.createObjectURL(new Blob([
generateProcessor()
], {type: "application/javascript"})));
const oscillator = new OscillatorNode(audioContext);
const testProcessor = new AudioWorkletNode(audioContext, 'test-processor');
oscillator.connect(testProcessor).connect(audioContext.destination);
oscillator.start();
});
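If you specifically want to keep the new Function(codeString) idea from the question, a hedged variation (untested, along the same lines as the code above) is to embed the compiled body as a string in the generated module and build the function in the processor's constructor. Here, compiledBody is a placeholder for whatever JavaScript your node compiler produces:
function generateCompiledProcessor(compiledBody)
{
    return (`
        class CompiledProcessor extends AudioWorkletProcessor
        {
            constructor()
            {
                super();
                // Build the custom DSP function once, when the node is created.
                this.custom = new Function('inputs', 'outputs', ${JSON.stringify(compiledBody)});
            }
            process(inputs, outputs)
            {
                this.custom(inputs, outputs);
                return true; // keep the processor alive
            }
        }
        registerProcessor('compiled-processor', CompiledProcessor);
    `);
}
The resulting string can then be loaded exactly like generateProcessor() above, via a Blob URL passed to audioContext.audioWorklet.addModule().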
I am trying to get the total duration of audio from an array of audio paths.
Here is what the array would look like:
var sound_paths = ["1.mp3","2.mp3",...]
I have looked at this post, which helped:
how to get audio.duration value by a function
However, I do not know how to implement this over an array. The idea is that I want to loop over each audio file, get its duration, and add it to a "sum_duration" variable.
I cannot seem to figure out a way to do this with a for loop. I have tried Promises, which I am admittedly new at. (Note: the functions below come from a class.)
getDuration(src,cb){
// takes a source audio file and a callback function
var audio = new Audio();
audio.addEventListener("loadedmetadata",()=>{
cb(audio.duration);
});
audio.src = src;
}
getAudioArrayDuration(aud_path_arr){
// takes in an array of audio paths, finds the total audio duration
// in seconds
return new Promise((resolve)=>{
var duration = 0;
for(const aud_path in aud_path_arr){
var audio = new Audio();
audio.src = aud_path;
audio.onloadedmetadata = ()=>{
console.log(audio.duration);
duration += audio.duration;
}
resolve(duration);
}
});
}
However, this obviously does not work, and will just return the duration value of 0.
How can I loop over audio files and return the total audio duration of each file summed?
I think, in this case, working with promises is the right approach, so it's time to get used to them ;) Try to remember: a promise will fulfill your wish in the near future. What makes your problem harder is that you have an array of files to check, and each will need to be covered by its own Promise, just so that your program can know when all of them have been fulfilled.
I always call my 'promised' getters 'fetch...'; that way I know they return a promise instead of a direct value.
function fetchDuration(path) {
return new Promise((resolve) => {
const audio = new Audio();
audio.src = path;
audio.addEventListener(
'loadedmetadata',
() => {
// To keep a promise maintainable, only do 1
// asynchronous activity for each promise you make
resolve(audio.duration)
},
);
})
}
function fetchTotalDuration(paths) {
// Create an array of promises and wait until all have completed
return Promise.all(paths.map((path) => fetchDuration(path)))
// Reduce the results back to a single value
.then((durations) => durations.reduce(
(acc, duration) => acc + duration,
0,
))
;
}
At some point, your code is going to have to deal with this asynchronous stuff, and I honestly believe that Promises are the easiest way to do so. It takes a little getting used to, but it'll be worth it in the end. The above could be used in your code along these lines:
window.addEventListener('DOMContentLoaded', () => {
fetchTotalDuration(["1.mp3","2.mp3",...])
.then((totalDuration) => {
document.querySelector('.player__total-duration').innerHTML = totalDuration;
})
;
});
I hacked this together real quick, so you'll have to adapt it to your function structure, but it's a working code snippet that should send you in the right direction.
Simply keep track of which audio files have been loaded, and if that matches the number of audio files queried, you call the callback with the total duration.
You should also take failing requests into account, so that if the loadedmetadata event never fires, you can react accordingly (by either falling back to 0 duration for that file, or throwing an Exception, etc.).
const cb = function(duration) {
console.log(`Total duration: ${duration}`);
};
let sound_paths = ["https://rawcdn.githack.com/anars/blank-audio/92f06aaa1f1f4cae365af4a256b04cf9014de564/5-seconds-of-silence.mp3","https://rawcdn.githack.com/anars/blank-audio/92f06aaa1f1f4cae365af4a256b04cf9014de564/2-seconds-of-silence.mp3"];
let totalDuration = 0;
let loadedSounds = [];
sound_paths.map(src => {
const audio = new Audio();
audio.addEventListener("loadedmetadata", ()=>{
totalDuration += audio.duration;
loadedSounds.push(audio);
if ( loadedSounds.length === sound_paths.length ) {
cb( totalDuration );
}
});
audio.src = src;
});
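For the failing-request case mentioned above, a minimal sketch is to also listen for the error event and count the file with a duration of 0, so the callback still fires. markLoaded is a hypothetical helper of mine; totalDuration, loadedSounds, sound_paths and cb are the variables from the snippet above:
function markLoaded(audio, duration) {
    totalDuration += duration;
    loadedSounds.push(audio);
    if ( loadedSounds.length === sound_paths.length ) {
        cb( totalDuration );
    }
}

sound_paths.map(src => {
    const audio = new Audio();
    audio.addEventListener("loadedmetadata", () => markLoaded(audio, audio.duration));
    // treat a file that fails to load as contributing 0 seconds
    audio.addEventListener("error", () => markLoaded(audio, 0));
    audio.src = src;
});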
I have the following JS code for a canvas based game.
var EXPLOSION = "sounds/explosion.wav";
function playSound(str, vol) {
var snd = new Audio();
snd.src = str;
snd.volume = vol;
snd.play();
}
function createExplosion() {
playSound(EXPLOSION, 0.5);
}
This works, however it sends a server request to download the sound file every time it is called. Alternatively, if I declare the Audio object beforehand:
var snd = new Audio();
snd.src = EXPLOSION;
snd.volume = 0.5;
function createExplosion() {
snd.play();
}
This works, however if the createExplosion function is called before the sound is finished playing, it does not play the sound at all. This means that only a single playthrough of the sound file is allowed at a time - and in scenarios that multiple explosions are taking place it doesn't work at all.
Is there any way to properly play an audio file multiple times overlapping with itself?
I was looking for this for ages for a Tetris game I'm building, and I think this solution is the best.
function playSoundMove() {
var sound = document.getElementById("move");
sound.load();
sound.play();
}
Just have it loaded and ready to go.
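This assumes an audio element with that id already exists in the page; the id and file path below are placeholders of my own, not from the answer:
<audio id="move" src="sounds/move.wav" preload="auto"></audio>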
You could just duplicate the node with cloneNode() and play() that duplicate node.
My audio element looks like this:
<audio id="knight-audio" src="knight.ogg" preload="auto"></audio>
and I have an onClick listener that does just that:
function click() {
const origAudio = document.getElementById("knight-audio");
const newAudio = origAudio.cloneNode()
newAudio.play()
}
And since the audio element isn't going to be displayed, you don't actually have to attach the node to anything.
I verified client-side and server-side that Chrome only tries to download the audio file once.
Caveats: I'm not sure about performance impacts, since on my site this clip doesn't get played more than ~40 times maximum per page. You might have to clean up the audio nodes if you're doing something much larger than that?
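If that cleanup does become necessary, one hedged option is to keep the clones in a collection and drop the reference once each one finishes, so they can be garbage-collected:
const liveClones = new Set();

function click() {
    const origAudio = document.getElementById("knight-audio");
    const newAudio = origAudio.cloneNode();
    liveClones.add(newAudio);
    // release the clone once it has finished playing
    newAudio.addEventListener('ended', () => liveClones.delete(newAudio));
    newAudio.play();
}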
Try this:
(function() {
    var snds = {};
    window.playSound = function(str, vol) {
        if( !snds[str]) (snds[str] = new Audio()).src = str;
        snds[str].volume = vol;
        snds[str].play();
    };
})();
The first time you call it, it will fetch the sound, but every time after that it will reuse the same Audio object.
EDIT: You can also preload with duplicates to allow the sound to play more than once at a time:
(function() {
    var snds = {};
    window.playSound = function(str, vol) {
        if( !snds[str]) {
            snds[str] = [new Audio()];
            snds[str][0].src = str;
        }
        var snd = snds[str], pointer = 0;
        // skip past any copy that is still playing (not paused and not ended)
        while( !snd[pointer].paused && !snd[pointer].ended) {
            pointer++;
            if( pointer >= snd.length) {
                snd.push(new Audio());
                snd[pointer].src = str;
            }
        }
        snd[pointer].volume = vol;
        snd[pointer].play();
    };
})();
Note that this will send multiple requests if you play the sound overlapping itself too much, but it should return Not Modified very quickly and will only do so if you play it more times than you have previously.
In my game I'm using preloading, but only after the sound is first triggered (it's not smart to skip preloading entirely, nor to preload everything on page load; some sounds never get played in a given session, so why load them?).
const audio = {};
audio.dataload = {'entity': false, 'entityes': [], 'n': 0};
audio.dataload.ordernum = function() {
    audio.dataload.n = (audio.dataload.n + 1) % 10;
    return audio.dataload.n;
};
audio.dataload.play = function() {
    if (!audio.dataload.entity) {
        // first call: create the source element and preload ten clones
        audio.dataload.entity = new Audio('/some.mp3');
        for (let i = 0; i < 10; i++) {
            audio.dataload.entityes.push(audio.dataload.entity.cloneNode());
        }
    }
    audio.dataload.entityes[audio.dataload.ordernum()].play();
};

audio.dataload.play(); // plays the sound and preloads the clones if they are not there yet
I've created a class that allows for layered audio. This is very similar to other answers where another node is created with the same src, but this class will only do that if necessary. If it has already created a node that has finished playing, it will replay that existing node.
Another tweak is that I initially fetch the audio and use a blob URL for it. I do this for efficiency, so the src doesn't have to be fetched externally every single time a new node is created.
class LayeredAudio {
    url;
    samples = [];

    constructor(src){
        // fetch once and keep a blob URL so later nodes don't hit the network again
        fetch(src)
            .then(response => response.blob())
            .then((blob) => {
                this.url = URL.createObjectURL(blob);
                this.samples[0] = new Audio(this.url);
            });
    }

    play(){
        // reuse a paused node if one exists; otherwise add a new layer
        if(!this.samples.find(e => e.paused)?.play()){
            this.samples.push(new Audio(this.url));
            this.samples[this.samples.length - 1].play();
        }
    }
}
const aud = new LayeredAudio("URL");
aud.play()
Relying more on memory than process time, we can make an array of multiple clones of the Audio and then play them by order:
function gameSnd() {
tick_wav = new Audio('sounds/tick.wav');
victory_wav = new Audio('sounds/victory.wav');
counter = 0;
ticks = [];
for (var i = 0; i<10;i++)
ticks.push(tick_wav.cloneNode());
tick = function(){
counter = (counter + 1)%10;
ticks[counter].play();
}
victory = function(){
victory_wav.play();
}
}
When I tried some of the other solutions there was some delay, but I may have found a better alternative. This will plow through a good chunk of memory if you make the audio array's length high. I doubt you will need to play the same audio more than 10 times at the same time, but if you do just make the array length longer.
var audio = new Array(10);
var audioIndex = 0;
// The length of the audio array is how many times
// the audio can overlap
for (var i = 0; i < audio.length; i++) {
    audio[i] = new Audio("your audio");
}
function PlayAudio() {
// Whenever you want to play it call this function
audio[audioIndex].play();
audioIndex++;
if(audioIndex > audio.length - 1) {
audioIndex = 0;
}
}
I have found this to be the simplest way to overlap the same audio with itself:
<button id="btn" onclick="clickMe()">ding</button>
<script>
function clickMe() {
const newAudio = new Audio("./ding.mp3")
newAudio.play()
}
</script>
I often read that it's not possible to pause/resume audio files with the Web Audio API.
But now I saw an example where they actually made it possible to pause and resume it. I tried to figure out how they did it. I thought maybe source.looping = false is the key, but it wasn't.
For now my audio always re-plays from the start.
This is my current code:
var context = new (window.AudioContext || window.webkitAudioContext)();
function AudioPlayer() {
this.source = context.createBufferSource();
this.analyser = context.createAnalyser();
this.stopped = true;
}
AudioPlayer.prototype.setBuffer = function(buffer) {
this.source.buffer = buffer;
this.source.looping = false;
};
AudioPlayer.prototype.play = function() {
this.source.connect(this.analyser);
this.analyser.connect(context.destination);
this.source.noteOn(0);
this.stopped = false;
};
AudioPlayer.prototype.stop = function() {
this.analyser.disconnect();
this.source.disconnect();
this.stopped = true;
};
Does anybody know what to do, to get it work?
Oskar's answer and ayke's comment are very helpful, but I was missing a code example. So I wrote one: http://jsfiddle.net/v3syS/2/ I hope it helps.
var url = 'http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3';
var ctx = new webkitAudioContext();
var buffer;
var sourceNode;
var startedAt;
var pausedAt;
var paused;
function load(url) {
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.responseType = 'arraybuffer';
request.onload = function() {
ctx.decodeAudioData(request.response, onBufferLoad, onBufferError);
};
request.send();
};
function play() {
sourceNode = ctx.createBufferSource();
sourceNode.connect(ctx.destination);
sourceNode.buffer = buffer;
paused = false;
if (pausedAt) {
startedAt = Date.now() - pausedAt;
sourceNode.start(0, pausedAt / 1000);
}
else {
startedAt = Date.now();
sourceNode.start(0);
}
};
function stop() {
sourceNode.stop(0);
pausedAt = Date.now() - startedAt;
paused = true;
};
function onBufferLoad(b) {
buffer = b;
play();
};
function onBufferError(e) {
console.log('onBufferError', e);
};
document.getElementById("toggle").onclick = function() {
if (paused) play();
else stop();
};
load(url);
In current browsers (Chrome 43, Firefox 40) there are now 'suspend' and 'resume' methods available for AudioContext:
var audioCtx = new AudioContext();
susresBtn.onclick = function() {
if(audioCtx.state === 'running') {
audioCtx.suspend().then(function() {
susresBtn.textContent = 'Resume context';
});
} else if(audioCtx.state === 'suspended') {
audioCtx.resume().then(function() {
susresBtn.textContent = 'Suspend context';
});
}
}
(modified example code from https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/suspend)
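The snippet assumes a button element with the id susresBtn is already in the page and has been looked up; for example (hypothetical markup and lookup, not part of the MDN example):
<button id="susresBtn">Suspend context</button>
<script>
    const susresBtn = document.getElementById('susresBtn');
</script>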
Actually the Web Audio API can do the pause and play task for you. It knows the current state of the audio context (running or suspended), so you can do it this easy way:
susresBtn.onclick = function() {
if(audioCtx.state === 'running') {
audioCtx.suspend()
} else if(audioCtx.state === 'suspended') {
audioCtx.resume()
}
}
I hope this can help.
Without spending any time checking the source of your example, I'd say you'll want to use the noteGrainOn method of the AudioBufferSourceNode (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#methodsandparams-AudioBufferSourceNode)
Just keep track of how far into the buffer you were when you called noteOff, and then do noteGrainOn from there when resuming on a new AudioBufferSourceNode.
Did that make sense?
EDIT:
See comments below for updated API calls.
EDIT 2, 2019: See MDN for updated API calls; https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode/start
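As a rough sketch of that idea with the current API names (start with an offset instead of noteGrainOn, stop instead of noteOff); ctx and buffer are assumed to already exist, and this is not code from the original answer:
var source, startedAt = 0, pausedAt = 0;

function play() {
    source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    startedAt = ctx.currentTime - pausedAt;
    source.start(0, pausedAt);   // start(when, offset) replaces noteGrainOn
}

function pause() {
    pausedAt = ctx.currentTime - startedAt;
    source.stop(0);              // stop(when) replaces noteOff
}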
As a Chrome fix, every time you want to play a sound, do it like this:
if(audioCtx.state === 'suspended') {
audioCtx.resume().then(function() {
audio.play();
});
}else{
audio.play();
}
The lack of a built-in pause functionality in the Web Audio API seems like a major oversight to me. Possibly, in the future it will be possible to do this using the planned MediaElementSource, which will let you hook up an element (which supports pausing) to Web Audio. For now, most workarounds seem to be based on remembering the playback time (such as described in imbrizi's answer). Such a workaround has issues when looping sounds (does the implementation loop gaplessly or not?), and when you allow dynamically changing the playbackRate of sounds (as both affect timing). Another, equally hack-ish and technically incorrect, but much simpler workaround you can use is:
source.playbackRate.value = paused ? 0.0000001 : 1;
Unfortunately, 0 is not a valid value for playbackRate (which would actually pause the sound). However, for many practical purposes, some very low value, like 0.000001, is close enough, and it won't produce any audible output.
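As a sketch of how that might be toggled (assuming source is an AudioBufferSourceNode that is already playing; playbackRate is an AudioParam, so the value goes on .value):
var paused = false;

function togglePause(source) {
    paused = !paused;
    // a near-zero rate effectively "pauses" without stopping the node
    source.playbackRate.value = paused ? 0.0000001 : 1;
}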
UPDATE: This is only valid for Chrome. Firefox (v29) does not yet implement the MediaElementAudioSourceNode.mediaElement property.
Assuming that you already have the AudioContext reference and your media source (e.g. via the AudioContext.createMediaElementSource() method call), you can call MediaElement.play() and MediaElement.pause() on your source, e.g.
source.mediaElement.pause();
source.mediaElement.play();
No need for hacks and workarounds, it's supported.
If you are working with an <audio> tag as your source, you should not call pause directly on the audio element in your JavaScript; that will stop playback.
In 2017, using ctx.currentTime works well for keeping track of the point in the song. The code below uses one button (songStartPause) that toggles between a play & pause button. I used global variables for simplicity's sake. The variable musicStartPoint keeps track of what time you're at in the song. The music api keeps track of time in seconds.
Set your initial musicStartPoint at 0 (beginning of the song)
var ctx = new webkitAudioContext();
var buff, src;
var musicLoaded = false;
var musicStartPoint = 0;
var songOnTime, songEndTime;
var songOn = false;
songStartPause.onclick = function() {
if(!songOn) {
if(!musicLoaded) {
loadAndPlay();
musicLoaded = true;
} else {
play();
}
songOn = true;
songStartPause.innerHTML = "||" //a fancy Pause symbol
} else {
songOn = false;
src.stop();
setPausePoint();
songStartPause.innerHTML = ">" //a fancy Play symbol
}
}
Use ctx.currentTime to subtract the time the song started playing from the time it stopped, and add this elapsed time to however far into the song you already were.
function setPausePoint() {
songEndTime = ctx.currentTime;
musicStartPoint += (songEndTime - songOnTime);
}
Load/play functions.
function loadAndPlay() {
var req = new XMLHttpRequest();
req.open("GET", "//mymusic.com/unity.mp3")
req.responseType = "arraybuffer";
req.onload = function() {
ctx.decodeAudioData(req.response, function(buffer) {
buff = buffer;
play();
})
}
req.send();
}
function createBuffer() {
src = ctx.createBufferSource();
src.buffer = buff;
}
function connectNodes() {
src.connect(ctx.destination);
}
Lastly, the play function tells the song to start at the specified musicStartPoint (and to play it immediately), and also sets the songOnTime variable.
function play(){
createBuffer()
connectNodes();
songOnTime = ctx.currentTime;
src.start(0, musicStartPoint);
}
*Sidenote: I know it might look cleaner to set songOnTime up in the click function, but I figure it makes sense to grab the time code as close as possible to src.start, just like how we grab the pause time as close as possible to src.stop.
I didn't follow the full discussion, but I will soon. I simply headed over to the HAL demo to understand it. For those doing the same as me, I would like to explain:
1 - how to make this code work now.
2 - a trick to get pause/play from this code.
1: Replace noteOn(xx) with start(xx) and put any valid URL in sound.load(). I think that's all I did. You will get a few errors in the console that are pretty instructive; follow them. Or not: sometimes you can ignore them and it still works. They relate to the -webkit prefix on some functions, for which the new names are given.
2: At some point, when it works, you may want to pause the sound.
Pausing will work. But, as everybody knows, pressing play again would raise an error, and as a result the code in this.play() after the faulty source_.start(0) is not executed.
I simply enclosed that line in a try/catch:
this.play = function() {
    analyser_ = context_.createAnalyser();
    // Connect the processing graph: source -> analyser -> destination
    source_.connect(analyser_);
    analyser_.connect(context_.destination);
    try {
        source_.start(0);
    }
    catch(e) {
        this.playing = true;
        (function callback(time) {
            processAudio_(time);
            reqId_ = window.webkitRequestAnimationFrame(callback);
        })();
    }
};
And it works: you can use play/pause.
I would like to mention that this HAL simulation is really incredible. Follow those simple steps; it's worth it!