JavaScript play() function does not work in Chrome - javascript

I created an audio object and want to play it when the user leaves the window. Here is my code:
$('body').on('mouseleave', function(){
    var audio = new Audio('quite-impressed.mp3');
    audio.play();
});
It works well in Firefox. It also works in Chrome if I click in the page and then move the mouse outside of the body. But when I leave the mouse without clicking in the page, an error is shown in the console and the audio does not play:
Uncaught (in promise) DOMException: play() failed because the user didn't interact with the document first. https://developers.google.com/web/updates/2017/09/autoplay-policy-changes
But on this example site it works fine without interacting with the page. How can I make that possible? Thanks in advance.

It seems they use an AudioContext to play that sound.
Chrome backed out of blocking the AudioContext API a few months ago, because a lot of legitimate uses were not prepared for such a restriction and were broken by it.
But M71, which will be released in December 2018, will re-enable that restriction; you can read about it here: https://developers.google.com/web/updates/2017/09/autoplay-policy-changes#webaudio
// this will work in Chrome < 70 but not afterward
onmouseout = e => {
    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    osc.connect(ctx.destination);
    osc.start(0);
    osc.stop(1);
};
Outsourced live example, since Stacksnippets are always granted the user gesture anyway: https://jsfiddle.net/zy3ev8ka/

Try this:
window.audio = new Audio('quite-impressed.mp3');
$('body').on('mouseleave', function(){
    audio.play();
});

This worked for me on Chrome 77:
In the address bar: chrome://settings/content/sound
Turn off "Allow sites to play sound (recommended)"
Turn it on again

If your JS play() function is not running and your code is correct, this may help: you have to allow your browser to play sound for that site.
Go to Settings => Privacy and security => Site settings => Sound,
and then add your local URL to the allow-to-play section.

Related

Is there a way to get OscillatorNode to produce a sound on an iOS device?

I'm working on a music web app that has a piano keyboard. When a user presses a piano key, I'm using OscillatorNode to play a brief tone corresponding to the key:
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playNote(note) {
    let oscillator;
    let freq = notes[note];
    console.debug(note + " (" + freq + " Hz)");
    oscillator = audioCtx.createOscillator(); // create Oscillator node
    oscillator.type = wavetypeEl.val(); // triangle wave by default
    oscillator.frequency.setValueAtTime(freq, audioCtx.currentTime); // freq = value in hertz
    oscillator.connect(audioCtx.destination);
    oscillator.start();
    oscillator.stop(audioCtx.currentTime + 0.5);
}

$('#keyboard button').on('click', (e) => {
    playNote(e.target.dataset.note);
});
This works on all the desktop and Android browsers I've tried, but iOS stubbornly refuses to play any sound. I see that I need a user interaction to "unlock" an AudioContext on iOS, but I would have thought calling playNote() from my click function would have done the trick.
According to Apple, I should be able to use noteOn() on my oscillator object, instead of oscillator.start() the way I've got it in my example. But that doesn't seem to be a valid method.
I must be missing something simple here. Anybody know?
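One thing worth ruling out first (a minimal sketch, not a confirmed fix for this exact setup): on iOS the AudioContext often starts out suspended, so resuming it inside the gesture handler before starting the oscillator is a common unlocking pattern. This reuses audioCtx and playNote from the question:

// Sketch only: resume the (possibly suspended) context inside the user gesture,
// then play the note as before.
$('#keyboard button').on('click', (e) => {
    if (audioCtx.state === 'suspended') {
        audioCtx.resume().then(() => playNote(e.target.dataset.note));
    } else {
        playNote(e.target.dataset.note);
    }
});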
If everything seems to be working fine, it could be that the device itself is on mute. For some reason (or for no reason at all) Safari doesn't play any sound coming from the Web Audio API when the device is muted, but it plays everything else.
There are some hacky ways to circumvent this bug, which basically work by playing something with an audio element first before using the Web Audio API.
unmute-ios-audio, for example, is a library which implements the hack mentioned above.
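For illustration, a rough sketch of that kind of hack (the file name and event choice are assumptions; the library above does this more robustly):

// Hypothetical sketch: play a short <audio> element on the first touch so that
// subsequent Web Audio output follows the media volume instead of the mute switch.
const unlockAudio = new Audio('silence.mp3'); // tiny silent file, name is an assumption
document.addEventListener('touchend', () => {
    unlockAudio.play().catch(() => {});       // ignore rejection if playback is blocked
}, { once: true });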
I have an iPhone XS Max and the demo produces sound on it with your code as it is right now... I've also read that for iOS the element needs both an onClick handler and a style of {cursor: pointer} set to work properly (as of a few years ago), but it seems to be working regardless.

Speech gets cut off in Firefox when page is auto-refreshed but not in Google Chrome

I have this problem where in firefox the speech gets cut off if the page is auto-refreshed, but in google chrome it finishes saying the speech even if the page is auto-refreshed. How do I fix it so that the speech doesn't get cut off in firefox even when the page is auto-refreshed?
msg = new SpeechSynthesisUtterance("please finish saying this entire sentence.");
window.speechSynthesis.speak(msg);

(function ($) {
    'use strict';
    if (window == window.top) {
        var body = $('body').empty();
        var myframe = $('<iframe>')
            .attr({ src: location.href })
            .css({ height: '95vh', width: '100%' })
            .appendTo(body)
            .on('load', function () {
                var interval;
                interval = 750;
                setTimeout(function () {
                    myframe.attr({ src: location.href });
                }, interval);
            });
    }
})(jQuery);
I have this problem where in firefox the speech gets cut off if the
page is auto-refreshed, but in google chrome it finishes saying the
speech even if the page is auto-refreshed.
The described behaviour in Firefox is a sane, expected implementation.
Browsing the source code of Firefox and Chromium, the implementation of speechSynthesis.speak() is based on a socket connection with the local speech server. On *nix that server is usually speechd (speech-dispatcher). See How to programmatically send a unix socket command to a system server autospawned by browser or convert JavaScript to C++ souce code for Chromium? for a description of trying to implement SSML parsing at Chromium.
Eventually I decided to write my own code, SpeechSynthesisSSMLParser, to achieve that requirement using JavaScript according to the W3C specification, after asking more than one question at SE sites, filing issues and bugs, and posting on W3C mailing lists without any evidence that SSML parsing would ever be included as part of the Web Speech API.
Once that connection is initiated, a queue is created for calls to .speak(). Even when the connection is closed, Task Manager might still show the active process registered by the service.
The process at Chromium/Chrome is not without bugs; the closest issue I have filed to what is being described in the question is
Issue 797624: "speak speak slash" is audio output of .speak() following two calls to .speak(), .pause() and .resume()
Why hasn't Issue 88072 and Issue 795371 been answered? Are Internals>SpeechSynthesis and Blink>Speech dead? (for possible reason why "but in google chrome it finishes saying the speech even if the page is auto-refreshed." is still possible at Chrome)
.volume property issues
Issue 797512: Setting SpeechSynthesisUtterance.volume does not change volume of audio output of speechSynthesis.speak() (Chromium/Chrome)
Bug 1426978 Setting SpeechSynthesisUtterance.volume does not change volume of audio output of speechSynthesis.speak() (Firefox)
The most egregious issue is Chromium/Chrome's webkitSpeechRecognition implementation, which records the user's audio and posts that audio data to a remote service, where a transcript is returned to the browser, without explicitly notifying the user that this is taking place; marked WONT FIX
Issue 816095: Does webkitSpeechRecognition send recorded audio to a remote web service by default?
Relevant W3C Speech API issues at GitHub
The UA should be able to disallow speak() from autoplaying #27
Precisely define when speak() should fail due to autoplay rules #35 (ironically, relevant to the reported behaviour at Chromium/Chrome and output described at this question, see Web Audio, Autoplay Policy and Games and Autoplay Policy Changes)
Intent to Deprecate: speechSynthesis.speak without user activation
Summary
The SpeechSynthesis API is actively being abused on the web. We don’t have hard data on abuse, but since other autoplay avenues are
starting to be closed, abuse is anecdotally moving to the Web Speech
API, which doesn't follow autoplay rules.
After deprecation, the plan is to cause speechSynthesis.speak to
immediately fire an error if specific autoplay rules are not
satisfied. This will align it with other audio APIs in Chrome.
Timing of SpeechSynthesis state changes not defined #39
Timing of SpeechSynthesisUtterance events firing not defined #40
Clarify what happens if two windows try to speak #47
In summary, I would not describe the behaviour in Firefox as a "problem"; rather, the behaviour in Chrome is the potential "problem".
Diving into the W3C Web Speech API implementation in browsers is not a trivial task, for several reasons, including the apparent focus on (or available option of) commercial TTS/STT services and proprietary, closed-source implementations of speech synthesis and speech recognition in "smart phones", in lieu of fixing the various issues with the actual deployment of the W3C Web Speech API in modern browsers.
The maintainers of speechd (speech-dispatcher) are very helpful with regards to the server side (local speech-dispatcher socket).
Cannot speak for Firefox maintainers. Would estimate it is unlikely that the feature request of continuing execution of audio output by .speak() from a reloaded window is consistent with the recent autoplay policies implemented by browsers. Though you can still file a Firefox bug to ask if audio output (from any API or interface) is expected to continue during a reload of the current window, and if there are any preferences or policies which can be set to override the described behaviour, as suggested in the answer by #zip, and get the answer from the implementers themselves.
There are individuals and groups composing FOSS code who are active in the domain and willing to help STT/TTS development, many of them active at GitHub, which is another option for asking questions about how to implement what you are trying to achieve specifically in the Firefox browser.
Outside of asking implementers for the feature request, you can read the source code and try to create one or more workarounds. Alternatives include using meSpeak.js, though that still does not necessarily address whether Firefox is intentionally blocking audio output during a reload of the window.
Not sure why there's a difference in behavior... guest271314 might be on to something in his answer. However, you may be able to prevent Firefox from stopping the TTS by intercepting the reload with an onbeforeunload handler and waiting for the utterance to finish:
msg = new SpeechSynthesisUtterance("say something");
window.speechSynthesis.speak(msg);

window.onbeforeunload = function(e) {
    if (window.speechSynthesis.speaking) {
        e.preventDefault();
        msg.addEventListener('end', function(event) {
            // logic to continue unload here
        });
    }
};
EDITED: See the more elegant solution with promises below the initial answer!
The snippet below is a workaround for the browser inconsistencies found in Firefox; checking synth.speaking in the interval and only triggering a reload once it's false prevents the synth from cutting off prematurely:
(It does NOT work properly in the SO snippet. I assume it doesn't like iframes in iframes or whatever; just copy-paste the code into a file and open it with Firefox!)
<p>I'm in the body, but will be in an iFrame</p>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
    var synth = window.speechSynthesis;
    msg = new SpeechSynthesisUtterance("please finish saying this entire sentence.");
    synth.speak(msg);

    (function ($) {
        'use strict';
        if (window == window.top) {
            var body = $('body').empty();
            var myframe = $('<iframe>')
                .attr({ src: location.href })
                .css({ height: '95vh', width: '100%' })
                .appendTo(body)
                .on('load', function () {
                    var interval;
                    interval = setInterval(function () {
                        if (!synth.speaking) {
                            myframe.attr({ src: location.href });
                            clearInterval(interval);
                        }
                    }, 750);
                });
        }
    })(jQuery);
</script>
A more elaborate solution could be to not have any setTimeout() or setInterval() at all, but to use promises instead. That way the page will simply reload whenever the message is done synthesizing, no matter how short or long it is. This will also prevent the "double"/overlapping speech on the initial page load. Not sure if this helps in your scenario, but here you go:
<button id="toggleSpeech">Stop Speaking!</button>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
    if (window == window.top) {
        window.speech = {
            say: function(msg) {
                return new Promise(function(resolve, reject) {
                    if (!SpeechSynthesisUtterance) {
                        reject('Web Speech API is not supported');
                    }
                    var utterance = new SpeechSynthesisUtterance(msg);
                    utterance.addEventListener('end', function() {
                        resolve();
                    });
                    utterance.addEventListener('error', function(event) {
                        reject('An error has occurred while speaking: ' + event.error);
                    });
                    window.speechSynthesis.speak(utterance);
                });
            },
            speak: true,
        };
    }

    (function($) {
        'use strict';
        if (window == window.top) {
            var body = $('body').empty();
            var myframe = $('<iframe>')
                .attr({ src: location.href })
                .css({ height: '95vh', width: '100%' })
                .appendTo(body)
                .on('load', function () {
                    var $iframe = $(this).contents();
                    $iframe.find('#toggleSpeech').on('click', function(e) {
                        console.log('speaking will stop when the last sentence is done...');
                        window.speech.speak = !window.speech.speak;
                    });
                    window.speech.say('please finish saying this entire sentence.')
                        .then(function() {
                            if ( window.speech.speak ) {
                                console.log('speaking done, reloading iframe!');
                                myframe.attr({ src: location.href });
                            }
                        });
                });
        }
    })(jQuery);
</script>
NOTE: Chrome (since v70) does NOT allow the immediate calling of window.speechSynthesis.speak(new SpeechSynthesisUtterance(msg)) anymore; you will get the error "speechSynthesis.speak() without user activation is no longer allowed...", more details here. So technically the user would have to activate the script in Chrome to make it work!
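A minimal, generic sketch of how that requirement can be satisfied (not tied to the iframe setup above): defer the first speak() call until a user gesture.

// Sketch: only call speak() after a click so Chrome's user-activation rule
// for speechSynthesis is satisfied.
document.addEventListener('click', () => {
    const msg = new SpeechSynthesisUtterance("please finish saying this entire sentence.");
    window.speechSynthesis.speak(msg);
}, { once: true });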
Firefox:
First, type "about:config" into the address bar. This will take you to a page with a warning asking you to accept the risk; accept it. Look for the preference named "accessibility.blockautorefresh" in the list, right-click it, select the Toggle option from the menu that appears, and set it to true rather than false. This change will block auto refresh in the Firefox browser. Remember that this option is reversible!

Javascript Web Audio API error: Failed to set the 'value' property on 'AudioParam': The provided float value is non-finite

I have no idea what this means. I have a feeling something in the Web Audio API has changed recently, and the browsers have implemented the change as my application was working fine with no errors the last time I checked it about 2 weeks ago. I have changed nothing in my code recently since the last time it was working.
The error I am getting is:
Uncaught TypeError: Failed to set the 'value' property on
'AudioParam': The provided float value is non-finite.
The line where the error occurs is this line:
gainNode.gain.value = volume;
The application can be viewed here:
http://aceroinc.ca/harmanKardon/
When the power button is hit, the app should turn on and a radio station should begin streaming. (In chrome only... the .aac format will not work in Firefox and I am aware of that)
I initialize the web audio api after the DOM loads...
window.AudioContext = window.AudioContext || window.webkitAudioContext;
context = new AudioContext();
source = context.createMediaElementSource(document.getElementById('audio'));
gainNode = context.createGain();
var gainDefault = gainNode.gain.defaultValue;
Then down inside the powerOn function I have:
gainNode.gain.value = currVolume;
which is causing the error.
When I check it in safari on my iphone the application works. So this seems to be a chrome issue.
On my desktop computer I have Chrome Version 42.0.2311.90 (64-bit), and it does not work.
On my laptop I have Chrome Version 41.0.2272.118 m and it does work.
On my iPhone I have Chrome Version 40.0.2214.73 and it does work.
It's because float values can be finite or non-finite, so in order to prevent this error, check whether the parsed float value is finite or not.
For more details on parseFloat(), read the parseFloat() documentation.
function changeVolume(volume) {
    volume = parseFloat(volume);
    if (isFinite(volume)) {
        gainNode.gain.value = volume;
    }
}
"volume" is your DIV element containing the volume knob, not a value. I think you meant "currVolume".

How to avoid playing back the sound that is being recorded

I'm talking about feedback - when you make a simple JavaScript application that opens a stream from the user and reads the frequency analysis (or whatever it is), it throws all received data back to the headphones in both Google Chrome and Opera. Firefox is silent most of the time and randomly creates a huge mess with unstable feedback - it also closes the stream after a few seconds. Generally the thing doesn't work in Firefox yet.
I created a fiddle. If your browser doesn't support it you'll just get error in the console I assume.
The critical part of the code is the function that is called when user accepts the request for the microphone access:
// Not sure why I do this
var inputPoint = context.createGain();
// Create an AudioNode from the stream.
var source = context.createMediaStreamSource(stream);
source.connect(inputPoint);
// Analyser - this converts raw data into spectral analysis
window.analyser = context.createAnalyser();
// More stuff I know nothing about
analyser.fftSize = 2048;
// Sounds much like connecting nodes in MatLab, doesn't it?
inputPoint.connect(analyser);
analyser.connect(context.destination);
// THIS should probably make the sound silent (gain: 0) but it doesn't
var zeroGain = context.createGain();
zeroGain.gain.value = 0.0;
// More connecting... are you already lost which node is which? Because I am.
inputPoint.connect(zeroGain);
zeroGain.connect(context.destination);
Zero gain idea is not mine, I have stolen it from simple sound recorder demo. But what works for them doesn't work for me.
The demo has also no problems in Firefox, like I do.
In the function mediaGranted(stream) {..., comment out Fiddle line #46:
//analyser.connect(context.destination);
More info: https://mdn.mozillademos.org/files/5081/WebAudioBasics.png
Nice demo: http://mdn.github.io/voice-change-o-matic/
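Putting that together, the microphone branch of the graph would look roughly like this (a sketch based on the question's code, with the offending connection removed):

// Sketch: keep the analyser fed by the mic, but never connect the mic path to
// context.destination, so nothing is echoed back to the headphones.
var inputPoint = context.createGain();
var source = context.createMediaStreamSource(stream);
source.connect(inputPoint);

window.analyser = context.createAnalyser();
analyser.fftSize = 2048;
inputPoint.connect(analyser);
// analyser.connect(context.destination); // <-- omit this line to avoid feedback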

Sound effects in JavaScript / HTML5

I'm using HTML5 to program games; the obstacle I've run into now is how to play sound effects.
The specific requirements are few in number:
Play and mix multiple sounds,
Play the same sample multiple times, possibly overlapping playbacks,
Interrupt playback of a sample at any point,
Preferably play WAV files containing (low quality) raw PCM, but I can convert these, of course.
My first approach was to use the HTML5 <audio> element and define all sound effects in my page. Firefox plays the WAV files just peachy, but calling #play multiple times doesn't really play the sample multiple times. From my understanding of the HTML5 spec, the <audio> element also tracks playback state, so that explains why.
My immediate thought was to clone the audio elements, so I created the following tiny JavaScript library to do that for me (depends on jQuery):
var Snd = {
    init: function() {
        $("audio").each(function() {
            var src = this.getAttribute('src');
            if (src.substring(0, 4) !== "snd/") { return; }
            // Cut out the basename (strip directory and extension)
            var name = src.substring(4, src.length - 4);
            // Create the helper function, which clones the audio object and plays it
            var Constructor = function() {};
            Constructor.prototype = this;
            Snd[name] = function() {
                var clone = new Constructor();
                clone.play();
                // Return the cloned element, so the caller can interrupt the sound effect
                return clone;
            };
        });
    }
};
So now I can do Snd.boom(); from the Firebug console and play snd/boom.wav, but I still can't play the same sample multiple times. It seems that the <audio> element is really more of a streaming feature rather than something to play sound effects with.
Is there a clever way to make this happen that I'm missing, preferably using only HTML5 and JavaScript?
I should also mention that, my test environment is Firefox 3.5 on Ubuntu 9.10. The other browsers I've tried - Opera, Midori, Chromium, Epiphany - produced varying results. Some don't play anything, and some throw exceptions.
HTML5 Audio objects
You don't need to bother with <audio> elements. HTML 5 lets you access Audio objects directly:
var snd = new Audio("file.wav"); // buffers automatically when created
snd.play();
There's no support for mixing in the current version of the spec.
To play the same sound multiple times, create multiple instances of the Audio object. You could also set snd.currentTime = 0 on the object after it finishes playing.
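For example, both options might look roughly like this (a sketch, file name assumed):

// Overlapping playback: create a fresh Audio instance per shot.
function playShot() {
    new Audio("file.wav").play();
}

// Back-to-back playback: rewind the existing element and play it again.
function replay(snd) {
    snd.currentTime = 0;
    snd.play();
}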
Since the JS constructor doesn't support fallback <source> elements, you should use
(new Audio()).canPlayType("audio/ogg; codecs=vorbis")
to test whether the browser supports Ogg Vorbis.
If you're writing a game or a music app (more than just a player), you'll want to use more advanced Web Audio API, which is now supported by most browsers.
Web Audio API
Edit: As of December 2021, Web Audio API is essentially supported in Chrome, Firefox, Safari and all the other major browsers (excluding IE, of course).
As of July 2012, the Web Audio API is now supported in Chrome, and at least partly supported in Firefox, and is slated to be added to IOS as of version 6.
Although the Audio element is robust enough to be used programmatically for basic tasks, it was never meant to provide full audio support for games and other complex applications. It was designed to allow a single piece of media to be embedded in a page, similar to an img tag. There are a lot of issues with trying to use it for games:
Timing slips are common with Audio elements
You need an Audio element for each instance of a sound
Load events aren't totally reliable, yet
No common volume controls, no fading, no filters/effects
Here are some good resources to get started with the Web Audio API:
MDN documentation
Getting Started With WebAudio article
The FieldRunners WebAudio Case Study is also a good read
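As a rough idea of what the Web Audio API approach looks like (a minimal sketch with an assumed file name; the resources above cover the details):

// Sketch: decode a sample once, then create a cheap one-shot source node per play,
// which allows overlapping playback and per-play gain.
const ctx = new (window.AudioContext || window.webkitAudioContext)();
let boomBuffer = null;

fetch('snd/boom.wav')                          // assumed path
    .then(res => res.arrayBuffer())
    .then(data => ctx.decodeAudioData(data))
    .then(buffer => { boomBuffer = buffer; });

function playBoom(volume = 1) {
    if (!boomBuffer) return;                   // not loaded yet
    const src = ctx.createBufferSource();
    const gain = ctx.createGain();
    gain.gain.value = volume;
    src.buffer = boomBuffer;
    src.connect(gain);
    gain.connect(ctx.destination);
    src.start();
}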
howler.js
For game authoring, one of the best solutions is to use a library which solves the many problems we face when writing code for the web, such as howler.js. howler.js abstracts the great (but low-level) Web Audio API into an easy-to-use framework. It will attempt to fall back to the HTML5 Audio element if the Web Audio API is unavailable.
var sound = new Howl({
    urls: ['sound.mp3', 'sound.ogg']
}).play();

// it also provides calls for spatial/3d audio effects (most browsers)
sound.pos3d(0.1, 0.3, 0.5);
wad.js
Another great library is wad.js, which is especially useful for producing synth audio, such as music and effects. For example:
var saw = new Wad({ source: 'sawtooth' });
saw.play({
    volume: 0.8,
    wait: 0,         // Time in seconds between calling play() and actually triggering the note.
    loop: false,     // This overrides the value for loop on the constructor, if it was set.
    pitch: 'A4',     // A4 is 440 hertz.
    label: 'A',      // A label that identifies this note.
    env: { hold: 9001 },
    panning: [1, -1, 10],
    filter: { frequency: 900 },
    delay: { delayTime: .8 }
});
Sound for Games
Another library similar to Wad.js is "Sound for Games"; it focuses more on effects production, while providing a similar set of functionality through a relatively distinct (and perhaps more concise-feeling) API:
function shootSound() {
    soundEffect(
        1046.5,           // frequency
        0,                // attack
        0.3,              // decay
        "sawtooth",       // waveform
        1,                // volume
        -0.8,             // pan
        0,                // wait before playing
        1200,             // pitch bend amount
        false,            // reverse bend
        0,                // random pitch range
        25,               // dissonance
        [0.2, 0.2, 2000], // echo array: [delay, feedback, filter]
        undefined         // reverb array: [duration, decay, reverse?]
    );
}
Summary
Each of these libraries is worth a look, whether you need to play back a single sound file or perhaps create your own HTML-based music editor, effects generator, or video game.
You may also want to use this to detect HTML 5 audio in some cases:
http://diveintohtml5.ep.io/everything.html
HTML 5 JS Detect function
function supportsAudio() {
    var a = document.createElement('audio');
    return !!(a.canPlayType && a.canPlayType('audio/mpeg;').replace(/no/, ''));
}
Here's one method for making it possible to play even the same sound simultaneously. Combine it with a preloader, and you're all set. This works with Firefox 17.0.1 at least; I haven't tested it with anything else yet.
// collection of sounds that are playing
var playing = {};
// collection of sounds
var sounds = { step: "knock.ogg", throw: "swing.ogg" };
// function that is used to play sounds
function player(x) {
    var a, b;
    b = new Date();
    a = x + b.getTime();
    playing[a] = new Audio(sounds[x]);
    // with this we prevent the playing-object from becoming a memory-monster:
    playing[a].onended = function() { delete playing[a]; };
    playing[a].play();
}
Bind this to a keyboard key, and enjoy:
player("step");
To play the same sample multiple times, wouldn't it be possible to do something like this:
e.pause(); // Perhaps optional
e.currentTime = 0;
e.play();
(e is the audio element)
Perhaps I completely misunderstood your problem, do you want the sound effect to play multiple times at the same time? Then this is completely wrong.
Sounds like what you want is multi-channel sound. Let's suppose you have 4 channels (like on really old 16-bit games). I haven't got round to playing with the HTML5 audio feature yet, but don't you just need 4 <audio> elements, and to cycle which one is used to play the next sound effect? Have you tried that? What happens? If it works: to play more sounds simultaneously, just add more <audio> elements.
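A quick sketch of that idea (names and the channel count are just for illustration):

// Hypothetical 4-channel pool: each new effect takes the next <audio> element
// in round-robin order, so up to four sounds can overlap.
const channels = [new Audio(), new Audio(), new Audio(), new Audio()];
let next = 0;

function playEffect(url) {
    const ch = channels[next];
    next = (next + 1) % channels.length;
    ch.src = url;   // assigning src (re)loads the clip and resets playback to the start
    ch.play();
}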
I have done this before without the HTML5 <audio> element, using a little Flash object from http://flash-mp3-player.net/ - I wrote a music quiz (http://webdeavour.appspot.com/) and used it to play clips of music when the user clicked the button for the question. Initially I had one player per question, and it was possible to play them over the top of each other, so I changed it so there was only one player, which I pointed at different music clips.
Have a look at the jai (-> mirror) (javascript audio interface) site. From looking at their source, they appear to be calling play() repeatedly, and they mention that their library might be appropriate for use in HTML5-based games.
You can fire multiple audio events
simultaneously, which could be used
for creating Javascript games, or
having a voice speaking over some
background music
Here's an idea. Load all of your audio for a certain class of sounds into a single audio element whose src data is all of your samples in one contiguous audio file (you probably want some silence between them, so you can catch and cut the samples with a timeout with less risk of bleeding into the next sample). Then, seek to the sample and play it when needed.
If you need more than one of these to play, you can create an additional audio element with the same src so that it is cached. Now you effectively have multiple "tracks". You can utilize groups of tracks with your favorite resource-allocation scheme, like round robin, etc.
You could also specify other options like queuing sounds into a track to play when that resource becomes available or cutting a currently playing sample.
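Roughly, the seek-and-cut part could look like this (file name, offsets and durations are made up for illustration, and it assumes the sprite's metadata has already loaded):

// Sketch of an "audio sprite": all samples live in one file, and each play
// seeks to the sample's offset and pauses after its duration.
const sprite = new Audio('effects.mp3');
const samples = {
    boom:  { start: 0.0, length: 0.8 },
    laser: { start: 1.5, length: 0.4 }
};

function playSample(name) {
    const s = samples[name];
    sprite.currentTime = s.start;
    sprite.play();
    setTimeout(() => sprite.pause(), s.length * 1000);
}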
http://robert.ocallahan.org/2011/11/latency-of-html5-sounds.html
http://people.mozilla.org/~roc/audio-latency-repeating.html
Works OK in Firefox and Chrome for me.
To stop a sound that you started, do
var sound = document.getElementById("shot").cloneNode(true);
sound.play();
and later
sound.pause();
I would recommend using SoundJS, a library I've helped develop. It allows you to write a single code base that works everywhere, with SoundJS picking web audio, HTML audio, or Flash audio as appropriate.
It will allow you to do all of the things you want:
Play and mix multiple sounds,
Play the same sample multiple times, possibly overlapping playbacks
Interrupt playback of a sample at any point
Play WAV files (depending on browser support)
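A rough usage sketch (file name assumed; check the SoundJS docs for the current API):

// Sketch: register a sound, then play it; each play() call returns a new
// instance, so overlapping playback works.
createjs.Sound.registerSound("snd/boom.wav", "boom");
createjs.Sound.on("fileload", function () {
    var instance = createjs.Sound.play("boom");
    // instance.stop(); // interrupt playback at any point
});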
Hope that helps.
It's not possible to do multi-shot playing with a single <audio> element. You need to use multiple elements for this.
I ran into this while programming a musicbox card generator. I started with different libraries, but every time there was a glitch somehow. The lag on the normal audio implementation was bad, no multiple plays... eventually I ended up using the lowlag library + SoundManager:
http://lowlag.alienbill.com/
and
http://www.schillmania.com/projects/soundmanager2/
You can check out the implementation here:
http://musicbox.grit.it/
I generated wav + ogg files for multi-browser playback. This musicbox player works responsively on iPad, iPhone, Nexus, Mac, PC,... works for me.
var AudioContextFunc = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContextFunc();
var player=new WebAudioFontPlayer();
var instrumVox,instrumApplause;
var drumClap,drumLowTom,drumHighTom,drumSnare,drumKick,drumCrash;
loadDrum(21,function(s){drumClap=s;});
loadDrum(30,function(s){drumLowTom=s;});
loadDrum(50,function(s){drumHighTom=s;});
loadDrum(15,function(s){drumSnare=s;});
loadDrum(5,function(s){drumKick=s;});
loadDrum(70,function(s){drumCrash=s;});
loadInstrument(594,function(s){instrumVox=s;});
loadInstrument(1367,function(s){instrumApplause=s;});
function loadDrum(n,callback){
var info=player.loader.drumInfo(n);
player.loader.startLoad(audioContext, info.url, info.variable);
player.loader.waitLoad(function () {callback(window[info.variable])});
}
function loadInstrument(n,callback){
var info=player.loader.instrumentInfo(n);
player.loader.startLoad(audioContext, info.url, info.variable);
player.loader.waitLoad(function () {callback(window[info.variable])});
}
function uhoh(){
var when=audioContext.currentTime;
var b=0.1;
player.queueWaveTable(audioContext, audioContext.destination, instrumVox, when+b*0, 60, b*1);
player.queueWaveTable(audioContext, audioContext.destination, instrumVox, when+b*3, 56, b*4);
}
function applause(){
player.queueWaveTable(audioContext, audioContext.destination, instrumApplause, audioContext.currentTime, 54, 3);
}
function badumtss(){
var when=audioContext.currentTime;
var b=0.11;
player.queueWaveTable(audioContext, audioContext.destination, drumSnare, when+b*0, drumSnare.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumLowTom, when+b*0, drumLowTom.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumSnare, when+b*1, drumSnare.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumHighTom, when+b*1, drumHighTom.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumSnare, when+b*3, drumSnare.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumKick, when+b*3, drumKick.zones[0].keyRangeLow, 3.5);
player.queueWaveTable(audioContext, audioContext.destination, drumCrash, when+b*3, drumCrash.zones[0].keyRangeLow, 3.5);
}
<script src='https://surikov.github.io/webaudiofont/npm/dist/WebAudioFontPlayer.js'></script>
<button onclick='badumtss();'>badumtss</button>
<button onclick='uhoh();'>uhoh</button>
<button onclick='applause();'>applause</button>
<br/><a href='https://github.com/surikov/webaudiofont'>More sounds</a>
I know this is a total hack, but I thought I should add this sample open source audio library I put on GitHub a while ago...
https://github.com/run-time/jThump
After clicking the link below, type on the home row keys to play a blues riff (also type multiple keys at the same time etc.)
Sample using jThump library >> http://davealger.com/apps/jthump/
It basically works by making invisible <iframe> elements that load a page that plays a sound onReady().
This is certainly not ideal, but you could +1 this solution based on creativity alone (and the fact that it is open source and works in any browser that I've tried it on). I hope this gives someone else searching some ideas, at least.
:)
You can always try AudioContext. It has limited support, but it's a part of the Web Audio API working draft. It might be worth it if you are planning to release something in the future. And if you are only programming for Chrome and Firefox, you're golden.
The Web Audio API is the right tool for this job. There is a little bit of work involved in loading sound files and playing them. Luckily there are plenty of libraries out there that simplify the job. Being interested in sounds, I also created a library called musquito; you can check that out as well.
Currently it supports only the fading sound effect, and I'm working on other things like 3D spatialization.
The selected answer will work in everything except IE. I wrote a tutorial on how to make it work cross browser. Here is the function I wrote:
function playSomeSounds(soundPath) {
    var trident = !!navigator.userAgent.match(/Trident\/7.0/);
    var net = !!navigator.userAgent.match(/.NET4.0E/);
    var IE11 = trident && net;
    var IEold = (navigator.userAgent.match(/MSIE/i) ? true : false);
    if (IE11 || IEold) {
        document.all.sound.src = soundPath;
    } else {
        var snd = new Audio(soundPath); // buffers automatically when created
        snd.play();
    }
}
You also need to add the following tag to the html page:
<bgsound id="sound">
Finally you can call the function and simply pass through the path here:
playSomeSounds("sounds/welcome.wav");
