How can one synthesize speech in a web app?
Is there a way using the HTML5 Web Speech API?
For example, if I wanted to synthesize the sentence 'A quick brown fox jumps over the lazy dog', how could I do that without playing a pre-recorded file of someone reading exactly that sentence?
As it is right now, the SpeechSynthesis features which are part of the Web Speech API Specification have not been implemented in any browser yet.
However, you can have a look at this Chrome extension.
EDIT:
It seems that the latest Chrome Canary build might include the feature; however, it only indicates that work on the feature has started (http://www.chromestatus.com/features), and I was unable to find any more substantive information about it.
EDIT2:
As mentioned in the comments by @cdf, it seems that you can now play around with that feature by launching Chrome with the --enable-speech-synthesis flag. Please see this post.
EDIT3:
This appears to be in WebKit now, but not on iOS at the moment, not even in Chrome on iOS. Demo by @BrandonAaskov.
The following code works on both Chrome and Safari (taken from my app, ttsreader):
if (!('speechSynthesis' in window)) {
    // Speech synthesis is not supported; warn the user and do nothing else.
    alert('Speech Synthesis is not supported by your browser. Switch to Chrome or Safari.');
} else {
    var msg = new SpeechSynthesisUtterance();
    msg.volume = 1;   // 0 to 1
    msg.rate = 0.9;   // 0.1 to 10
    msg.pitch = 1;    // 0 to 2
    msg.lang = 'en-GB';
    msg.text = 'I will speak this out';
    speechSynthesis.speak(msg);
}
I want to use navigator.vibrate on my page.
This is my code:
var canVibrate = "vibrate" in navigator || "mozVibrate" in navigator;
if (canVibrate && !("vibrate" in navigator))
{
navigator.vibrate = navigator.mozVibrate;
}
$(document).on('click', '.answer', function (eve) {
    var $this = $(this);
    navigator.vibrate(222);
    // some other code ...
});
This works on Android devices, but on iOS (I tested Firefox, Chrome and Safari on several iOS devices) the code breaks at the navigator.vibrate(222) call.
Why is that?
Apple's mobile web browser simply does not have support for it.
Firefox and Chrome for iOS are wrappers around Safari's rendering engine.
Quentin is correct that Apple devices do not support the API.
The given code fails to check for vibration support at the point where the method is actually called. To avoid calling vibrate when it is undefined:
const canVibrate = window.navigator.vibrate;
if (canVibrate) window.navigator.vibrate(100);
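For instance, applying the same guard inside the click handler from the question might look like this (a sketch; the '.answer' selector and the 222 ms duration come from the question above):
$(document).on('click', '.answer', function () {
    // Only call vibrate where the browser actually implements it.
    if (window.navigator.vibrate) {
        window.navigator.vibrate(222);
    }
    // some other code ...
});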
We don't want our app to break and become unusable on iOS devices, but we really do want to use navigator.vibrate() on Android and wherever else it is available.
One thing you can do is create your own policy on top of the browsers' policies. Ask: "Can we make iOS devices ignore navigator.vibrate()?"
The answer is: "Yes, you can, by using a user-agent parser" (such as Faisal Salman's UAParser) to detect whether the user's device runs iOS or Mac OS.
In your code, wrap all the navigator.vibrate() calls inside conditions like:
if (nameOfUsersOS != "iOS" && nameOfUsersOS != "Mac OS") { navigator.vibrate(); }
Note: Replace nameOfUsersOS with your own variable name.
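For example, a rough sketch using UAParser.js (this assumes the library is already loaded; the variable and function names below are illustrative, not part of the original answer):
// Detect the OS with UAParser.js and skip vibration on Apple platforms.
var parser = new UAParser();
var osName = parser.getOS().name;   // e.g. "iOS", "Mac OS", "Android"

function vibrateIfAllowed(pattern) {
    // Also guard against browsers that simply lack the API.
    if (osName !== 'iOS' && osName !== 'Mac OS' && 'vibrate' in navigator) {
        navigator.vibrate(pattern);
    }
}

vibrateIfAllowed(222);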
Note: This is only one possible approach. Apple's policy makers can and sometimes do change their minds; in the future they could allow the Vibration API, just as they recently allowed the Web Speech API. So use the solution in kotavy's answer unless your policy really is "no vibration for Apple users, forever".
I'm following this link, which shows how to use speech recognition in the Bot Framework.
The default code works with Option 2:
// // Option 2: Native browser speech (not supported by all browsers, no speech recognition priming support)
//
// Note that Chrome automatically blocks speech if the HTML file is loaded from disk. You can run a server locally
// or launch Chrome (close all the existing Chrome browsers) with the following option:
// chrome.exe --allow-file-access-from-files <sampleHtmlFile>
//
const speechOptions = {
speechRecognizer: new BotChat.Speech.BrowserSpeechRecognizer(),
speechSynthesizer: new BotChat.Speech.BrowserSpeechSynthesizer()
};
But when I try to use Cognitive Services it doesn't work; the mic never goes into listening mode.
This is the change I made:
// // Option 3: Cognitive Services speech recognition using API key (cross browser, speech priming support)
const speechOptions = {
speechRecognizer: new CognitiveServices.SpeechRecognizer({ subscriptionKey: 'YOUR_COGNITIVE_SPEECH_API_KEY' }),
speechSynthesizer: new CognitiveServices.SpeechSynthesizer({
gender: CognitiveServices.SynthesisGender.Female,
subscriptionKey: 'YOUR_COGNITIVE_SPEECH_API_KEY',
voiceName: 'Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)'
})
};
Apart from commenting and uncommenting, I didn't change anything, but the code still only works with Option 2.
Please help me solve this.
After some deep digging by my colleague, we found the issue.
The original code is using the javascript from https://cdn.botframework.com/botframework-webchat/latest/CognitiveServices.js
<div id="BotChatGoesHere"></div>
<!-- If you do not want to use Cognitive Services library, comment out the following line -->
<script src="https://cdn.botframework.com/botframework-webchat/latest/CognitiveServices.js"></script>
If you open that JS file, you will find a line like the one below, where it uses the Bing Speech URL:
Storage.Local.GetOrAdd("Host","wss://speech.platform.bing.com")}
Since Bing Speech is deprecated, we have to update this line to point to our own subscription's regional endpoint:
Storage.Local.GetOrAdd("Host","wss://<region>.stt.speech.microsoft.com")}
Once we updated it, everything works fine now.
I am developing an application that requires text-to-speech in the web browser, and I am using HTML5 Speech Synthesis for it. On Google Chrome the code runs fine, with all the available voices being listed by getVoices(), but in Firefox no voice is listed at all. I am testing my code on Firefox 56.0 (Ubuntu).
On searching the internet, I came across a StackOverflow answer suggesting that getVoices() should be called after the onvoiceschanged event fires:
window.speechSynthesis.onvoiceschanged = function() {
window.speechSynthesis.getVoices();
...
};
I am invoking the call in the above-mentioned manner and it works as desired in Chrome, but not in Firefox.
Another StackOverflow answer suggested that I enable media.webspeech.synth.enabled in about:config of Firefox, but in my Firefox that preference is already set to true.
I checked the MDN documentation https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis/getVoices and the example on that page does not run for me in Firefox, but runs fine in Chrome. CanIUse.com lists SpeechSynthesis as supported in Firefox 55 onwards, but it does not work for me.
Also, the Mozilla Developer Network demo of Speech Synthesis fails to work in my Firefox, but runs fine in Google Chrome. I have searched extensively online for a solution but could not find one. Can someone please point me in the right direction?
I ran into the same issue, and here's what I figured out.
It doesn't work for me in Firefox on Ubuntu 16.04.
In a VirtualBox Windows guest it works, and the voices come from Windows; "Microsoft David" is one of the choices.
Chrome works on Ubuntu, but only when it is online. It doesn't show any network traffic in the console, yet the voice only works while online!
For anyone else still struggling with this, this is what fixed it for me. Assuming speech-dispatcher and espeak are already installed, the issue may be that there are multiple speech-dispatcher output modules installed and the default one has no voices.
For example, on my system,
# spd-say -O lists output modules
$ spd-say -O
OUTPUT MODULES
espeak-ng-mbrola
espeak-ng
# spd-say -L lists all voices associated with the current
# output module
$ spd-say -L
NAME LANGUAGE VARIANT
$
Notice that spd-say -L outputs an empty table, even though spd-say "Hello world" works. I had no voices installed for the default speech-dispatcher module.
The second module in the list, espeak-ng, does have voices installed. Running spd-say -o espeak-ng -L produces a long table (-o selects a specific module).
Firefox seems to only query the default output module. This AskUbuntu post explains how to change the default output module.
Installing another output module, however, fixed the issue for me (I had trouble changing the default output module via /etc/speech-dispatcher/speechd.conf).
In short,
$ sudo apt install speech-dispatcher-pico
fixed the issue.
I'm running Firefox 58.0.2 (64-bit) under Windows 7 64-bit Pro. The demo you mention does list one voice for me: Microsoft Anna - English (United States) (en-US). This voice is furnished by my Windows OS, not Firefox (Chrome lists 19 additional voices, which ship with Chrome).
The reason your code works in Chrome but not Firefox is that Firefox doesn't invoke speechSynthesis.onvoiceschanged and Chrome does.
Why? Here is Mozilla's description of the voiceschanged event:
The voiceschanged event of the Web Speech API is fired when the list of SpeechSynthesisVoice objects that would be returned by the SpeechSynthesis.getVoices() method has changed (when the voiceschanged event fires).
Just a guess, but possibly the reason for the difference is that Chrome fires the event after it "adds" voices to your page. Firefox doesn't, so it doesn't.
The aforementioned demo gets around this incompatibility by calling populateVoiceList() right before the conditional code that triggers it (fourth line from the bottom below; taken from here, CC0 licensed):
function populateVoiceList() {
voices = synth.getVoices();
var selectedIndex = voiceSelect.selectedIndex < 0 ? 0 : voiceSelect.selectedIndex;
voiceSelect.innerHTML = '';
for (var i = 0; i < voices.length; i++) {
var option = document.createElement('option');
option.textContent = voices[i].name + ' (' + voices[i].lang + ')';
if(voices[i].default) {
option.textContent += ' -- DEFAULT';
}
option.setAttribute('data-lang', voices[i].lang);
option.setAttribute('data-name', voices[i].name);
voiceSelect.appendChild(option);
}
voiceSelect.selectedIndex = selectedIndex;
}
populateVoiceList();
if (speechSynthesis.onvoiceschanged !== undefined) {
speechSynthesis.onvoiceschanged = populateVoiceList;
}
This is the approach I've adopted for my web application; otherwise my list of voices never gets populated by Firefox. This code also happens to address a similar problem with Safari; see voiceschanged event not fired in Safari.
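For completeness, a rough usage sketch of how the populated list might then be used to speak (this is not part of the quoted demo code; it assumes the same synth and voiceSelect variables and mirrors the demo's match-by-name approach):
function speakWithSelectedVoice(text) {
    var utterance = new SpeechSynthesisUtterance(text);
    var selectedName = voiceSelect.selectedOptions[0].getAttribute('data-name');
    var voices = synth.getVoices();
    // Pick the voice whose name matches the selected <option>.
    for (var i = 0; i < voices.length; i++) {
        if (voices[i].name === selectedName) {
            utterance.voice = voices[i];
            break;
        }
    }
    synth.speak(utterance);
}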
The original bug seems to indicate that you need speechd (speech-dispatcher) installed; see https://bugzilla.mozilla.org/show_bug.cgi?id=1003464.
I had a similar issue and solved it by using setTimeout. This is probably not the perfect solution, but it works for me; give it a try.
// Give the voice list a moment to load before querying it.
setTimeout(function () {
    var voices = window.speechSynthesis.getVoices();
    // ... use the voices here
}, 1000);
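A slightly more robust variant of the same idea (just a sketch; the 100 ms polling interval and the 3-second deadline are arbitrary choices, not part of the answer above) keeps retrying until getVoices() returns a non-empty list or gives up:
function loadVoices(timeoutMs) {
    return new Promise(function (resolve, reject) {
        var deadline = Date.now() + (timeoutMs || 3000);
        (function poll() {
            var voices = window.speechSynthesis.getVoices();
            if (voices.length > 0) {
                resolve(voices);
            } else if (Date.now() > deadline) {
                reject(new Error('No speech synthesis voices available'));
            } else {
                setTimeout(poll, 100);   // try again shortly
            }
        })();
    });
}

loadVoices().then(function (voices) {
    console.log('Loaded ' + voices.length + ' voices');
});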
We're using the webkitSpeechRecognition API in Chrome. Since this is a prototype application, we're quite happy to support only Chrome, so we detect support for the API by doing a window.hasOwnProperty('webkitSpeechRecognition') check (as suggested by Google). This happily fails in Firefox, but the new Opera (being webkit-based) reports it does have the property. And, indeed, all code runs as intended, except... none of the events are ever fired, no voice is ever recorded.
So, my question is: can I make it work somehow? Does it require some special permissions or settings?
Alternatively, is there a way (aside good old browser-sniffing) to detect proper, working support for the webkitSpeechRecognition?
Right now only Google Chrome has an API for streaming speech recognition (it is backed by the Google Speech API).
If you use https://www.google.com/intl/en/chrome/demos/speech.html in Opera, it will tell you that you need Chrome 25+ to do this.
According to http://caniuse.com/#feat=speech-recognition, Opera's WebKit build nominally supports this functionality, but right now it does not work: Opera has no backend service that would transcribe the audio on the fly. At the moment there are only placeholder functions in the browser; maybe they will implement it in the future, but right now it does not work.
EDIT:
Example of how Google determines whether it is supported, compared with your check:
// checking by google
if (!('webkitSpeechRecognition' in window)) {
console.log('GOOGLE: not working on this browser');
} else {
console.log('GOOGLE: working');
}
//your way
if (window.hasOwnProperty('webkitSpeechRecognition')) {
console.log('YOUR: working');
} else {
console.log('YOUR: not working on this browser');
}
The following Google sample uses a timestamp to detect that Opera did not trigger the start event: https://www.google.com/intl/en/chrome/demos/speech.html
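A rough sketch of that idea (this is not the demo's actual code; the function name, the 3-second timeout, and the callback shape are all illustrative, and it assumes the user grants microphone access):
function probeSpeechRecognition(callback) {
    if (!('webkitSpeechRecognition' in window)) {
        callback(false);
        return;
    }
    var recognition = new webkitSpeechRecognition();
    var settled = false;
    function settle(works) {
        if (!settled) { settled = true; callback(works); }
    }
    // If the engine is real, a 'start' event arrives shortly after start().
    recognition.onstart = function () {
        recognition.abort();   // the 'start' event is all we needed to see
        settle(true);
    };
    recognition.onerror = function () { settle(false); };
    recognition.start();
    // No 'start' event within the grace period: treat recognition as non-functional.
    setTimeout(function () { settle(false); }, 3000);
}

probeSpeechRecognition(function (works) {
    console.log(works ? 'Speech recognition appears functional'
                      : 'Speech recognition is not usable in this browser');
});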
I'm using a simple implementation of the Web Speech API, specifically Speech Synthesis.
It crashes Chrome after the speech is spoken.
The page is being hosted on Google Drive.
Why does it crash Chrome to the "Aw, Snap" page? And how can I prevent or stop it?
Here is the URL: https://googledrive.com/host/0BwJVaMrY8QdcT0pIdGRmcjB5NkE/index.html
Here is the JavaScript:
function speak(whatToSay) {
    if ('speechSynthesis' in window) {
        var speech = new SpeechSynthesisUtterance(whatToSay);
        window.speechSynthesis.speak(speech);
    }
}

speak('Hello, World!');
I'm having the same issue, so I have reported the bug using the URL you posted.
You can track it here: https://code.google.com/p/chromium/issues/detail?id=360370