I'm using a simple implementation of the Web Speech API, specifically speech synthesis.
After the speech finishes, Chrome crashes to the "Aw, Snap!" page.
The page is hosted on Google Drive.
Why does Chrome crash, and how can I prevent or stop it?
Here is the URL: https://googledrive.com/host/0BwJVaMrY8QdcT0pIdGRmcjB5NkE/index.html
Here is the JavaScript:
function speak(whatToSay) {
    if ('speechSynthesis' in window) {
        var speech = new SpeechSynthesisUtterance(whatToSay);
        window.speechSynthesis.speak(speech);
    }
}
speak('Hello, World!');
I'm having the same issue, so I have reported the bug using the URL you posted.
You can track it here: https://code.google.com/p/chromium/issues/detail?id=360370
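Until the bug is fixed, one mitigation that has helped with similar Chrome speechSynthesis instabilities is keeping a reference to the utterance so it cannot be garbage-collected while speaking. This is a sketch, not a confirmed fix for this particular crash:

```javascript
// Sketch: hold a reference to each utterance until it finishes, so
// Chrome cannot garbage-collect it mid-speech (a known source of
// speechSynthesis glitches).
var pendingUtterances = [];

function speak(whatToSay) {
  if (!('speechSynthesis' in window)) return;
  var speech = new SpeechSynthesisUtterance(whatToSay);
  pendingUtterances.push(speech);
  speech.onend = function () {
    pendingUtterances.splice(pendingUtterances.indexOf(speech), 1);
  };
  window.speechSynthesis.speak(speech);
}
```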
I'm following this link, which shows how to use speech recognition in the Bot Framework.
The default code works with Option 2:
// // Option 2: Native browser speech (not supported by all browsers, no speech recognition priming support)
//
// Note that Chrome automatically blocks speech if the HTML file is loaded from disk. You can run a server locally
// or launch Chrome (close all the existing Chrome browsers) with the following option:
// chrome.exe --allow-file-access-from-files <sampleHtmlFile>
//
const speechOptions = {
speechRecognizer: new BotChat.Speech.BrowserSpeechRecognizer(),
speechSynthesizer: new BotChat.Speech.BrowserSpeechSynthesizer()
};
But when I try to use Cognitive Services it doesn't work: the mic never goes into listening mode.
This is the change I made:
// // Option 3: Cognitive Services speech recognition using API key (cross browser, speech priming support)
const speechOptions = {
speechRecognizer: new CognitiveServices.SpeechRecognizer({ subscriptionKey: 'YOUR_COGNITIVE_SPEECH_API_KEY' }),
speechSynthesizer: new CognitiveServices.SpeechSynthesizer({
gender: CognitiveServices.SynthesisGender.Female,
subscriptionKey: 'YOUR_COGNITIVE_SPEECH_API_KEY',
voiceName: 'Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)'
})
};
Apart from the commenting and uncommenting, I didn't change anything, but the code still only works with Option 2.
Please help me solve this.
After some deep digging by my colleague, we found the issue.
The original code loads its JavaScript from https://cdn.botframework.com/botframework-webchat/latest/CognitiveServices.js:
<div id="BotChatGoesHere"></div>
<!-- If you do not want to use Cognitive Services library, comment out the following line -->
<script src="https://cdn.botframework.com/botframework-webchat/latest/CognitiveServices.js"></script>
If you open that JS file, you can find a line like the one below, where it uses the Bing Speech URL:
Storage.Local.GetOrAdd("Host","wss://speech.platform.bing.com")}
Since Bing Speech is deprecated, we have to update this line to use our own subscription's regional endpoint:
Storage.Local.GetOrAdd("Host","wss://<region>.stt.speech.microsoft.com")}
Once we updated it, everything worked fine.
I have an error with the iAd browser that does not allow YouTube videos to play. Apple's workaround is to implement a JavaScript call on all YouTube-link button taps that launches the link in the Safari web browser instead of the in-app browser, using window.location in the JS.
Code implemented:
window.location = "http://…";
But it still does not launch in the Safari web browser.
Main Code:
this.onViewActivate = function (event) {
window.location = "https://www.youtube.com/watch?v=eM8Wpq-oD5o&index=3&list=PLbh4x6U6o9F6sLbGWaAtgdvEmceJLYgwO";
};
Your code should use document.location instead:
document.location = 'http://...';
because iAd Producer content runs inside a UIWebView. This will prompt the user and let them know they are about to leave the application.
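Applying that advice to the handler from the question gives something like the following. It is shown on a plain object here so the sketch is self-contained; in the real iAd Producer project it would stay as this.onViewActivate:

```javascript
var view = {
  // Inside a UIWebView (as used by iAd Producer), assigning to
  // document.location prompts the user before leaving the app and
  // then opens the link in Safari.
  onViewActivate: function (event) {
    document.location = "https://www.youtube.com/watch?v=eM8Wpq-oD5o&index=3&list=PLbh4x6U6o9F6sLbGWaAtgdvEmceJLYgwO";
  }
};
```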
How can one synthesize speech in a web app?
Is there a way using the HTML5 Web Speech API?
For example, if I wanted to synthesize the sentence 'A quick brown fox jumps over the lazy dog', how could I do that without playing a pre-recorded file of someone reading exactly that sentence
As it is right now, the SpeechSynthesis features which are part of the Web Speech API Specification have not been implemented in any browser yet.
However, you can have a look at this Chrome extension.
EDIT:
It seems that the latest Chrome Canary build might include the feature; however, chromestatus.com only indicates that work on it has started (http://www.chromestatus.com/features), and I was unable to find any more substantive information about it.
EDIT2:
As mentioned in the comments by @cdf, it seems that you can now play around with that feature by launching Chrome with the --enable-speech-synthesis flag. Please see this post.
EDIT3:
This appears to be in WebKit now, but not on iOS at the moment, not even in Chrome on iOS. Demo by @BrandonAaskov.
This code works on both Chrome and Safari (taken from my app, ttsreader):
if (!('speechSynthesis' in window)) {
    // Synthesis not supported.
    alert('Speech Synthesis is not supported by your browser. Switch to Chrome or Safari');
}
var msg = new SpeechSynthesisUtterance('hello world');
msg.volume = 1; // 0 to 1
msg.rate = 0.9; // 0.1 to 10
msg.pitch = 1; //0 to 2
msg.lang = "en-GB";
msg.text = "I will speak this out";
speechSynthesis.speak(msg);
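One gotcha worth noting alongside the example above: in Chrome the voice list loads asynchronously, so getVoices() can return an empty array at first. A sketch of picking a voice by name (the helper name is mine, and actual voice names vary by browser):

```javascript
// Look a voice up by name at speak time; if the list is still empty,
// hook speechSynthesis.onvoiceschanged and call this again from there.
function speakWithVoice(text, voiceName) {
  var msg = new SpeechSynthesisUtterance(text);
  var voices = window.speechSynthesis.getVoices();
  var match = voices.filter(function (v) { return v.name === voiceName; })[0];
  if (match) {
    msg.voice = match; // otherwise the browser's default voice is used
  }
  window.speechSynthesis.speak(msg);
  return msg;
}
```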
I am trying to write a simple Firefox mobile add-on that talks to my server-side code over WebSocket.
I have this working as a desktop Firefox add-on, but I am having difficulty with the mobile version.
function connectToServer(aWindow) {
    var ws = new MozWebSocket("ws://ipaddress:8887"); // LINE 20
    // var ws = new WebSocket("ws://ipaddress:8887");
    ws.onopen = function() {
        showToastMsg(aWindow, 'Sending');
        ws.send('data');
    };
    ws.onmessage = function(evt) {
        showToastMsg(aWindow, 'Display');
    };
    ws.onclose = function() {
    };
}
I have tried both MozWebSocket and WebSocket, but both give me an error similar to the following:
E/GeckoConsole(15569): [JavaScript Error: "ReferenceError: MozWebSocket is not defined" {file: "resource://gre/modules/XPIProvider.jsm -> jar:file:///data/data/org.mozilla.firefox/files/mozilla/sq4c77hi.default/extensions/view-source#mydomain.org.xpi!/bootstrap.js" line: 20}]
Anyone know what I need to import or do to be able to reference WebSocket?
I just want to send data back and forth from my Firefox Android addon with my server side code using websocket. Any suggestions?
I am just confused because I have this same setup running as a desktop Firefox add-on with very similar code.
Any help would be very appreciated thank you!
Try the following (Services.jsm must be imported first):
Components.utils.import("resource://gre/modules/Services.jsm");
var ws = new Services.appShell.hiddenDOMWindow.WebSocket("ws://ipaddress:8887");
Are you using the Add-on SDK? Which file is this code going into?
First off, Mozilla un-prefixed MozWebSocket to WebSocket some time ago:
https://www.evernote.com/shard/s1/sh/59230d89-52f6-4f23-81de-75ab88f38c22/f9f1c0c64959ee44bdc833707efe10a9
...however, the WebSocket API is only available in actual web documents. What I've done in the past is use the page-worker API to load a document in the background and connect to a WebSocket server from the worker page:
https://github.com/canuckistani/Jetpack-Websocket-Example
For more on the page-worker API:
https://addons.mozilla.org/en-US/developers/docs/sdk/latest/modules/sdk/page-worker.html
In the future, we have plans to expose HTML5 APIs more directly to add-on developers.
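To make the page-worker suggestion concrete, here is a sketch adapted to the question's code. The helper name and message plumbing are illustrative; sdk/page-worker, contentScript, and the port API are from the Add-on SDK docs linked above:

```javascript
// main.js (Add-on SDK): open the WebSocket inside a hidden page,
// where the WebSocket API exists, and relay messages out via port.
function connectViaPageWorker(Page, serverUrl) {
  var worker = Page({
    contentURL: "data:text/html,<html></html>",
    contentScriptWhen: "ready",
    contentScript:
      'var ws = new WebSocket("' + serverUrl + '");' +
      'ws.onopen = function () { ws.send("data"); };' +
      'ws.onmessage = function (evt) { self.port.emit("message", evt.data); };'
  });
  worker.port.on("message", function (data) {
    console.log("from server: " + data);
  });
  return worker;
}
// Usage:
// var Page = require("sdk/page-worker").Page;
// connectViaPageWorker(Page, "ws://ipaddress:8887");
```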
Doing this in Chrome:
<input id='speech-this' type='text' speech />
Creates an input tag with a little mic. Clicking the mic does voice recognition, like search on Android phones.
My question is: is it possible to do this without the <input> field? Ideally there would be a JavaScript object that does something like:
var what_i_said = chrome.Speech.listen();
Or something like that.
Thanks!
Opera supports http://www.w3.org/TR/xhtml+voice/ (see http://dev.opera.com/articles/voice/).
You could look at the WAMI toolkit. WAMI toolkit is an interesting project from MIT - http://wami.csail.mit.edu/. In their own words "WAMI: Web-Accessible Multimodal Applications. WAMI is a simple way to add speech recognition capabilities to any web page." WAMI gives you a java applet that can run in your web page to perform audio capture for speech recognition.
There is actually a way to do this in JavaScript: the Web Speech API. It lets you do voice recognition as well as speech synthesis.
Simplest example of Speech Synthesis:
var utterance = new SpeechSynthesisUtterance('Hello World');
window.speechSynthesis.speak(utterance);
Simplest example of Voice Recognition:
var recognition = new webkitSpeechRecognition();
recognition.onresult = function(event) {
    console.log(event);
};
recognition.start();
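In practice you usually want the recognized text rather than the raw event. A small sketch of pulling the transcript out of a result (the helper name is mine; webkitSpeechRecognition is Chrome's prefixed implementation, so this assumes Chrome):

```javascript
// Start recognition and hand the top transcript of the latest result
// to a callback.
function listenOnce(onText) {
  var recognition = new webkitSpeechRecognition();
  recognition.lang = 'en-US';
  recognition.onresult = function (event) {
    // Each result holds ranked alternatives; [0] is the most likely.
    var last = event.results[event.results.length - 1];
    onText(last[0].transcript);
  };
  recognition.start();
  return recognition;
}
```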