I am trying to make this code work and I don't know why it is not working locally. I tried the same on CodePen.io and it works there.
<html>
<head>
  <title>Voice API</title>
</head>
<body>
  <button onClick="func()">Click Me</button>
  <script>
    function func() {
      alert('Hello');
      var recognition = new webkitSpeechRecognition();
      recognition.continuous = true;
      recognition.interimResults = true;
      recognition.onresult = function(event) {
        alert(event.results[0][0].transcript);
      };
      recognition.start();
    }
  </script>
</body>
</html>
Any suggestions?
You could try adding the following snippet to see what error is being generated.
recognition.onerror = function(event) {
  console.log(event.error);
};
Chances are it's spitting out a 'not-allowed' error, which generally means that the user agent is not allowing any speech input to occur for reasons of security, privacy or user preference (as you're running it locally through a file:// URL).
Have you tried serving the page under a local web server such as IIS or Node?
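If Node is installed, a minimal static file server is enough for testing. This is only a sketch; the port (8080) and the index.html default are arbitrary choices for illustration, not part of the original answer:

// save as server.js, run with "node server.js", then open http://localhost:8080
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
  // serve index.html for "/", otherwise the requested file relative to this folder
  var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, function (err, data) {
    if (err) { res.writeHead(404); res.end('Not found'); return; }
    res.writeHead(200);
    res.end(data);
  });
}).listen(8080);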
A detailed discussion of why the camera (and microphone) are not working on localhost is here:
How to allow Chrome to access my camera on localhost?
In short, it is explicitly blocked.
Related
I'm trying to get Web NFC to work through the Web NFC API, but I can't get it past an error message of NotAllowedError: NFC permission request denied.
I'm using this on Chrome 89 Dev on a Windows 10 computer, and the source code is being run locally.
I have also tried the examples posted on the Internet, including the Google sample, but it returns the same error. I'm not concerned with it being experimental at this point, as referring to this does show it has successfully passed the necessary tests, including permissions.
The HTML/JS code I'm using is below. I've read point 9.3 of the specification, but I can't make sense of how to write it as code, so is there a guideline algorithm that would help resolve this?
async function readTag() {
  if ("NDEFReader" in window) {
    const reader = new NDEFReader();
    try {
      await reader.scan();
      reader.onreading = event => {
        const decoder = new TextDecoder();
        for (const record of event.message.records) {
          consoleLog("Record type: " + record.recordType);
          consoleLog("MIME type: " + record.mediaType);
          consoleLog("=== data ===\n" + decoder.decode(record.data));
        }
      };
    } catch(error) {
      consoleLog(error);
    }
  } else {
    consoleLog("Web NFC is not supported.");
  }
}

async function writeTag() {
  if ("NDEFWriter" in window) {
    const writer = new NDEFWriter();
    try {
      await writer.write("helloworld");
      consoleLog("NDEF message written!");
    } catch(error) {
      consoleLog(error);
    }
  } else {
    consoleLog("Web NFC is not supported.");
  }
}

function consoleLog(data) {
  var logElement = document.getElementById('log');
  logElement.innerHTML += data + '\n';
}
<!DOCTYPE html>
<html>
<head>
  <script src="webnfc.js"></script>
</head>
<body>
  <p>
    <button onclick="readTag()">Test NFC Read</button>
    <button onclick="writeTag()">Test NFC Write</button>
  </p>
  <pre id="log"></pre>
</body>
</html>
From https://web.dev/nfc/#security-and-permissions
Web NFC is only available to top-level frames and secure browsing contexts (HTTPS only). Origins must first request the "nfc" permission while handling a user gesture (e.g a button click). The NDEFReader scan() and write() methods trigger a user prompt, if access was not previously granted.
I guess you are running from a file:// URL, as you said "locally", which is not supported.
You need to host it from a local web server using an https:// URL.
Once in the right scope, trying to scan or write should trigger a user prompt.
You can also check permissions; see https://web.dev/nfc/#check-for-permission
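That permission check is roughly what the web.dev page shows; here is a small sketch, assuming the browser recognises the "nfc" permission name (currently Chrome on Android), and where scanIsAllowed() and askUserToTapButton() are hypothetical stand-ins for your own functions:

async function checkNfcPermission() {
  // the "nfc" permission name is only recognised where Web NFC is implemented
  const nfcPermission = await navigator.permissions.query({ name: "nfc" });
  if (nfcPermission.state === "granted") {
    scanIsAllowed();        // hypothetical helper: access granted earlier, scan() won't prompt again
  } else {
    askUserToTapButton();   // hypothetical helper: "prompt"/"denied" - call scan() from a user gesture
  }
}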
Update:
So I tried the sample page https://googlechrome.github.io/samples/web-nfc/
And this works for me on Android Chrome 87 with "Experimental Web Platform features" enabled
When you hit the scan button, a dialog asking for permission pops up.
Comparing the code in this sample to yours, I notice that it does:
ndef.addEventListener("reading" , ({ message, serialNumber }) => { ...
Whereas yours does:
ndef.onreading = event => { ...
I don't know if it is the style of setting what happens on the event that matters, or something else (hey, this is all experimental).
Update 2:
To answer the question from the comments about desktop support:
Some desktop/browser combinations should work at the moment, and maybe in the future there will be wider support once this is no longer an experimental standard. Obviously, as your test link suggests, Chrome on a Linux desktop should work, as this is really similar to the Android support: all the NFC device handling is done by libnfc, so the browser just has to know about this library instead of every type of USB or other device that can do NFC.
From what I've seen of NFC support on Windows, most of it is focused on directly controlling the NFC reader via USB as just another USB device; while there is a libnfc equivalent in the Windows.Networking.Proximity APIs, I've not come across any NFC reader that says it supports this, or anybody using it.
For the Mac desktop, given that Apple are behind the curve with NFC support in iOS, I feel their desktop support will be even further behind, even though it could be similar to Linux.
As you can read at https://web.dev/nfc/#browser-support, Web NFC is only supported on Android for now, which is why you get the "NotAllowedError: NFC permission request denied." error on Windows.
I'm trying to read NFC tags from Chrome 81 on Android with the following code:
<html>
<head>
  <title>NFC</title>
</head>
<body>
  <button onclick="reader()">Scan</button>
  <script>
    function reader() {
      const reader = new NDEFReader();
      reader.scan().then(() => {
        alert("Scan started successfully.");
        reader.onerror = () => {
          alert("Cannot read data from the NFC tag. Try another one?");
        };
        reader.onreading = event => {
          alert("NDEF message read.");
        };
      }).catch(error => {
        alert(`Error! Scan failed to start: ${error}.`);
      });
    }
  </script>
</body>
</html>
The problem I'm having with it is that it reads the entry from the NFC tag but doesn't give alerts like the code suggests; instead it tries to direct me to installed apps on my phone. However, when I use https://googlechrome.github.io/samples/web-nfc/, which uses the full API, it works and displays the data in the webpage. The main difference is that I'm enabling the NFC API via the chrome://flags method.
Beyond reading the tag, my only aim is to save the content to sessionStorage as a variable to be used by other parts of the website.
Thanks in advance
One difference between https://googlechrome.github.io/samples/web-nfc/ and your code that would matter is that this demo used to have an origin trial token in its web page.
For now, to experiment with Web NFC on Android, enable the #experimental-web-platform-features flag in chrome://flags as described in https://web.dev/nfc/#use
Hopefully this flag won't be required once it is shipped to the web platform.
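As an aside, for the goal mentioned in the question of saving the tag content to sessionStorage: a minimal sketch, assuming the tag carries a text-like payload (for other record types the decoding would differ), could go inside the onreading handler:

// Sketch: store the first record's payload so other pages in this tab can read it
reader.onreading = event => {
  const decoder = new TextDecoder();
  const record = event.message.records[0];    // assumes at least one record on the tag
  const text = decoder.decode(record.data);   // assumes a text/UTF-8 payload
  sessionStorage.setItem("nfcTagContent", text);
  alert("Saved to sessionStorage: " + text);
};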
I have this problem where in Firefox the speech gets cut off if the page is auto-refreshed, but in Google Chrome it finishes saying the speech even if the page is auto-refreshed. How do I fix it so that the speech doesn't get cut off in Firefox even when the page is auto-refreshed?
msg = new SpeechSynthesisUtterance("please finish saying this entire sentence.");
window.speechSynthesis.speak(msg);
(function ($) {
  'use strict';
  if (window == window.top) {
    var body = $('body').empty();
    var myframe = $('<iframe>')
      .attr({ src: location.href })
      .css({ height: '95vh', width: '100%' })
      .appendTo(body)
      .on('load', function () {
        var interval;
        interval = 750;
        setTimeout(function () {
          myframe.attr({ src: location.href });
        }, interval);
      });
  }
})(jQuery);
I have this problem where in Firefox the speech gets cut off if the page is auto-refreshed, but in Google Chrome it finishes saying the speech even if the page is auto-refreshed.
The described behaviour for Firefox is a sane, expected implementation.
Browsing the source code of Firefox and Chromium, the implementation of speechSynthesis.speak() is based on a socket connection with the local speech server. On *nix that server is usually speechd (speech-dispatcher). See How to programmatically send a unix socket command to a system server autospawned by browser or convert JavaScript to C++ souce code for Chromium? for a description of trying to implement SSML parsing at Chromium.
I eventually decided to write my own code to achieve that requirement using JavaScript, according to the W3C specification (SpeechSynthesisSSMLParser), after asking more than one question at SE sites, filing issues and bugs, and posting on W3C mailing lists, without any evidence that SSML parsing would ever be included as part of the Web Speech API.
Once that connection is initiated, a queue is created for calls to .speak(). Even when the connection is closed, Task Manager might still show the active process registered by the service.
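As an illustration of that queue from the page's point of view, a trivial sketch (nothing here depends on the underlying speech server):

// Sketch: successive speak() calls are queued and spoken in order
var first = new SpeechSynthesisUtterance("first sentence");
var second = new SpeechSynthesisUtterance("second sentence");
window.speechSynthesis.speak(first);   // starts speaking (subject to autoplay rules)
window.speechSynthesis.speak(second);  // queued until the first utterance ends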
The process at Chromium/Chrome is not without bugs; the closest issues I have filed to what is being described in the question are:
Issue 797624: "speak speak slash" is audio output of .speak() following two calls to .speak(), .pause() and .resume()
Why hasn't Issue 88072 and Issue 795371 been answered? Are Internals>SpeechSynthesis and Blink>Speech dead? (for possible reason why "but in google chrome it finishes saying the speech even if the page is auto-refreshed." is still possible at Chrome)
.volume property issues
Issue 797512: Setting SpeechSynthesisUtterance.volume does not change volume of audio output of speechSynthesis.speak() (Chromium/Chrome)
Bug 1426978 Setting SpeechSynthesisUtterance.volume does not change volume of audio output of speechSynthesis.speak() (Firefox)
The most egregious issue is the Chromium/Chrome webkitSpeechRecognition implementation, which records the user's audio and posts that audio data to a remote service, where a transcript is returned to the browser, without explicitly notifying the user that this is taking place; it is marked WONT FIX:
Issue 816095: Does webkitSpeechRecognition send recorded audio to a remote web service by default?
Relevant W3C Speech API issues at GitHub
The UA should be able to disallow speak() from autoplaying #27
Precisely define when speak() should fail due to autoplay rules #35 (ironically, relevant to the reported behaviour at Chromium/Chrome and output described at this question, see Web Audio, Autoplay Policy and Games and Autoplay Policy Changes)
Intent to Deprecate: speechSynthesis.speak without user activation
Summary
The SpeechSynthesis API is actively being abused on the web. We don't have hard data on abuse, but since other autoplay avenues are starting to be closed, abuse is anecdotally moving to the Web Speech API, which doesn't follow autoplay rules.
After deprecation, the plan is to cause speechSynthesis.speak to immediately fire an error if specific autoplay rules are not satisfied. This will align it with other audio APIs in Chrome.
Timing of SpeechSynthesis state changes not defined #39
Timing of SpeechSynthesisUtterance events firing not defined #40
Clarify what happens if two windows try to speak #47
In summary, I would not describe the behaviour at Firefox as a "problem"; it is the behaviour at Chrome that is the potential "problem".
Diving into the W3C Web Speech API implementation at browsers is not a trivial task, for several reasons, including the apparent focus on, or available option of, commercial TTS/STT services and the proprietary, closed-source implementations of speech synthesis and speech recognition in "smart phones", in lieu of fixing the various issues with the actual deployment of the W3C Web Speech API at modern browsers.
The maintainers of speechd (speech-dispatcher) are very helpful with regards to the server side (local speech-dispatcher socket).
I cannot speak for the Firefox maintainers. I would estimate it is unlikely that the feature request of continuing audio output from .speak() across a window reload is consistent with the recent autoplay policies implemented by browsers. Though you can still file a Firefox bug to ask whether audio output (from any API or interface) is expected to continue during a reload of the current window, and whether there are any preferences or policies which can be set to override the described behaviour, as suggested at the answer by #zip, and get the answer from the implementers themselves.
There are individuals and groups composing FOSS code who are active in the domain and willing to help STT/TTS development, many of whom are active at GitHub, which is another option for asking questions about how to implement what you are trying to achieve specifically at the Firefox browser.
Outside of asking the implementers for the feature request, you can read the source code and try to create one or more workarounds. Alternatives include using meSpeak.js, though that still does not necessarily address whether Firefox is intentionally blocking audio output during a reload of the window.
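A rough sketch of the meSpeak.js alternative, based on its documented loadConfig/loadVoice/speak calls; the file paths are placeholders to adjust to wherever the meSpeak assets are hosted:

// Sketch: client-side synthesis with meSpeak.js instead of speechSynthesis
// (paths below are placeholders for the config and voice files shipped with meSpeak)
meSpeak.loadConfig("mespeak_config.json");
meSpeak.loadVoice("voices/en/en.json");

function sayIt() {
  // meSpeak renders the audio in the page itself, so it is not tied to the
  // browser's speechSynthesis queue (ordinary audio autoplay rules still apply)
  meSpeak.speak("please finish saying this entire sentence.");
}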
Not sure why there's a difference in behavior... guest271314 might be on to something in his answer. However, you may be able to prevent Firefox from stopping the TTS by intercepting the reload event with an onbeforeunload handler and waiting for the utterance to finish:
msg = new SpeechSynthesisUtterance("say something");
window.speechSynthesis.speak(msg);
window.onbeforeunload = function(e) {
if(window.speechSynthesis.speaking){
event.preventDefault();
msg.addEventListener('end', function(event) {
//logic to continue unload here
});
}
};
EDITED: See the more elegant solution with promises below the initial answer!
The snippet below is a workaround for the browser inconsistencies found in Firefox: checking synth.speaking in the interval and only triggering a reload once it is false prevents the synth from cutting off prematurely.
(It does NOT work properly in the SO snippet, I assume it doesn't like iframes in iframes or whatever; just copy-paste the code into a file and open it with Firefox!)
<p>I'm in the body, but will be in an iFrame</p>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
  var synth = window.speechSynthesis;
  msg = new SpeechSynthesisUtterance("please finish saying this entire sentence.");
  synth.speak(msg);

  (function ($) {
    'use strict';
    if (window == window.top) {
      var body = $('body').empty();
      var myframe = $('<iframe>')
        .attr({ src: location.href })
        .css({ height: '95vh', width: '100%' })
        .appendTo(body)
        .on('load', function () {
          var interval;
          interval = setInterval(function () {
            if (!synth.speaking) {
              myframe.attr({ src: location.href });
              clearInterval(interval);
            }
          }, 750);
        });
    }
  })(jQuery);
</script>
A more elaborate solution could be to not have any setTimeout() or setInterval() at all, but to use promises instead. This way the page will simply reload whenever the message is done synthesizing, no matter how short or long it is. This will also prevent the "double"/overlapping speech on the initial page load. Not sure if this helps in your scenario, but here you go:
<button id="toggleSpeech">Stop Speaking!</button>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
if (window == window.top) {
window.speech = {
say: function(msg) {
return new Promise(function(resolve, reject) {
if (!SpeechSynthesisUtterance) {
reject('Web Speech API is not supported');
}
var utterance = new SpeechSynthesisUtterance(msg);
utterance.addEventListener('end', function() {
resolve();
});
utterance.addEventListener('error', function(event) {
reject('An error has occurred while speaking: ' + event.error);
});
window.speechSynthesis.speak(utterance);
});
},
speak: true,
};
}
(function($) {
'use strict';
if (window == window.top) {
var body = $('body').empty();
var myframe = $('<iframe>')
.attr({ src: location.href })
.css({ height: '95vh', width: '100%' })
.appendTo(body)
.on('load', function () {
var $iframe = $(this).contents();
$iframe.find('#toggleSpeech').on('click', function(e) {
console.log('speaking will stop when the last sentence is done...');
window.speech.speak = !window.speech.speak;
});
window.speech.say('please finish saying this entire sentence.')
.then(function() {
if ( window.speech.speak ) {
console.log('speaking done, reloading iframe!');
myframe.attr({ src: location.href });
}
});
});
}
})(jQuery);
</script>
NOTE: Chrome (since v70) does NOT allow the immediate calling of window.speechSynthesis.speak(new SpeechSynthesisUtterance(msg)) anymore; you will get the error speechSynthesis.speak() without user activation is no longer allowed..., more details here. So technically the user would have to activate the script in Chrome to make it work!
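In practice that means wiring the call to a user gesture; a minimal sketch (the speakBtn element is hypothetical, not from the question):

// Sketch: start speech from a click so Chrome's user-activation requirement is met
document.getElementById('speakBtn').addEventListener('click', function () {
  var msg = new SpeechSynthesisUtterance('please finish saying this entire sentence.');
  window.speechSynthesis.speak(msg);
});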
Firefox:
First of all, type "about:config" in the address bar. This will take you to a page with a warning prompt asking you to accept the risk; you need to accept that. Look for the preference named "accessibility.blockautorefresh" in the list and right-click on it. Some options will appear in a list on the screen; select the Toggle option and set it to true rather than false. This change will block auto refresh in the Firefox browser. Remember that this option is revertible!
I have a legacy application which is making use of an ActiveX control for serial communication. This is communicating correctly in IE.
Since ActiveX is an IE-specific technology, it obviously will not run in Chrome or other browsers.
So I don't want to disturb the IE functionality, and thought of introducing a browser-specific function as below.
$(document).ready(function(e) {
  var isIEBrowserFlag = true;
  if (isIEBrowserFlag) {
    var obj = new ActiveXObject("MSCommLib.MSComm");
    obj.CommPort = commPort;
    obj.RThreshold = thresHold;
    obj.Settings = settings;
    obj.PortOpen = true;
    obj.DTREnable = true;
    obj.Output = "test";
    obj.PortOpen = false;
    //other stuff
  } else {
    //chrome
  }
  //sendBagToPrinter(obj);
});
On googling I came to know about jQuery.parseXML(), but how do I implement the same functionality as in IE in Chrome using $.parseXML()?
Other similar plugins such as juart were tried, but they do not fit my requirement.
The chrome.serial API provides access to client serial ports, but only from a Chrome App.
My suggestion, for future support: use/write some kind of server for the serial device and connect from the browser to the server via JSON-type communication. See https://github.com/johnlauer/serial-port-json-server
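As a rough illustration of that approach, the sketch below talks to such a bridge over a WebSocket. The endpoint URL and the command strings are placeholders to adapt to whichever server you run (serial-port-json-server documents its own port and command format):

// Sketch: browser side of a serial <-> WebSocket bridge
// The endpoint and commands below are placeholders; check your bridge's docs.
var socket = new WebSocket("ws://localhost:8989/ws");
socket.onopen = function () {
  socket.send("list");             // placeholder: ask the bridge for available ports
  socket.send("open COM4 9600");   // placeholder: open a port at a given baud rate
  socket.send("send COM4 test");   // placeholder: write data to the open port
};
socket.onmessage = function (event) {
  console.log("from bridge: " + event.data);   // responses arrive as text/JSON
};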
I have written a very basic web worker, but it is not working. Any help would be appreciated. Thanks.
Here's my HTML code:
<!DOCTYPE html>
<html>
<head>
  <title>Basic Demo of Web Workers</title>
</head>
<body>
  <button type="button" onclick="start()">Start!</button>
  <button type="button" onclick="stop()">Stop!</button>
  <output id="counterShow"></output>
  <script>
    var myWorker;
    function start() {
      if (window.Worker) {
        myWorker = new Worker("http://yourjavascript.com/8257018521/basic-demo.js");
        myWorker.onmessage = function(event) {
          document.getElementById('counterShow').innerHTML = event.data;
        };
        myWorker.onerror = function(event) {
          alert(event.message, event);
        };
      } else {
        document.getElementsByTagName('BODY')[0].innerHTML = 'Sorry! Web workers are not supported.';
      }
    }
    function stop() {
      myWorker.terminate();
    }
  </script>
</body>
</html>
Here's the JS file that is hosted on a CDN (yourjavascript.com)
for (var i = 0; i < 100000; i++) {
  postMessage(i);
}
The web worker is silently failing. Please help.
Uncaught DOMException: Failed to construct 'Worker': Script at 'http://yourjavascript.com/8257018521/basic-demo.js' cannot be accessed from origin 'null'
I believe this is due to security reasons.
More info: Cross Domain Web Workers
This is 100% a CORS issue. You cannot load web workers the same way as normal scripts in <script> tags; it only works if the remote server allows other origins to load its files via AJAX. Not sure why, but that's the way it is. See this workaround:
https://stackoverflow.com/a/33432215/607407
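The linked workaround boils down to fetching the worker source yourself and constructing the worker from a Blob URL; here is a rough sketch (it still requires the remote server to allow your origin to fetch the script via CORS):

// Sketch: load a cross-origin worker script through a same-origin Blob URL
fetch('http://yourjavascript.com/8257018521/basic-demo.js')
  .then(function (response) { return response.text(); })
  .then(function (source) {
    var blobUrl = URL.createObjectURL(new Blob([source], { type: 'application/javascript' }));
    var myWorker = new Worker(blobUrl);   // Blob URLs are treated as same-origin
    myWorker.onmessage = function (event) {
      document.getElementById('counterShow').innerHTML = event.data;
    };
  });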
Well, I fixed it myself. I started a new project on CodePen and ran the code there, and it worked out pretty sweetly. I think the key requirement for Web Workers is to run them online, not locally.