I am trying to use the ACR122U NFC Reader with the WebHID API, but I cannot get it to be found by Chrome 91 Desktop on either Linux or Windows.
I know the browser is seeing the reader, because chrome://usb-internals shows me this screen:
The two pieces of code that I have tried so far are shown here, viewed through the Inspect Element tools in the script tag.
The VendorID and ProductID in the scripts match the ones Chrome reports for the device, so I'm not sure why it is not working.
The only popup dialog in which I have gotten the reader to appear is Chrome's NFC WebUSB one, but I cannot use that API because the device implements a protected class, which is why I am trying WebHID as an alternative in the hope that it may work.
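For reference, this is a minimal sketch of the WebHID approach I'm describing (the vendor/product IDs 0x072F / 0x2200 are my assumption of what the ACR122U reports; substitute whatever chrome://usb-internals shows for your device):

```js
// Minimal WebHID sketch: request the reader, then open it.
// The vendor/product IDs below are assumptions; use the values from chrome://usb-internals.
const ACR122U_FILTER = { vendorId: 0x072f, productId: 0x2200 };

async function connectReader() {
  // Must be called from a user gesture (e.g. a button click handler).
  const devices = await navigator.hid.requestDevice({ filters: [ACR122U_FILTER] });
  if (devices.length === 0) {
    console.log('No matching HID device was offered by the chooser.');
    return null;
  }
  const device = devices[0];
  await device.open();
  console.log('Opened', device.productName);
  return device;
}

document.querySelector('#connect').addEventListener('click', connectReader);
```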
How do I correct my code to allow Chrome to recognise this device; what am I missing?
I was able to control the ACR122U NFC Reader through WebUSB. See the library I've updated at https://github.com/beaufortfrancois/chrome-nfc
What is not working for you?
I think this won't work because the ACR122U NFC Reader doesn't implement the HID protocol. According to the product page, it uses the USB CCID protocol.
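That would also explain why it only shows up in the WebUSB chooser. A minimal WebUSB sketch would look something like the following (the vendor ID 0x072F is an assumption for ACS; and note that, as the question says, Chrome may refuse to claim the interface because the smart-card/CCID class is on its protected-class list):

```js
// Minimal WebUSB sketch for a CCID reader. Vendor ID 0x072F is an assumption (ACS).
// Chrome may block claiming the interface because the CCID class is protected.
async function connectUsbReader() {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x072f }],
  });
  await device.open();
  await device.selectConfiguration(1);
  // Claiming the interface will throw a SecurityError if the class is protected.
  await device.claimInterface(0);
  console.log('Claimed interface on', device.productName);
  return device;
}
```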
I am using Selenium + ChromeDriver to do webpage automation testing. The webpage includes JavaScript that uses the Web Speech API. The specified text can't be played when the browser is under ChromeDriver control on Linux. I found the following difference with and without ChromeDriver.
If I start the Chrome browser manually and type window.speechSynthesis.getVoices() in the console, it displays a list of supported SpeechSynthesisVoice entries, so the specified text can be played with the given voice.
But if I start the Chrome browser via Selenium + ChromeDriver, window.speechSynthesis.getVoices() in the web console shows me nothing, so the specified text can't be played.
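For reference, here is a small sketch of the check I'm describing; note that getVoices() can legitimately return an empty list right after page load until the voiceschanged event fires, so the sketch also listens for that event:

```js
// Sketch: log the available voices, waiting for the voiceschanged event
// because getVoices() may be empty immediately after page load.
function logVoices() {
  const voices = window.speechSynthesis.getVoices();
  console.log('Voices available:', voices.map(v => `${v.name} (${v.lang})`));
  return voices;
}

if (logVoices().length === 0) {
  window.speechSynthesis.addEventListener('voiceschanged', logVoices);
}
```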
I searched for this on Google but found nothing related. Does anybody have a similar issue?
Thanks.
It seems nobody had the same issue, but I found the solution with the help of the ChromeDriver team.
I'd like to share the link to the solution: https://groups.google.com/forum/#!topic/chromedriver-users/-ssKEYKu-dA
Is there a client-based JavaScript way to detect Eddystone-URL beacons directly from the Chrome browser on iOS?
I know Chrome has the widget for the Today view, which works fine, but I need to detect new Eddystones without pulling down the notifications window.
Say a user clicks on a link provided by the widget, gets redirected to the Chrome app, does stuff, walks around, and comes into range of another beacon.
Right now he would have to pull down the tab again to receive the new URI, but I need some sort of notification from within the browser.
I hope you get the idea.
Thanks in advance!
Cheers
p.
Unfortunately, this is not possible. Understand that Chrome for iOS is just a thin app around the standard native iOS UIWebView, so there is nothing you can do in JavaScript there that you cannot do in Safari. And Apple has not implemented any JavaScript bindings to the CoreBluetooth APIs that would be needed to detect Eddystone-URL beacons. The bottleneck is more of an iOS restriction than a Chrome browser one.
Note that this is not true for the Chrome browser on other platforms, notably ChromeOS, which does provide such JavaScript APIs.
I am trying to send a message from an HTML page to an embedded PDF by using the hostContainer.postMessage API from the Acrobat JavaScript API.
This works in IE9 but it does not work in Chrome and Firefox.
I have tried disabling the browsers' built-in PDF viewers and enabling Acrobat Reader, and it still does not work.
Has anyone faced this issue?
Thanks
hostContainer.postMessage posts from the script running inside the Adobe Reader ActiveX control to the JavaScript running in the browser; you're saying you want it the other way around...
However, if you want to post from the script running in the PDF Reader ActiveX control to the JavaScript running in the browser, here are two sources that could help you:
Get the current page in PDF Java Web
pdf object.messagehandler onMessage not working in IE
Personally, I managed to make it work in IE but not in Chrome or Firefox.
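For anyone landing here, this is roughly the pattern those sources describe, as I understand it (a sketch only, and as noted it only worked for me in IE; the postMessage/messageHandler members on the object element are provided by the Acrobat ActiveX control, not by the browser itself, so element and file names here are placeholders):

```js
// --- In the HTML page. The <object id="pdfEmbed" data="form.pdf"> element is
// --- extended by the Acrobat ActiveX control with postMessage/messageHandler.
const pdfObject = document.getElementById('pdfEmbed');

// Receive messages posted from the PDF's script.
pdfObject.messageHandler = {
  onMessage: function (msg) { console.log('From PDF:', msg); },
  onError:   function (err) { console.error(err); },
};

// Send a message (an array of strings) into the PDF.
function sendToPdf(text) {
  pdfObject.postMessage([text]);
}

// --- In the PDF itself (document-level Acrobat JavaScript), roughly:
// this.hostContainer.messageHandler = {
//   onMessage: function (aMessage) { app.alert('From host: ' + aMessage[0]); }
// };
// this.hostContainer.postMessage(['hello from the PDF']);
```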
We're using node-webkit to package an app made with HTML5 and JS. Everything has been working well, but when we try to read the content using a screen reader (Apple VoiceOver or JAWS), the content seems inaccessible.
The screen reader is able to read the window's buttons and the window's title but can't read the HTML inside the app. Actually, it doesn't work even with the sample page that comes out of the box with node-webkit, so it's not a problem specific to our app.
Any ideas or alternatives? Thanks!
Each platform (OS), like Windows, OS X, and Linux (and Android, iOS, and countless others on mobile), has its own accessibility API: for example, the Windows Automation API on Windows 7 and 8 (open-source software uses IAccessible2, but it seems to be a useful extra layer on top of that. Whatever.)
A piece of software like a browser must communicate to the OS, via this API, what it is doing (the same goes for an email client, a spreadsheet, a file explorer, etc.).
The OS will filter these events (for example, if the window is not the active one, or if a system event happens, like the removal of a USB key or a new notification).
This accessibility API will then inform assistive technologies (AT) like a screen reader (SR) of what's happening. A SR being a complex piece of software with user configuration, it will also filter, adapt, and output what it was fed via speech synthesis and/or a Braille display, etc.
Though I'm only accustomed to web accessibility and not to anything related to software, browsers, APIs, and their internal workings (so I could be very wrong, sorry), I guess the communication related to accessibility from "WebKit" to the OS (and there's "WebKit" on Windows, OS X, maybe still Linux, etc.) is managed by Chrome the software (and by vanilla Chromium the software, from the Chromium project), NOT by WebKit the rendering engine. node-webkit is built around Chromium, but does it pass along the messages related to the accessibility API? If it does or can, you should have the same accessibility as in Chromium (good luck with that, compared to Firefox and IE). If it doesn't, that's a black box.
One would then need to add all this accessibility API management to make it work! Maybe it's just an option in node-webkit?
If you want to verify whether anything related to an accessibility API is coming out of a piece of software, you can test with aViewer from The Paciello Group.
May be related: Blink accessibility (Chromium project)
Visiting www.google.com in the Android browser (or even with an Android-spoofed user agent) presents the option to "Share Location". When clicked, it uses GPS/cell towers to figure out the location. I tried google.loader.clientLocation, but that only works using the IP address.
Is there a method to tap into the Android OS and access GPS data from a regular web application (and not an Android application) similar to the way Google does?
[Perhaps Google uses the Google Gears app on Android to access this data.]
Thanks!
This is covered by an HTML5 API (the Geolocation API), and it'll work on WebKit derivatives, Chrome, and Firefox 3.5 (for now).
http://dev.w3.org/geo/api/spec-source.html
http://merged.ca/iphone/html5-geolocation
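The usage boils down to something like this (a minimal sketch of the standard Geolocation API; the user has to grant the page permission to access location):

```js
// Minimal sketch of the W3C Geolocation API: ask the browser for the
// current position and log the coordinates (requires user permission).
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      console.log('Latitude:', position.coords.latitude);
      console.log('Longitude:', position.coords.longitude);
      console.log('Accuracy (m):', position.coords.accuracy);
    },
    function (error) {
      console.error('Geolocation error:', error.message);
    },
    { enableHighAccuracy: true, timeout: 10000 }
  );
} else {
  console.log('Geolocation is not supported by this browser.');
}
```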
I was trying that recently and found this forum posting interesting. I did not find a really good way to do this either, and it doesn't look like we can do it without writing our own app that opens a browser instance.
http://androidforums.com/support/8868-how-get-gps-coordinates-browser.html
Here is also a nice example: http://klauskjeldsen.dk/w3c-geolocation-api-html5/