getUserMedia lock focus/exposure - javascript

I am using navigator.getUserMedia with constraints to access the user's webcam, using the feed as the source of an HTML <video> element and then drawing its frames onto a <canvas> context with drawImage. I'm doing all this so I can take a snapshot at intervals.
What I would like to do is, once the page starts taking snapshots, lock the camera's focus and exposure, so that the environment can change between snapshot intervals without the light balance shifting or the camera refocusing.
Does anyone know if this is possible on the JS side?
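For reference, here is a minimal sketch of my current setup, using the modern navigator.mediaDevices.getUserMedia promise API (the element selectors and the interval length are placeholders):

```javascript
// Sketch of the setup: webcam -> <video> -> <canvas> snapshots at intervals.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    video.srcObject = stream;
    return video.play();
  })
  .then(() => {
    setInterval(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);                      // copy the current frame
      const snapshot = canvas.toDataURL('image/png');  // the interval snapshot
      // ...store or upload the snapshot...
    }, 5000); // placeholder interval
  })
  .catch(console.error);
```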

Flagged as duplicate of: Take photo when the camera is automatically focused
Firstly (though perhaps bad practice here) I will link to an explanation of why MediaCapture output might be blurrier or grainier than expected:
Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?
In short: MediaCapture applies a lot of transformations to the media source, which may cause blurry or grainy images.
To solve this use ImageCapture:
The ImageCapture API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots, which are lower resolution than that available for still images.
To solve your problem:
You can solve this via UX with a zoom slider. Below is information on how to achieve this with ImageCapture (still images); MediaCapture (video feed) does not offer this functionality. You could use MediaCapture with a button such as "Manual Mode" and let the user pick the correct zoom before taking the photo.
You could also "emulate" a live camera feed by running an update loop that performs n ImageCaptures per update, combined with a zoom slider.
https://developers.google.com/web/updates/2016/12/imagecapture
And here is an example on how to use it/polyfill: https://github.com/GoogleChromeLabs/imagecapture-polyfill
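A minimal sketch of the ImageCapture approach described above, combining a full-resolution takePhoto() with a zoom slider driven by the track's capabilities (the #zoom slider element is an assumption, and zoom support must be checked at runtime):

```javascript
// Sketch only: assumes a page with an <input type="range" id="zoom"> slider.
async function startStillCamera() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(track);

  // Wire up the zoom slider, if the camera exposes zoom at all.
  const capabilities = track.getCapabilities();
  const slider = document.querySelector('#zoom');
  if ('zoom' in capabilities) {
    slider.min = capabilities.zoom.min;
    slider.max = capabilities.zoom.max;
    slider.step = capabilities.zoom.step;
    slider.oninput = () =>
      track.applyConstraints({ advanced: [{ zoom: Number(slider.value) }] });
  }

  // Full-resolution still image, instead of a low-res video snapshot.
  const blob = await imageCapture.takePhoto();
  return blob;
}
```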
Make sure you use the latest getUserMedia polyfills, which handle cross-platform support: https://www.npmjs.com/package/webrtc-adapter
Hope this helps

We had the same problem some time ago. It only happens on some devices, and even devices from the same brand behaved differently; something in the browser / OS version / driver combination was broken. On some devices the focus locked, on others it didn't. We looked over the whole API and tried dozens of variations in the initialization code, but we finally concluded there was no apparent solution.
We added a button to restart everything, in order to mitigate the problem somewhat...

Related

How to view 3d html5/css3/native javascript page stereoscopically on mobile?

Short version: Keith Clark has a 3D HTML FPS shooter demo. It uses 3D transforms on HTML5 elements to produce a 3D world experience. It is not VR. Is there an API to view it stereoscopically?
I have a similar engine. I came up with a way to view it stereoscopically using a Cardboard-style viewer, a 3D TV/monitor, or red/cyan anaglyph glasses. I had to use a pair of iframes, however, and load a copy of the "world" document into each frame.
This doubles the load on the GPU, requires duplicating every change to "the world" in both iframes, and needs workarounds for focusable items such as textareas. It all works, but it diminishes the capacity for detail before requestAnimationFrame noticeably slows down and gets jumpy. That is especially true in Firefox on mobile, and of course there is also the added problem of security limitations on iframes.
If there's an API to view and control a 3D HTML5 page in stereo without explicitly duplicating everything, that would make things a lot simpler and more efficient.
I'm using Google Chrome on a Galaxy Note 3 as my standard-level target device, if anyone needs to know.
Long version (old):
I have a 3D game I'm writing in native HTML5/CSS3/JavaScript. It is primarily for mobile and already contains a fully functional camera system with the ability to zoom in or out of first-person, second-person, overhead etc., rotate the yaw and pitch of the view, as well as the location on the map around the avatar. Is there an easy way to view it stereoscopically? It will be embedded in an Android app, or at least accessed through one, or through Chrome as a web app. I thought Chrome Dev with VR Shell would be a possibility to try it out and hopefully integrate into an app eventually, but I'm not having luck with that yet. Theoretically, I just need to be able to view an ordinary HTML5 page that has CSS3 3D transforms. For example, if you had a 3D cube made of divs or whatever, you would view it from two points of view, one for each eye, without changing anything in the page itself. Basically, if you could view anything 3D in the page stereoscopically, much like the VR Shell sounds like it does, it should work. All I seem to come up with is how to turn on the flag in Chrome Dev, but I'm not seeing anything that actually activates it, and the browser has been fully restarted. The page is already 3D and fully functional, with orientation control in first person or otherwise. All I seem to find is how to turn the flag on, or material about 3D videos. Can this be done with Google's VR libs for Studio without using all the other stuff? I just need the second eyeball.
OK, I was hoping an API existed for this, but my solution was to make a parent document with two iframes and load the document with the 3D transforms into both iframes, then offset the perspective-origin in each by about 1% (i.e. 49% in one and 51% in the other). It worked great without extra modifications when using device orientation, but obviously not for mouse control. Ideally, both iframed documents should be controlled from JS in the parent document. The downside is that you have to control two objects for every change, and complications arise if you have inputs or textareas that take exclusive focus. I fixed all that, but this is the down-low version of the solution; a minimal sketch is below.
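Here is a rough sketch of that dual-iframe trick. The world.html URL is a placeholder name for the document containing the 3D transforms, and the frames must be same-origin for the contentDocument access to work:

```javascript
// Load the same 3D "world" document into two side-by-side iframes,
// offsetting each eye's perspective-origin ~1% from center.
function makeEye(originX) {
  const frame = document.createElement('iframe');
  frame.src = 'world.html';  // placeholder: the page with the 3D transforms
  frame.style.cssText = 'width:50%;height:100%;border:0;float:left;';
  frame.onload = () => {
    // Requires same-origin frames; shifts the vanishing point per eye.
    frame.contentDocument.body.style.perspectiveOrigin = originX + ' 50%';
  };
  document.body.appendChild(frame);
  return frame;
}

const leftEye = makeEye('49%');
const rightEye = makeEye('51%');
// Every camera/world change must now be applied to BOTH frames' documents.
```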

Recognise other devices' positions

I am currently working on my thesis project, where I am building a JavaScript/Node library that makes it easier for developers to merge the browser canvases of multiple devices together, so that objects can live within one large canvas spanning all the devices. Basically, the idea is that you'll be able to put multiple phones/pads next to each other in different positions relative to each other, and use all their browsers as just one canvas.
I will also create another library extension with a bunch of restrictions to it, and hold a hackathon to see what developers create with this tool and within these restrictions.
Anyway, I have run into a problem. To make the tool more versatile and flexible, I ideally want every device to be able to detect where in space the other devices are in relation to itself. But I have run out of ideas about how to solve it. Do you have any ideas? Do you think it is possible, or will I have to come up with a manual solution? Can any other technology help? Bluetooth?
I have looked at projects like:
Google Chrome Racer (https://www.chrome.com/racer)
Coca-Cola Penguin Curling (http://cargocollective.com/rafaeldante/Coca-Cola-Penguin-Curling)
How do you think these projects solved the issue of positioning order? Which device is where in the order?
Sadly, Chrome Racer doesn't seem to be running anymore. But as far as I can remember from playing it a while ago, you did not have to enter the position of your device manually. Analyzing this clip (https://youtu.be/17P67Uz0kcw?t=4m46s), it looks like the application understands where in the line each specific device is, right? Any ideas on this?
Just a random musing on possible paths to a solution.
They all have cameras that are facing up. If any two can capture images that overlap, you have a way of orienting them relative to each other. If every device had a view that overlapped with at least one other, then you could get a reasonable approximation of the relative orientations and positions of them all. The more devices, the better the result.
You can listen to the ambient sound environment and use the arrival times of sounds as another relative positional clue. Devices can also emit both sound and light; if done in a predetermined order, the sound can yield relative positions. The display, if flashed on and off in specific patterns, could also be detected (not directly, but as subtle ambient reflection).
Ambient light levels are also a source of relative position and orientation.
If only two devices tried these methods, they would fail most of the time. But with each extra device you get its relative information compared to all the others, rapidly growing the data and making a solution easier to find. Given enough devices, solving for position and orientation may be possible via passive sensing alone. A rough sketch of the sound-arrival idea follows.
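To make the sound-arrival idea concrete, here is a very rough Web Audio sketch: one device emits a short near-ultrasonic ping, and the others timestamp when energy appears in that frequency band. Every constant here (frequency, threshold, duration) is a made-up placeholder, and real time-of-arrival positioning would additionally need tight clock synchronization between devices:

```javascript
// Rough sketch of time-of-arrival sensing with the Web Audio API.
const audioCtx = new AudioContext();

// Emitter: play a short 18 kHz ping at an agreed moment.
function emitPing() {
  const osc = audioCtx.createOscillator();
  osc.frequency.value = 18000;  // near-ultrasonic, barely audible
  osc.connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.05);
}

// Listener: timestamp when microphone energy spikes in the ping band.
async function listenForPing(onArrival) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(stream).connect(analyser);
  const bins = new Uint8Array(analyser.frequencyBinCount);
  const pingBin = Math.round(18000 / (audioCtx.sampleRate / 2) * bins.length);

  (function poll() {
    analyser.getByteFrequencyData(bins);
    if (bins[pingBin] > 128) {          // placeholder threshold
      onArrival(audioCtx.currentTime);  // arrival timestamp
    } else {
      requestAnimationFrame(poll);
    }
  })();
}
```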

Multi-channel audio support in the browser on iOS and Android

I found this link to a page here on Stack Overflow about "Creating Audio using Javascript in <audio>", and this page on how to play audio on multiple channels. I found that the iPhone supports the audio tag and the Audio object in JavaScript for playing single-channel audio, but is there a way to play audio on multiple channels?
Maybe I'm overcomplicating this, so this is what I'm trying to do. I want to make a graceful audio player in JavaScript that supports transitioning from one audio file to another. The way I was going to implement this is to incrementally reduce the volume on one channel while incrementally increasing the volume on the other, so I'd get a kind of fade effect. Is there a simpler solution to this using only JavaScript? I guess another solution would be to reduce the volume to a certain point, start the new audio file on the same channel, then increase the volume again. That circumvents the need for fading, but I would like to fade if at all possible.
Is this possible? I know the HTML5 spec isn't finished yet, but is there some kind of workaround that you know of? Do any of you have ideas for another approach?
From what I can tell from this post about playing audio in the Android browser, this isn't supported yet, but do any of you know if it will support multiple-channel audio once the audio tag is supported? Does Opera Mini support this?
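Edit (for later readers): a fade like the one described above is straightforward with two Audio objects and a timer, on platforms that allow scripted volume; note the answer below about iOS, where volume is read-only. The file names, step count and duration in this sketch are placeholders:

```javascript
// Cross-fade from a currently playing Audio object to a new source.
function crossFade(outgoing, incomingSrc, ms = 2000) {
  const incoming = new Audio(incomingSrc);
  incoming.volume = 0;
  incoming.play();

  const steps = 20;
  let i = 0;
  const timer = setInterval(() => {
    i += 1;
    outgoing.volume = Math.max(0, 1 - i / steps);  // fade out
    incoming.volume = Math.min(1, i / steps);      // fade in
    if (i >= steps) {
      clearInterval(timer);
      outgoing.pause();
    }
  }, ms / steps);
  return incoming;
}

// Usage: current = crossFade(current, 'next-track.mp3');
```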
This is an old question I know :).
iOS Safari does not support multiple audio objects playing at the same time. It is also not possible to have a fade-in/out effect on iOS, as the only way to change the volume setting is through the hardware itself. Apple decided to give this ability only to the device user: the volume setting is not writable by JavaScript, and it is not even readable (it always returns 1).
You can check out the Safari documentation for iOS for more info.
For Android, to be honest I have no idea.
There's no direct way that I know of to get multiple channels from one audio tag, but check out this blog post on using multiple audio tags to simulate multiple channels (sketched below): http://www.storiesinflight.com/html5/audio.html
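The gist of that post, as a sketch: keep a small pool of Audio elements and rotate through them, so a new sound can start before the previous one has finished (the pool size and file name are arbitrary):

```javascript
// Simulate multiple channels with a rotating pool of Audio elements.
const pool = Array.from({ length: 4 }, () => new Audio());
let next = 0;

function playOverlapping(src) {
  const channel = pool[next];
  next = (next + 1) % pool.length;
  channel.src = src;
  channel.play();
}

// Rapid calls now overlap instead of cutting each other off:
// playOverlapping('snare.wav');
```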
I know this is a total hack but try this trick I came up with...
Go to the page below and type on the home row keys to play a blues riff (type multiple keys at the same time etc.)
http://davealger.com/jthump/
The way this works is to create invisible <iframe> components that play a sound before destroying the frame.
I know it is a total hack, and I look forward to better HTML5 multi-channel audio support in the future.
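Presumably the trick looks something like this (the file name and the teardown delay are guesses):

```javascript
// Hack: point a hidden iframe at an audio file, let the browser play it,
// then destroy the frame. Each frame acts as an independent "channel".
function playViaIframe(src, lifetimeMs = 3000) {
  const frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = src;  // the browser plays the audio file on load
  document.body.appendChild(frame);
  setTimeout(() => document.body.removeChild(frame), lifetimeMs);
}
```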

Is there a reliable way to time javascript animations with an audio file playing in the browser?

For example, I want the page to play an audio file while at the same time having some bullets slide into view at just the right moment that each bullet is talked about in the audio file. A similar effect would also be used for closed captioning. When I say reliable, I mean specifically that the timing will be consistent across many common platforms (browser/OS/CPU/etc.) as well as consistent across different sessions on the same platform (they hit refresh, and it works again just as it did before, etc.).
NOTE: It's OK if the answer is 'NO', but please include at least a little quip about why that is.
Check out this animation, which synchronizes a 3D SVG effect to an audio file.
The technique is explained in a blog post at http://mrdoob.com/blog/page/3. Look for the one entitled "svg tag+audio tag = 3D waveform". The key is to create a table of volume values corresponding to the audio file.
You'll obviously have some work to do in studying this example and the JavaScript it uses to adapt it to your scenario. And it will probably only work in browsers that support HTML5.
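The transferable core of the technique is to drive the animation from the audio element's own clock rather than from separate timers, so the visuals cannot drift from playback. A minimal sketch adapted to the bullet scenario (the cue times, selectors and reveal logic are placeholders):

```javascript
// Sync bullet reveals to audio playback by polling audio.currentTime.
const audio = document.querySelector('audio');
const cues = [
  { time: 2.5, fired: false },  // placeholder cue times (seconds)
  { time: 7.0, fired: false },
];

function revealBullet(i) {
  document.querySelectorAll('li')[i].style.visibility = 'visible';
}

function tick() {
  cues.forEach((cue, i) => {
    if (!cue.fired && audio.currentTime >= cue.time) {
      cue.fired = true;
      revealBullet(i);
    }
  });
  if (!audio.paused) requestAnimationFrame(tick);
}

audio.addEventListener('play', () => requestAnimationFrame(tick));
```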
Given the current situation and HTML5 support, I would solve this using Flash.

Recording and storing high-res hand drawing

Are there any advanced solutions for capturing a hand drawing (from a tablet, touch screen or iPad-like device) on a web site in JavaScript, and storing it on the server side?
Essentially, this would be a simple mouse-drawing canvas with the specialty that its resolution (i.e. the number of mouse movements it catches per second) needs to be very high; otherwise round lines in the drawing become "polygonal" when the pen / mouse moves fast.
(If this weren't the case, the <InputDraw/> solution suggested by @Gregory would be 100% perfect.)
It would also have to have a high level of graphical quality, i.e. an antialiased pen stroke. Nothing fancy here, but an MS Paint style 1x1 pixel stroke won't cut it.
I find this a very interesting thing in general, seeing as Tablet PCs are becoming at least a bit more common. (Not that they get the attention I feel they deserve).
Any suggestions are highly appreciated. I would prefer an open-source solution, but I am also open to proprietary solutions like ActiveX controls or Java applets.
FF4 and Chrome support is a must; Opera and IE8/9 support is desired.
Please note that most "canvas" libraries around, and most answers to other questions similar to mine, refer to programmatically drawing onto a canvas. This is not what I am looking for. I am looking for something that records the actual pen or mouse movements of the user drawing on a certain area.
Starting a bounty out of curiosity whether anything has changed during the time since this question was asked.
I doubt you'll get anything higher resolution than the onmousemove event gives you without writing an efficient assembler program on some embedded system custom built for the purpose. You run inside an OS, you play by the OS's rules, which means you're limited by the frequency of the timeslices an OS will give you (usually about 100 per second, fluctuating depending on load). I've not used a tablet that can overcome the "polygon" problem, and I've used some high-end tablets. Photoshop overcomes the problem with cubic interpolation.
That is, unless you have a very special tablet that will capture many movement events, queue them up in some internal buffer, and send a whole packet of coordinates at a time when it dispatches data to the OS. I've looked at tablet APIs though, and they only give one set of coordinates at a time, so if this is going to happen, you'll need custom hardware, a custom driver, and custom APIs that can handle packets of multiple coordinates.
Or you could just use a damned canvas tag, the onmousemove event, event.pageX|pageY, some cubic interpolation and the canvas toDataURL API, post the result to your PHP script, and then just say you did all that other fancy stuff.
onmousemove, in my tests, will give you one event per pixel of movement, limited only by the speed of the event loop in the browser. You'll get sparse data points (polygons) with fast movement, and that's as good as it gets without a huge research grant and a hardware designer. Deal.
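A condensed sketch of that pipeline, with quadratic midpoint smoothing standing in for the fancier cubic interpolation (the upload endpoint is a placeholder):

```javascript
// Capture mouse strokes on a canvas, smooth them, and upload the result.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.lineCap = ctx.lineJoin = 'round';  // rounded, antialiased strokes

let last = null, lastMid = null;
canvas.onmousedown = (e) => { last = { x: e.offsetX, y: e.offsetY }; };
canvas.onmouseup = () => { last = lastMid = null; };
canvas.onmousemove = (e) => {
  if (!last) return;
  const cur = { x: e.offsetX, y: e.offsetY };
  const mid = { x: (last.x + cur.x) / 2, y: (last.y + cur.y) / 2 };
  if (lastMid) {
    ctx.beginPath();
    ctx.moveTo(lastMid.x, lastMid.y);
    // The previous sample becomes the control point: softens the "polygons".
    ctx.quadraticCurveTo(last.x, last.y, mid.x, mid.y);
    ctx.stroke();
  }
  last = cur;
  lastMid = mid;
};

function upload() {
  fetch('/save-drawing', {               // placeholder endpoint
    method: 'POST',
    body: canvas.toDataURL('image/png'), // the toDataURL step from above
  });
}
```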
There are some applets for this in the oekaki world: Shi-Painter, ChibiPaint or PaintBBS. Here you have PHP classes for integration.
Drawings produced by these applets can have quite good quality. If you register on oekakicentral.com you can see all the galleries, and some drawings have an animation link that shows how they were drawn (it depends on the applet), so you can compare the possibilities of the applets. Some of them are open source.
Edit: See also this one, made in HTML5.
Have a look at <InputDraw/>: a Flash component that turns freehand drawing into SVG. You could then send the generated SVG back to your server.
It's free for non-commercial use. According to their site, the commercial-use price is €29. It's not open source though.
IMHO it's worth a look.
Alternatively, you could implement something based on svg-edit, which is open source and uses jQuery (demo). It requires the Google Chrome Frame plugin for IE6+ support though.
EDIT: I just found the svg-freehand-signature project (demo), which captures your handwritten signature and sends it to a server as an SVG using POST. It's distributed as a straightforward and self-contained zip (it works out of the box with Safari and Firefox; you may want to combine it with svgweb, which brings SVG support to Internet Explorer).
EDIT: I successfully combined Cesar Oliveira's canvaslol (just look at the source of the page to see how it works) with ExplorerCanvas to get something working on IE. You can also have a look at Anne van Kesteren's Paintr experiment.
markup.io is doing that with an algorithm applied after the mouseup.
I asked a similar question recently, and got interesting but not satisfying answers: Is there any way to accelerate the mousemove event?
