How to get best possible resolution for camera photo using HTML5? - javascript

I found this one: https://webrtchacks.github.io/WebRTC-Camera-Resolution/ It is about video, and I assume that the resolution for a photo can be much better than for video on the same camera.
Is there a way to check camera's best resolution and make a photo? (using HTML and JavaScript)

Video and photos are essentially the same thing here. There isn't really a "take photo" API the way there is a separate photo mode on your physical camera or phone.
To take a photo, you just turn on the video camera for a second, save a still frame, and then that is your photo.
Thus, everything that applies for video applies for photo.
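For illustration, a minimal sketch of that approach (the 4096x2160 "ideal" constraint is only an example; the browser will pick the closest resolution the camera actually supports):

// Start the camera, grab one frame into a canvas, then stop the camera.
async function takeSnapshot() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 4096 }, height: { ideal: 2160 } }
  });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  stream.getTracks().forEach(track => track.stop()); // turn the camera off again
  return canvas.toDataURL('image/jpeg');             // this is your "photo"
}

Note that newer browsers also expose the ImageCapture API (see the last answer on this page), which can take stills at the sensor's full photo resolution rather than the video resolution.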

Related

How to display an MVIMG (google camera motion photo) in browser?

Google Camera produces a so-called 'motion photo', which contains a short, audioless MP4 video clip from just before/after the photo is taken. This clip is appended to the JPEG file, and its precise location is noted in EXIF tags.
Now I want to show this in my blog. Problem is, I haven't found a single JS gallery that would offer support for this functionality. Short of implementing one myself, is there maybe a solution to (optionally) show the video component of MVIMG files in a browser, upon user interaction (e.g. via click, like in google photos)?
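I am not aware of a ready-made gallery for this either, but as a rough sketch of what extracting the clip involves: the appended MP4 can be located from the offset recorded in the metadata, or more crudely by scanning the JPEG bytes for the 'ftyp' box that starts the MP4 container. A minimal, unoptimized sketch of the crude approach (the URL and function name are placeholders):

// Find the embedded MP4 inside an MVIMG JPEG and play it in a <video> element.
async function playMotionPart(url, videoElement) {
  const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
  // An MP4 starts with [4-byte box size]['f','t','y','p'], so the clip
  // begins 4 bytes before the 'ftyp' signature.
  for (let i = 4; i < bytes.length - 3; i++) {
    if (bytes[i] === 0x66 && bytes[i + 1] === 0x74 &&
        bytes[i + 2] === 0x79 && bytes[i + 3] === 0x70) {
      const mp4 = new Blob([bytes.subarray(i - 4)], { type: 'video/mp4' });
      videoElement.src = URL.createObjectURL(mp4);
      return videoElement.play();
    }
  }
  throw new Error('No embedded video found');
}

You could wire this up to a click handler on the gallery image to get the press-to-play behaviour Google Photos has.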

Changing HTML video stream source/quality on the fly

Background:
I'm working on a video project with 50+ short videos (10 min, 720p) that I want to present online. My current architecture is to place 16 video tags in a 4x4 grid, randomly set their sources on load using JavaScript, and on click zoom a video to cover the full screen until it is clicked again.
The problem:
Each video in 720p WebM is around 80 MB. With 16 videos that is about 1.3 GB in total, or 130 MB per minute, or roughly 2 MB per second, which seems like a ridiculous amount of data to me, but maybe I'm wrong. The reason each video is so big (80 MB) is to support the zoom-to-full-screen feature.
My idea for a solution:
Encode each video in two resolutions, use the low resolution for the grid layout, and switch to the higher resolution on click-to-zoom.
My question: How to make this smooth? Can I preload the high resolution video on click in the background at the position of the low resolution video? And make the shift in the CSS transform? Or is there a better way to do this?
Secondary question: How to host this online? Can I put the videos on vimeo maybe? Right now I'm using wordpress.com hosting.
The normal way to achieve something like that is to encode the video using an adaptive bitrate format. The two primary formats for that would either be HLS or MPEG-DASH. Most online encoding platforms can provide those as outputs. Normally you would encode 5-6 different qualities (this helps with users that are on wifi, where bandwidth might constantly be changing) but you could easily encode it in just two different qualities.
Normally the players would be able to select the right quality automatically, but you can manage that yourself if you want.
If you are going to use HLS, you can use hls.js and its Quality Switch API. For MPEG-DASH, a good player to use would be Shaka Player and then set it like this:
player.configure({enableAdaptation: false});
player.selectVideoTrack(trackId);
If you want to switch specifically on fullscreen, just listen for the fullscreen events on the players.
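For example, with hls.js the fullscreen-driven switch could look roughly like this (a sketch; it assumes the manifest lists quality levels from lowest to highest, and the URL is a placeholder):

const video = document.querySelector('video');
const hls = new Hls();
hls.loadSource('https://example.com/video.m3u8'); // placeholder manifest URL
hls.attachMedia(video);

document.addEventListener('fullscreenchange', () => {
  if (document.fullscreenElement === video) {
    hls.currentLevel = hls.levels.length - 1; // force highest quality when zoomed
  } else {
    hls.currentLevel = 0;                     // back to lowest quality in the grid
  }
});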

Allow video download from html5 canvas

I am building a small app that lets users add CSS3 filters like grayscale to a video and download the result. The video won't be longer than 6 seconds. So I am first loading the video into a canvas and then applying the filter the user demands. Now I want the user to be able to download the filtered video. canvas.toDataURL() is only meant for images. Is there any high-level canvas API to achieve this?
Thank you
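(For reference, the draw-into-canvas step described above is usually just a render loop with ctx.filter; the grayscale value here is only an example.)

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawFrame() {
  ctx.filter = 'grayscale(100%)'; // whatever filter the user picked
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(drawFrame);
}
video.addEventListener('play', drawFrame);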
Not that I know of. I think this is something that should be done server-side. Either send the raw video to the server and tell it what filters were applied so you can re-create the effect there, OR use the solution proposed in "capturing html5 canvas output as video or swf or png sequence?" (hint: it's also server-side).

How to compare the video/image quality using protractor?

In the image (not shown here), I'm trying to compare Video1 with Video4, or Video2 with Video3.
In a peer-to-peer WebRTC connection I'm trying to compare the video input of peer1 to the output on peer2. Basically I'm testing the quality of the video, and theoretically I can do that by:
Checking the video itself frame by frame.
Taking a screenshot on both sides and checking the image resolution.
I've seen this Google video where they mention a complex but clever idea of feeding in a barcode-stitched video and comparing the unique frame IDs with each other. But it's written in C and I'm using Protractor.
Has anyone tried calculating the resolution of an image or analyzing video frames? Any help would be appreciated, thanks.
The WebRTC testing tool from testrtc has some code related to that, in particular the cam resolution test, which extracts the video from a canvas.
If you want to feed a special video stream from a file, that is doable with Chrome's use-file-for-fake-video-capture flag.
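As a sketch, a Protractor config that feeds a known file into getUserMedia could look like this (the spec name and the .y4m path are placeholders; Chrome expects a Y4M file for this flag):

// conf.js: give Chrome a fake camera backed by a known reference video,
// so both peers receive predictable frames you can compare against.
exports.config = {
  specs: ['quality-spec.js'],
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: [
        '--use-fake-ui-for-media-stream',       // auto-accept the camera prompt
        '--use-fake-device-for-media-stream',   // replace the real camera
        '--use-file-for-fake-video-capture=/path/to/reference.y4m'
      ]
    }
  }
};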

getUserMedia lock focus/exposure

I am using navigator.getUserMedia with constraints to access the user's webcam, using the feed as the source of an HTML <video> and then copying frames from it into a <canvas> context with drawImage. I'm doing all this so I can take a snapshot at intervals.
What I would like to do is, once the page starts taking snapshots, lock the getUserMedia camera's focus/exposure, so that in between snapshot intervals the environment can change without the light balance changing or the camera refocusing.
Does anyone know if this is possible on the JS side?
Flagged as duplicate of: Take photo when the camera is automatically focused
Firstly, though it is perhaps bad practice here, I will link to an explanation of why MediaCapture might produce blurrier or grainier output than expected:
Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?
In short: MediaCapture does a lot of transformations on the media source, which may cause blurry or grainy images.
To solve this use ImageCapture:
The ImageCapture API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots, which are lower resolution than that available for still images.
To solve your problem:
You can solve this via UX and a zoom slider. Below is information on how to achieve this with ImageCapture (still images). MediaCapture (video feed) does not allow this functionality. You could use MediaCapture and have a button such as "Manual Mode" and allow a user to pick the correct zoom to take the photo.
You could also "emulate" a camera preview by running an update loop that performs several ImageCaptures per update, combined with a zoom slider.
https://developers.google.com/web/updates/2016/12/imagecapture
And here is an example on how to use it/polyfill: https://github.com/GoogleChromeLabs/imagecapture-polyfill
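A rough sketch of using ImageCapture for the snapshot-at-intervals use case, including a best-effort attempt to lock focus and exposure (focusMode/exposureMode support varies by browser and camera, so treat that part as an assumption):

async function startStillCapture() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];

  // Try to lock focus/exposure; ignored or rejected where unsupported.
  try {
    await track.applyConstraints({
      advanced: [{ focusMode: 'manual', exposureMode: 'manual' }]
    });
  } catch (e) {
    console.warn('Could not lock focus/exposure:', e);
  }

  const imageCapture = new ImageCapture(track);
  setInterval(async () => {
    const blob = await imageCapture.takePhoto(); // full-resolution still
    // ...upload the blob, or draw it to a canvas, etc.
  }, 5000);
}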
Make sure you use the latest getUserMedia polyfills, which handle cross-platform support: https://www.npmjs.com/package/webrtc-adapter
Hope this helps
We had the same problem some time ago. It only happens on some devices... but even devices from the same brand behaved differently. Something in the browser / OS version / driver combination was broken: on some devices the focus was locked and on others it wasn't. We looked over the whole API and tried dozens of variations in the initialization code, but finally concluded there was no apparent solution.
We added a button to restart everything in order to mitigate the problem somewhat...
