Why won't my webstream show? - javascript

I tried to test the code below on an Android device, but it didn't work (the switch button appeared but the camera output didn't). I then tested it on a Mac and it worked (it showed the camera output and the button; the button did nothing because there is no back camera). Here is the JavaScript portion of my code:
var constraints = {
  audio: true,
  video: {
    width: 1280,
    height: 720
  }
};
navigator.mediaDevices.getUserMedia(constraints).then(function(mediaStream) {
  var video = document.querySelector('#webcam');
  video.srcObject = mediaStream;
  video.onloadedmetadata = function(e) {
    video.play();
  };
}).catch(function(err) {
  console.log(err.name + ": " + err.message);
});
var front = false;
document.getElementById('flip-button').onclick = function() {
  front = !front;
};
var constraints = {
  video: {
    facingMode: (front ? "user" : "environment")
  }
};
Here's the HTML portion of my code:
<video id="webcam"></video>
<button id="flip-button">switch</button>
Here's the CSS portion of my code:
#webcam {}
#flip-button {
  background-color: #202060;
  height: 15%;
  width: 20%;
  border: none;
  margin-left: 40%;
}
Thanks for your time.

You got your id wrong. Replace #webcam with #video.
Or just remove this line:
var video = document.querySelector('#webcam');
The latter works because ids are implicitly available in global scope for backwards compatibility. Some people frown on this, but it's neat for tidy fiddles.
Some beginner's advice: Always check your browser's web console for errors. Here it said:
TypeError: video is null
which was the clue that the result from querySelector was null.
PS: You also have two competing definitions of constraints, leaving your facingMode unused. Lastly, flipping front won't do much, unless front is used again, which it is not.
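To make the flip actually take effect, the stream has to be re-requested with the new facingMode. A minimal sketch, assuming the ids from the question; the `makeConstraints` helper name is mine, and the browser wiring is shown in comments:

```javascript
// Build fresh getUserMedia constraints for the requested camera.
// (makeConstraints is an illustrative helper name, not from the question.)
function makeConstraints(front) {
  return {
    audio: true,
    video: { facingMode: front ? "user" : "environment" }
  };
}

// In the browser, the click handler would stop the old tracks and
// re-acquire the stream with the other camera:
// let front = false;
// document.getElementById('flip-button').onclick = async function() {
//   front = !front;
//   const video = document.querySelector('#webcam');
//   if (video.srcObject) video.srcObject.getTracks().forEach(t => t.stop());
//   video.srcObject = await navigator.mediaDevices.getUserMedia(makeConstraints(front));
// };
```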

Related

Javascript attributes not changing after once they have changed already

I have a webpage where I display streaming videos (a WebRTC conference) dynamically, and so I also have to set the video elements' sizes dynamically depending on the number of videos.
I realized that my JavaScript code only changes the size of the video elements once (when I apply it for the first time). Or maybe I am doing something wrong. It works fine if I hardcode the number of videos (HTML and JS only; no WebRTC) and the JS sets their sizes in one go. But my WebRTC code always starts with one video and loops as each new video comes in, and then my code is not able to handle the size of all the elements.
My code:
// I have to reuse this code every time I need to change the size (properties) of all video tags in the document
function layout() {
  for (var i = 0; i < videoIds.length; i++) { // I've tried with ids
    var localView = document.getElementById(videoIds[i]);
    localView.setAttribute('position', 'relative');
    localView.setAttribute('height', '50vh');
    localView.setAttribute('width', '50vw');
    localView.setAttribute('object-fit', 'cover');
  }
  var localView = document.getElementsByClassName('class'); // I've tried with class name
  for (var i = 0; i < localView.length; i++) {
    localView[i].setAttribute('position', 'relative');
    localView[i].setAttribute('height', '50vh');
    localView[i].setAttribute('width', '50vw');
    localView[i].setAttribute('object-fit', 'cover');
  }
  var localView = document.getElementsByTagName('video'); // I've tried with tag name
  for (var i = 0; i < localView.length; i++) {
    localView[i].setAttribute('position', 'relative');
    localView[i].setAttribute('height', '50vh');
    localView[i].setAttribute('width', '50vw');
    localView[i].setAttribute('object-fit', 'cover');
  }
}
Am I missing anything here? Please help.
Is it possible that my code is not completing by the time it is called again, or that it is somehow skipped? Example:
const initSelfStream = (id) => {
  (async () => {
    try {
      await navigator.mediaDevices.getUserMedia({
        //video: { facingMode: selectedCamera },
        video: video_constraints,
        audio: true
      }).then(stream => {
        const video = document.createElement("video");
        loadAndShowVideoView(video, stream, localId);
      });
    } catch (err) {
      console.log('(async () =>: ' + err);
    }
  })();
};
initSelfStream(id);
// Some Code
pc.ontrack = (e) => {
  const video = document.createElement("video");
  loadAndShowVideoView(video, e.streams[0], remoteId);
};
const loadAndShowVideoView = (video, stream, id) => {
  if (totalUsers.length == 1) {
    console.log('Currently ' + totalUsers.length + ' User Are Connected.');
    video.classList.add('video-inset', 'background-black');
    video.setAttribute("id", id);
    video.style.position = "absolute";
    video.srcObject = stream;
    video.style.height = 100 + "vh";
    video.style.width = 100 + "vw";
    video.muted = true;
    video.style.objectFit = "cover";
    appendVideo(video, stream);
  } else if (totalUsers.length == 2) {
    // I add a new video with the desirable height & width
    // Then call and run the original function at the top with the element number & condition
    layout('8 videos', "size should be 25vh, 25vw");
  }
};
const appendVideo = (video, stream) => {
  container.append(video);
  video.play();
};
The HTML is very simple. One div in the body that contains the videos:
<body>
  <div class="views-container background-black" id="container"></div>
</body>
Try combining all the style attributes. Instead of:
video.style.position = "absolute"
video.style.height = 100+"vh"
video.style.width = 100+"vw"
video.style.objectFit = "cover"
You can:
video.setAttribute('style', 'position: absolute; height: 100vh; width: 100vw; object-fit: cover;');
And since you already use CSS classes, you don't have to repeat the styles in your conditions; just:
video { position: absolute; top: 0; left: 0; object-fit: cover; }
Now you can change just the width and height with setAttribute.
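If the sizes should depend on how many videos are on screen, another option is to compute the tile size once and apply it via `style` to every video. A sketch with made-up breakpoints; `sizeFor` is my name, not from the answer:

```javascript
// Pick a per-tile size based on how many videos are on screen.
// (sizeFor and its breakpoints are illustrative, not from the answer.)
function sizeFor(count) {
  if (count <= 1) return { height: "100vh", width: "100vw" };
  if (count <= 4) return { height: "50vh", width: "50vw" };
  return { height: "25vh", width: "25vw" };
}

// In the browser, apply it to all videos whenever one is added or removed:
// function layout() {
//   const videos = document.getElementsByTagName("video");
//   const { height, width } = sizeFor(videos.length);
//   for (const v of videos) {
//     v.style.height = height;
//     v.style.width = width;
//   }
// }
```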

iOS13 getUserMedia not working on chrome and edge

My friend and I are building an app that requires camera access, and we're having some issues getting the camera working on iOS (we're using iOS 13):
Safari freezes right after getting the camera content; Chrome and Edge don't acquire camera access at all. Our code is as follows:
let windowWidth = window.innerWidth;
let windowHeight = window.innerHeight;

function isMobile() {
  const isAndroid = /Android/i.test(navigator.userAgent);
  const isiOS = /iPhone|iPad|iPod/i.test(navigator.userAgent);
  return isAndroid || isiOS;
}

async function setupCamera() {
  video = document.getElementById('video');
  console.log("a");
  video.setAttribute('autoplay', '');
  video.setAttribute('muted', '');
  video.setAttribute('playsinline', '');
  const stream = await navigator.mediaDevices.getUserMedia({
    'audio': false,
    'video': {
      facingMode: 'user',
      width: mobile ? undefined : windowWidth,
      height: mobile ? undefined : windowHeight
    },
  });
  console.log("b");
  video.srcObject = stream;
  return new Promise((resolve) => {
    video.onloadedmetadata = () => {
      resolve(video);
    };
  });
}
According to the console, 'a' always gets printed but never 'b'. Any clue on what's wrong will be greatly appreciated!
Update - 19/11/2020
WKWebView can use getUserMedia in iOS 14.3 beta 1.
https://bugs.webkit.org/show_bug.cgi?id=208667
https://bugs.chromium.org/p/chromium/issues/detail?id=752458
Browser compatibility
I've been following this problem for years via other posts (e.g. NotReadableError: Could not start source). As of this date there is no access to getUserMedia outside of the Safari standalone view (web app) or the iOS Safari app.
Any browser on iOS other than Safari does not have getUserMedia access. This is because under the hood they are using WKWebView.
A bug ticket has been filed specifically for WKWebView. No support. https://bugs.webkit.org/show_bug.cgi?id=208667
Updates to standalone mode gaining getUserMedia access in iOS 13.4 https://bugs.webkit.org/show_bug.cgi?id=185448#c6
Safari freezing
You are passing in constraints which are invalid (e.g. the window width and height). You need to use standard camera resolutions, e.g. 640x480, 1280x720, etc. When you use an atypical resolution, the WebRTC spec states the browser will try to emulate your desired feed; however, this often results in the camera freezing or looking warped.
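One way to stay on a standard resolution is to snap the window size to the closest common capture size before building the constraints. A sketch; the size list and the `nearestStandard` name are mine, not from the answer:

```javascript
// Common capture sizes; pick the one whose width is closest to the target.
// (STANDARD_SIZES and nearestStandard are illustrative, not from the answer.)
const STANDARD_SIZES = [
  { width: 320, height: 240 },
  { width: 640, height: 480 },
  { width: 1280, height: 720 },
  { width: 1920, height: 1080 }
];

function nearestStandard(targetWidth) {
  return STANDARD_SIZES.reduce((best, s) =>
    Math.abs(s.width - targetWidth) < Math.abs(best.width - targetWidth) ? s : best
  );
}

// e.g. pass nearestStandard(window.innerWidth) into the video constraints
// instead of the raw window dimensions.
```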
If you are trying to capture the front camera and fullscreen it you can look at: getUserMedia (Selfie) Full Screen on Mobile
There may also be 1 or 2 bugs in the use of async/await. Below is a demo which should work (within Stack Overflow it will error due to security permissions, but it should work locally or hosted). Let me know if I can help further.
function isMobile() {
  const isAndroid = /Android/i.test(navigator.userAgent);
  const isiOS = /iPhone|iPad|iPod/i.test(navigator.userAgent);
  return isAndroid || isiOS;
}

async function setupCamera() {
  const isPortrait = true; // do logic here
  let video = document.getElementById('video');
  console.log("Getting video");
  video.setAttribute('autoplay', '');
  video.setAttribute('muted', '');
  video.setAttribute('playsinline', '');
  console.log("Calling getUserMedia");
  return new Promise((resolve) => {
    (async () => {
      await navigator.mediaDevices.getUserMedia({
        'audio': false,
        'video': {
          facingMode: 'user',
          width: isPortrait ? 480 : 640,
          height: isPortrait ? 640 : 480,
        },
      })
      .then((stream) => {
        console.log("Got getUserMedia stream");
        video.srcObject = stream;
        video.play();
        resolve(true);
      })
      .catch((err) => {
        console.log("Encountered getUserMedia error", err);
        resolve(false);
      });
    })();
  });
}

(async () => {
  const ret = await setupCamera();
  console.log(`Initialised camera: ${ret}`);
})();
html,
body {
  height: 100%;
  margin: 0;
}
div {
  position: relative;
  min-height: 100%;
  min-width: 100%;
  overflow: hidden;
  object-fit: cover;
}
video {
  width: 480px;
  height: 640px;
  background-color: black;
}
<div><video id="video"></video></div>

Get device labels onpageshow

I'm coding a web app where the user can use their camera (and choose which one to use). The problem is that I want the user to be able to choose the camera before it is enabled. In the current code, when the user opens the page, they see an empty list of cameras, and when they enable the camera stream, the dropdown list populates with camera names. I want the dropdown list to populate as soon as they open the web page.
P.S. When I stop() the camera, it disables the camera and leaves just a black screen. Why is it black instead of the background colour?
CameraStreamView.cshtml
@using Api.Models
@{
    ViewBag.Title = "Smart Vision";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <link rel="stylesheet" href="~/Content/Contact-Form-Clean.css">
</head>
<body onpageshow="Init()">
  <div id="container">
    <video id="video" style="display: block; margin: 0 auto; margin-top: 30px;" width="300" height="400" autoplay></video>
    <button id="enableStream" style="display: block; margin: 0 auto; margin-top: 20px; height: 70px; width: 200px" onclick="start()">Enable camera</button>
    <button id="disableStream" style="display: block; margin: 0 auto; margin-top: 20px; height: 70px; width: 200px" onclick="stop()">Disable camera</button>
    <label for="videoSource">Video source: </label><select id="videoSource"></select>
  </div>
  <script src="~/Scripts/GetCameraFeed.js"></script>
</body>
</html>
GetCameraFeed.js
const videoSelect = document.querySelector('select#videoSource');
const selectors = [videoSelect];

function gotDevices(deviceInfos) {
  // Handles being called several times to update labels. Preserve values.
  const values = selectors.map(select => select.value);
  selectors.forEach(select => {
    while (select.firstChild) {
      select.removeChild(select.firstChild);
    }
  });
  for (let i = 0; i !== deviceInfos.length; ++i) {
    const deviceInfo = deviceInfos[i];
    const option = document.createElement('option');
    option.value = deviceInfo.deviceId;
    if (deviceInfo.kind === 'videoinput') {
      option.text = deviceInfo.label;
      videoSelect.appendChild(option);
    }
  }
  selectors.forEach((select, selectorIndex) => {
    if (Array.prototype.slice.call(select.childNodes).some(n => n.value === values[selectorIndex])) {
      select.value = values[selectorIndex];
    }
  });
}

function Init() {
  navigator.mediaDevices.enumerateDevices().then(gotDevices).catch(handleError);
}

function gotStream(stream) {
  window.stream = stream; // make stream available to console
  video.srcObject = stream;
  // Refresh button list in case labels have become available
  return navigator.mediaDevices.enumerateDevices();
}

function handleError(error) {
  console.log('navigator.getUserMedia error: ', error);
}

function start() {
  const videoSource = videoSelect.value;
  const constraints = {
    video: { deviceId: videoSource ? { exact: videoSource } : undefined }
  };
  navigator.mediaDevices.getUserMedia(constraints).then(gotStream).then(gotDevices).catch(handleError);
}

function stop() {
  video.pause();
  video.src = "";
  stream.getTracks().forEach(track => track.stop());
  console.log("Stopping stream");
}
What you want is explicitly disallowed, due to fingerprinting concerns. Details about a user's setup let web sites identify them uniquely on the web, a privacy concern.
Once users trust your site with their camera and microphone, this information is considered relevant to share.
The working group determined this to be a reasonable trade-off, for several reasons:
1. Most desktop users have only one camera or none.
2. Most phone users have two, but you can use the facingMode constraint to pick.
3. Given 1 and 2, an up-front choice is arguably an inferior user experience for most.
I would consider changing your code to ask for the default camera the first time, and give users a choice to change it after the fact, should they need to. It's what most WebRTC sites do.
Note that this should only be a problem the first time a user visits your site. Provided they've granted camera or microphone just once in the past, you should be able to see the labels, at least in Chrome.
Unlike Chrome, Firefox does not persist permission implicitly, so you'd need a little more work to get labels on page-load on repeat visits:
enumerateDevices returns a deviceId for each device, which is persisted for your site provided the user has granted (or will grant within this session) camera or microphone at least once. You can use cookies or local storage to correlate deviceIds to device labels. This also survives people manually revoking permission in Chrome.
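The deviceId-to-label bookkeeping described above could be sketched like this, with a plain object standing in for localStorage. The helper names and the `labelsByDeviceId` storage key are mine:

```javascript
// Remember labels once permission has made them visible, so they can be
// shown on a later visit before permission is re-granted.
// (Helper names and the labelsByDeviceId key are illustrative.)
function rememberLabels(deviceInfos, store) {
  const known = JSON.parse(store.labelsByDeviceId || "{}");
  for (const d of deviceInfos) {
    if (d.kind === "videoinput" && d.label) known[d.deviceId] = d.label;
  }
  store.labelsByDeviceId = JSON.stringify(known);
  return known;
}

function labelFor(deviceInfo, store) {
  if (deviceInfo.label) return deviceInfo.label;
  const known = JSON.parse(store.labelsByDeviceId || "{}");
  return known[deviceInfo.deviceId] || "Camera";
}

// In the browser, pass window.localStorage as `store`.
```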

Dynamic timed text track for captions on canvas video

I have researched and experimented my brains out on this one. I have built a working application that uses the HTML5 canvas to play videos and interact with users through a web page (a game, sort of). Works like a charm, but I want to add captions to the video to make it more accessible. I don't even seem to be able to get the WebVTT file to load and I've reached the stage of voodoo programming by trying examples from the web and am making no progress. I have tried to distill the application down to its bare minimum here in hopes someone can provide some insight into what I'm doing wrong.
The video plays when I click on the canvas, the "...waiting for cuechange event..." message stays up while it plays, and then it goes to the "video complete ... reload to try again" message when done (I kept it simple by requiring a page reload to retry the test), so the basic mechanics seem to be working. It never gets into the "load" function for the text track (never displays "... loaded text track ..."), and the updateSubtitle() function is never invoked. If I check trackElement.readyState, it always returns 0 (unsurprisingly). As a further note, if I add a testStatus.innerHTML = "...loaded metadata..."; statement in the "loadedmetadata" listener, it does get there.
var canvasContext, videoElement, intervalHandle, testStatus, trackElement;

function processFrame() {
  canvasContext.drawImage(videoElement, 193, 50, 256, 194);
}

function videoEnded() {
  clearInterval(intervalHandle);
  testStatus.innerHTML = "video complete ... reload to try again";
}

var count = 0;
function updateSubtitle(event) {
  count = count + 1;
  testStatus.innerHTML = "count" + count;
}

function clickHandler(event) {
  testStatus.innerHTML = "...waiting for cuechange event...";
  videoElement.setAttribute("src", "start.ogg");
  videoElement.load();
  intervalHandle = setInterval(processFrame, 25);
  videoElement.play();
}

function init() {
  var canvasElement, textTrack;
  canvasElement = document.createElement("canvas");
  videoElement = document.createElement("video");
  videoElement.addEventListener("loadedmetadata", function() {
    trackElement = document.createElement("track");
    trackElement.kind = "captions";
    trackElement.label = "English";
    trackElement.srclang = "en";
    trackElement.src = "start_en.vtt";
    trackElement.addEventListener("load", function() {
      testStatus.innerHTML = "... loaded text track ...";
      trackElement.mode = "showing";
    });
    videoElement.appendChild(trackElement);
    trackElement.addEventListener("cuechange", updateSubtitle, false);
  });
  var mainDiv = document.getElementById("arghMainDiv");
  canvasElement.setAttribute("id", "mediaScreen");
  canvasElement.width = 640;
  canvasElement.height = 480;
  var firstChild;
  if (mainDiv.hasChildNodes()) {
    firstChild = mainDiv.firstChild;
    mainDiv.insertBefore(canvasElement, firstChild);
  } else {
    firstChild = mainDiv.appendChild(canvasElement);
  }
  testStatus = document.createElement("p");
  testStatus.setAttribute("id", "testStatus");
  mainDiv.insertBefore(testStatus, firstChild);
  testStatus.innerHTML = "click on canvas to test";
  canvasContext = canvasElement.getContext('2d');
  canvasElement.addEventListener("click", clickHandler);
  videoElement.addEventListener("ended", videoEnded);
}
#fatalError {
  color: red;
}
#arghMainDiv {
  text-align: center;
}
#mediaScreen {
  border: 5px solid #303030;
  margin-left: auto;
  margin-right: auto;
  display: block;
  cursor: default;
}
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Problems With TextTrack Object</title>
  <link rel="stylesheet" type="text/css" href="subargh.css">
  <script src="subargh.js"></script>
</head>
<body onload="init()">
  <div id="arghMainDiv">
  </div>
</body>
</html>
If you want the video and WebVTT files I'm using (just a few seconds long), they are here and here (respectively). I should also mention that if I play the video in VLC, it recognizes and plays the .vtt file as subtitles properly on the video (so the file appears to be well-formed). I am running my tests on Firefox 57.0.4 (64-bit) on a Windows 7 system, if that makes any difference (though I am under the impression that Firefox is mostly fixed where timed text tracks are concerned now).

quaggaJS: how to "pause" the decoder

I'm using quaggaJS (https://github.com/serratus/quaggaJS) to read barcodes in a web application. It works very well, and this is my initialization code:
Quagga.init({
  inputStream: {
    name: "Live",
    type: "LiveStream",
    constraints: {
      width: 1280,
      height: 720,
      facingMode: "environment"
    }
  },
  decoder: {
    readers: ["code_39_reader"],
    debug: {
      drawBoundingBox: true,
      showFrequency: false,
      drawScanline: true,
      showPattern: true
    },
    multiple: false
  },
  locator: {
    halfSample: true,
    patchSize: "medium"
  }
}, function (err) {
  if (err) {
    alert(err);
    return;
  }
  Quagga.registerResultCollector(resultCollector);
  Quagga.start();
});
Here is the handling of the onDetected event:
Quagga.onDetected(function (data) {
  Quagga.stop(); // <-- I want to "pause"
  // dialog to ask the user to accept or reject
});
What does that mean?
When it recognizes a barcode, I need to ask the user whether to accept or reject the decoded value, and possibly repeat the process.
It would be nice to leave the captured image up so they can actually see what they just captured.
Quagga.stop() works in most cases, but it's not reliable because sometimes the canvas turns black. I guess this is due to the behavior of the stop() method:
the decoder does not process any more images. Additionally, if a camera-stream was requested upon initialization, this operation also disconnects the camera.
For this reason I'm looking for a way to pause the decoding, so the last frame is still there and the camera has not disconnected yet.
Any suggestion how to achieve this?
A better way would be to first pause the video and then call Quagga.stop(), so that the video element Quagga creates for you is paused and you don't see the blacked-out image. Additionally, you can have a restart/re-scan button to resume, or rather restart, the scanning process.
To get the video element you can do something like below:
Quagga.onDetected(function (result) {
  var cameraFeed = document.getElementById("cameraFeedContainer");
  cameraFeed.getElementsByTagName("video")[0].pause();
  return;
});
You can get the image frame via Quagga.canvas.dom.image.
With that, you can overlay the videostream.
HTML
<div class="scanArea">
  <div id="interactive" class="viewport"></div>
  <div class="scanArea__freezedFrame"></div>
</div>
CSS
.scanArea {
  position: relative;
}
.scanArea__freezedFrame {
  position: absolute;
  left: 0;
  top: 0;
}
JavaScript
Quagga.onDetected(function(result) {
  var canvas = Quagga.canvas.dom.image;
  var $img = $('<img/>');
  $img.attr("src", canvas.toDataURL());
  $('.scanArea__freezedFrame').html($img);
});
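Since onDetected can fire several times before the stream is actually paused, it may help to guard the pause/resume logic with a small state object so that stop and start each run only once per cycle. A sketch; `createScannerState` is my name and not part of Quagga's API:

```javascript
// Track whether the scanner is live so pause/resume each run only once.
// (createScannerState is an illustrative helper, not part of Quagga's API.)
function createScannerState() {
  let live = true;
  return {
    pause(stopFn) { if (live) { live = false; stopFn(); } },
    resume(startFn) { if (!live) { live = true; startFn(); } },
    isLive() { return live; }
  };
}

// In the browser, pause would pause the video and call Quagga.stop(),
// and resume (wired to a re-scan button) would call Quagga.start() and
// clear the frozen frame, e.g. $('.scanArea__freezedFrame').empty().
```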
