How to find out the reason for a MediaStreamTrack.onended event - javascript

I have a website that is used to take pictures: the user has to take one picture with the main camera and then a second picture (selfie) with the front camera. The pictures are saved as blobs in a DB and can be viewed on a separate page.
Issue: sometimes one of the photos is plain black, and it seems that the MediaStreamTrack ends randomly, which causes the image to arrive in the DB as plain black. (This mostly happens with iPhones, but I have seen Win10 desktops with the same issue; I log the userAgent and have a function that logs events like 'camera permission requested', 'permission granted' and 'stream ended'.)
Is there a way to find out why the onended event was fired?
function startVideo(facingMode = 'environment') {
  if (this.mediaStream && facingMode === 'user') {
    // stop the previous stream to start a new one with a different camera
    this.mediaStream.getVideoTracks()[0].stop();
  }
  const videoEl = video.current;
  const canvasEl = canvas.current;
  navigator.mediaDevices
    .getUserMedia({
      video: {
        facingMode:
          facingMode === "user" ? { exact: facingMode } : facingMode,
        height: {
          min: 720,
          max: 720
        },
        width: {
          min: 720,
          max: 1280
        },
        advanced: [{ aspectRatio: 1 }]
      },
      audio: false
    })
    .then((stream) => {
      if (this.mediaStream !== stream) this.mediaStream = stream;
      videoEl.srcObject = this.mediaStream;
      videoEl.play();
      this.mediaStream.getVideoTracks()[0].onended = () => {
        console.log('stream ended unexpectedly');
        this.sendUserLog('stream ended');
      };
    })
    .catch((error) => {
      if (error.name === 'OverconstrainedError') {
        this.sendUserLog('camera quality too low');
      } else {
        console.log("An error occurred: " + error);
        this.sendUserLog('permission denied');
      }
    });
}
I also tried to log the onended event object, but it only shows the source MediaStream properties and type: 'ended', which I already know since the event fired.
Also, since most of these cases happen on mobiles, it seems implausible that the camera was disconnected manually.
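The track itself does not seem to carry a reason, so for now I am considering logging more context around the moment it ends. A minimal sketch of the extra listeners (it reuses the same track and the sendUserLog helper from my code above; nothing here is specific to any library, just standard MediaStreamTrack and Page Visibility APIs):

// Sketch: capture extra context when a camera track mutes or ends.
// `track` is assumed to be this.mediaStream.getVideoTracks()[0],
// `sendUserLog` is the same logging helper used in startVideo above.
function attachTrackDiagnostics(track, sendUserLog) {
  track.onmute = () => sendUserLog('track muted (no frames arriving)');
  track.onunmute = () => sendUserLog('track unmuted');
  track.onended = () => {
    sendUserLog(
      'track ended; readyState=' + track.readyState +
      ' visibility=' + document.visibilityState
    );
  };
  // iOS Safari can suspend or stop camera tracks when the tab is backgrounded
  // or another app takes the camera, so log visibility changes as well.
  document.addEventListener('visibilitychange', () => {
    sendUserLog('visibility changed to ' + document.visibilityState);
  });
}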

Related

Is there a way to access a USB camera feed rather than the internal cameras for video capture?

I currently have a form with a file input that accepts video captures. By default it opens the device's internal camera, in this case a tablet's front camera. The problem is that I have an endoscopic industrial camera connected to the tablet's USB port; the camera works fine in third-party apps, but I would like to use it for the video capture input, meaning that I want to use the endoscopic camera to record the video rather than the internal ones. Is this possible with HTML5 capture and JavaScript?
I already tried all variations of getUserMedia(), but they all result in accessing the internal cameras only.
Here is the code that I'm using for testing.
<video width="1280" height="720"></video>
<p onclick="capture()">capture</p>
<script>
  const constraints = {
    audio: false,
    video: {
      deviceId: '3caac644b0fa5838e6e720169a3b3c52e18f449625455e65924eb112a22f8bd9',
      width: { ideal: 1280 },
      height: { ideal: 720 }
    }
  };
  function capture() {
    navigator.mediaDevices.getUserMedia(constraints)
      .then((mediaStream) => {
        const video = document.querySelector('video');
        video.srcObject = mediaStream;
        video.onloadedmetadata = () => {
          video.play();
        };
      })
      .catch((err) => {
        // always check for errors at the end.
        console.error(`${err.name}: ${err.message}`);
      });
  }
</script>
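One thing worth checking (a sketch, not a confirmed fix): enumerate the available video inputs at runtime after a permission grant and pick the USB camera's deviceId instead of hard-coding it. The label match on "usb"/"endoscope" below is an assumption; labels vary by device.

// Sketch: list video inputs and prefer one whose label looks like the USB camera.
async function pickUsbCamera() {
  // Labels are only populated after the user has granted camera permission,
  // so request a throwaway stream first and stop it immediately.
  const tmp = await navigator.mediaDevices.getUserMedia({ video: true });
  tmp.getTracks().forEach((t) => t.stop());

  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((d) => d.kind === 'videoinput');
  cameras.forEach((c) => console.log(c.label, c.deviceId));

  // Heuristic label match; fall back to the first camera if nothing matches.
  const usb = cameras.find((c) => /usb|endoscope/i.test(c.label)) || cameras[0];
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: { exact: usb.deviceId },
      width: { ideal: 1280 },
      height: { ideal: 720 }
    }
  });
}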

How to parse the data from a webRTC stream?

I am trying to get into the core of the WebRTC stream and access the raw data that is coming into the client. I am in react-native and am creating the stream like this:
if (!stream) {
  (async () => {
    const availableDevices = await mediaDevices.enumerateDevices();
    const {deviceId: sourceId} = availableDevices.find(
      // once we get the stream we can just call .switchCamera() on the track to switch without re-negotiating
      // ref: https://github.com/react-native-webrtc/react-native-webrtc#mediastreamtrackprototype_switchcamera
      device => device.kind === 'videoinput' && device.facing === 'front',
    );
    const streamBuffer = await mediaDevices.getUserMedia({
      audio: true,
      video: {
        mandatory: {
          // Provide your own width, height and frame rate here
          minWidth: 500,
          minHeight: 300,
          minFrameRate: 30,
        },
        facingMode: 'user',
        optional: [{sourceId}],
      },
    });
    setStream(streamBuffer);
  })();
}
The streamBuffer that comes back is exposed as a URL, for example: 52815B95-4406-493F-8904-0BA74887550C.
I have yet to find a way to actually access the data behind this URL. I know that in ReactJS you hand the stream to a video element, which decodes the data and can spit out a JPEG image. I am trying to implement my own version of that parsing. However, I can't seem to find a way to even access the bit data that is coming in through the stream. Thanks in advance for any advice.
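For reference, the browser-side approach mentioned above (video element plus canvas) looks roughly like the sketch below. This is for a plain web page, not react-native, and assumes a `stream` obtained from getUserMedia; it shows how the frames can be read back as a JPEG or as raw RGBA bytes.

// Sketch (browser, not react-native): decode frames by drawing the
// video onto a canvas, then read them back as JPEG or raw RGBA bytes.
const video = document.createElement('video');
video.srcObject = stream;                      // `stream` from getUserMedia
video.onloadedmetadata = () => video.play();

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

function grabFrame() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const jpeg = canvas.toDataURL('image/jpeg');                           // one frame as a JPEG data URL
  const rgba = ctx.getImageData(0, 0, canvas.width, canvas.height).data; // raw pixel bytes
  return { jpeg, rgba };
}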

How to choose which camera to activate with getUserMedia() for a barcode scanning function?

I am building a web app and trying to capture barcodes with my camera. I am using the QuaggaJS library.
I know that the functionality works, because when I used my app with the laptop webcam it captured the barcode, although with only about 30% accuracy. When I tried it on my mobile it couldn't capture the barcode at all. I think the reason is that the app picked the ultrawide lens on my device, which distorts the image too much.
From reading the docs I saw this:
To require the rear camera, use:
{ audio: true, video: { facingMode: { exact: "environment" } } }
This picks the rear camera; however, what happens when a user has 5 rear cameras? How do I know which is the right one?
Here is my code:
useEffect(() => {
  Quagga.init(
    {
      inputStream: {
        name: "Live",
        type: "LiveStream",
        target: ".scannerArea", // Or '#yourElement' (optional),
        constraints: {
          facingMode: "environment",
        },
      },
      decoder: {
        readers: ["code_128_reader", "upc_reader", "ean_reader"],
      },
    },
    function (err) {
      if (err) {
        console.log(err);
        return;
      }
      console.log("Initialization finished. Ready to start");
      Quagga.start();
    }
  );
  Quagga.onDetected((result) => {
    let last_code = result.codeResult.code;
    alert(last_code);
    Quagga.stop();
  });
}, []);
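One way I am thinking of narrowing this down (a sketch only; it assumes Quagga forwards inputStream.constraints to getUserMedia, and that device labels are populated after a camera permission grant) is to log the available rear cameras and pass an explicit deviceId instead of facingMode:

// Sketch: log the available cameras, then pick one explicitly by deviceId.
navigator.mediaDevices.enumerateDevices().then((devices) => {
  devices
    .filter((d) => d.kind === "videoinput")
    .forEach((d) => console.log(d.label, d.deviceId));
});
// Then, in Quagga.init, swap facingMode for the chosen camera:
// constraints: { deviceId: { exact: chosenDeviceId } }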

How to send a screen sharing stream via SIPJS to the other caller

I'm using SIPJS to make calls between 2 callers in the web browser.
Now I want to add a screen sharing feature. So far I've managed to open the Chrome screen sharing window, get the stream and play it in a video element.
But what I really need is to send this stream to the other caller so they can see my shared screen.
What I've tried so far:
After I get the screen sharing stream I add it to session.sessionDescriptionHandler.peerConnection, and then try to catch the stream (or track) on the other side using these events: onTrackAdded, onaddTrack, onaddStream, onstream.
But none of these events fire.
I also tried to send the stream with this video constraint before the call starts:
video: {
  mandatory: {
    chromeMediaSource: 'desktop',
    // chromeMediaSourceId: event.data.sourceId,
    maxWidth: window.screen.width > 1920 ? window.screen.width : 1920,
    maxHeight: window.screen.height > 1080 ? window.screen.height : 1080
  },
  optional: []
}
I even tried to pass the stream itself as the video constraint:
navigator.mediaDevices.getDisplayMedia(constraints)
  .then(function(stream) {
    //We've got media stream
    console.log("----------then triggered-------------");
    var options = {
      sessionDescriptionHandlerOptions: {
        constraints: {
          audio: true,
          video: stream
        }
      }
    };
    pub_session = userAgent.invite(reciver_name, options);
  })
  .catch(function(error) {
    console.log("----------catch-------------");
    console.log(error);
  });
That also didn't work.
Here is my code.
First, get the screen sharing stream and send it to the other user:
// Get screen sharing and send it.
navigator.mediaDevices.getDisplayMedia(constraints)
  .then(function(stream) {
    //We've got media stream
    console.log("----------then triggered-------------");
    var pc = session.sessionDescriptionHandler.peerConnection;
    stream.getTracks().forEach(function(track) {
      pc.addTrack(track, stream);
    });
  })
  .catch(function(error) {
    console.log("----------catch-------------");
    console.log(error);
  });
Then catch that stream on the other side:
// Receiving stream or track
userAgent.on('invite', function (session) {
  session.on('trackAdded', function() {
    console.log('-------------trackAdded triggered--------------');
  });
  session.on('addTrack', function (track) {
    console.log('-------------addTrack triggered--------------');
  });
  session.on('addStream', function (stream) {
    console.log('-------------addStream triggered--------------');
  });
  session.on('stream', function (stream) {
    console.log('-------------stream triggered--------------');
  });
});
But I still get nothing from the code above.
So how can I pass that stream or track to the other caller after the call starts?
Thank you so much.
I found the solution thanks to some great gentlemen in the SIPJS groups.
Hope the answer helps someone as it helped me:
var option = { video: { mediaSource: 'screen' }, audio: true };
navigator.mediaDevices.getDisplayMedia(option)
  .then(function(streams) {
    var pc = session.sessionDescriptionHandler.peerConnection;
    var videoTrack = streams.getVideoTracks()[0];
    var sender = pc.getSenders().find(function(s) {
      return s.track.kind == videoTrack.kind;
    });
    console.log('found sender:', sender);
    sender.replaceTrack(videoTrack);
  }, function(error) {
    console.log("error ", error);
  });
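For context: replaceTrack swaps the outgoing camera track for the screen track on the existing RTCRtpSender, so it usually works without SDP renegotiation, and the remote side keeps receiving on the same track it was already rendering. If the remote media still needs to be attached manually, the underlying RTCPeerConnection exposes the standard 'track' event; a sketch (it assumes you can reach the remote session's peerConnection, and '#remoteVideo' is a placeholder element id):

// Sketch: attach remote media on the receiving side via the standard
// RTCPeerConnection 'track' event.
var remotePc = session.sessionDescriptionHandler.peerConnection;
remotePc.ontrack = function (event) {
  var remoteVideo = document.querySelector('#remoteVideo'); // placeholder element
  remoteVideo.srcObject = event.streams[0];
  remoteVideo.play();
};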

WebRTC merge video MediaStreamTracks into one on client side

How can I merge 2 video streams into one on the client side and send it through a WebRTC PeerConnection?
For example, I have 2 video streams like this:
navigator.getUserMedia({ video: true }, successCamera, error); // capture camera

function successCamera(streamCamera) {
  vtCamera = streamCamera.getVideoTracks()[0];
  navigator.getUserMedia({ // capture screen
    video: {
      mandatory: {
        chromeMediaSource: 'screen',
        maxWidth: 1280,
        maxHeight: 720
      }
    }
  }, successScreen, error);

  function successScreen(streamScreen) {
    vtScreen = streamScreen.getVideoTracks()[0];
    mergedVideoTracks = vtScreen + vtCamera; // How can I merge the tracks into one??
    finallyStream = streamScreen.clone();
    finallyStream.removeTrack(finallyStream.getVideoTracks()[0]);
    finallyStream.addTrack(mergedVideoTracks);
    finallyStream; // I need to send this through the WebRTC PeerConnection
  }
}

function error(error) {
  console.error(error);
}
As you can see, I have vtScreen and vtCamera as MediaStreamTracks. I need to set the screen as the background and the camera as a small frame in the bottom right corner, and send it through a WebRTC PeerConnection as one stream.
Yes, I can merge them on a canvas, but I don't know how I can send this canvas as a MediaStreamTrack. =(
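For what it's worth, a canvas can itself be turned into a MediaStream: HTMLCanvasElement.captureStream() returns a stream whose video track can be added to the peer connection. A rough sketch (the element names and draw loop are illustrative; screenVideo and cameraVideo are assumed to be video elements already playing streamScreen and streamCamera):

// Sketch: composite screen + camera on a canvas and stream the canvas.
const canvas = document.createElement('canvas');
canvas.width = 1280;
canvas.height = 720;
const ctx = canvas.getContext('2d');

function draw() {
  ctx.drawImage(screenVideo, 0, 0, 1280, 720);                  // screen as background
  ctx.drawImage(cameraVideo, 1280 - 320, 720 - 180, 320, 180);  // camera in the bottom right
  requestAnimationFrame(draw);
}
draw();

const mergedStream = canvas.captureStream(30); // 30 fps MediaStream from the canvas
// mergedStream.getVideoTracks()[0] can then be sent, e.g.:
// peerConnection.addTrack(mergedStream.getVideoTracks()[0], mergedStream);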
