I am trying to get into the core of the WebRTC stream and access the raw data that is coming into the client. I am working in React Native and am creating the stream like this:
if (!stream) {
  (async () => {
    const availableDevices = await mediaDevices.enumerateDevices();
    // once we get the stream we can just call .switchCamera() on the track to switch without re-negotiating
    // ref: https://github.com/react-native-webrtc/react-native-webrtc#mediastreamtrackprototype_switchcamera
    const {deviceId: sourceId} = availableDevices.find(
      device => device.kind === 'videoinput' && device.facing === 'front',
    );
    const streamBuffer = await mediaDevices.getUserMedia({
      audio: true,
      video: {
        mandatory: {
          // Provide your own width, height and frame rate here
          minWidth: 500,
          minHeight: 300,
          minFrameRate: 30,
        },
        facingMode: 'user',
        optional: [{sourceId}],
      },
    });
    setStream(streamBuffer);
  })();
}
The streamBuffer that comes back is identified by a URL, for example: 52815B95-4406-493F-8904-0BA74887550C.
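For context, in react-native-webrtc that URL is normally what gets handed to the RTCView component to render the stream natively (a sketch, assuming the streamBuffer object from the snippet above):

import {RTCView} from 'react-native-webrtc';

// Renders the stream via the native video view; the "parsing" of the data
// happens on the native side rather than in JS.
<RTCView streamURL={streamBuffer.toURL()} style={{flex: 1}} />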
I have yet to find a way to actually access the data behind this URL. I know that in ReactJS (in the browser), the URL is handed to a video element, which parses the data and can spit out a JPEG image. I am trying to implement my own version of that parsing, but I can't seem to find a way to even access the bit data coming in through the stream. Thanks in advance for any advice.
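For comparison, in a plain browser environment the usual way to reach the raw pixels is to paint the playing video element onto a canvas and read the image data back; a minimal sketch (the videoEl name is just an illustration, and these DOM APIs are not available as-is in React Native):

// videoEl is a <video> element already playing the MediaStream
const canvas = document.createElement('canvas');
canvas.width = videoEl.videoWidth;
canvas.height = videoEl.videoHeight;
const ctx = canvas.getContext('2d');
ctx.drawImage(videoEl, 0, 0);
// RGBA bytes of the current frame as a Uint8ClampedArray
const {data} = ctx.getImageData(0, 0, canvas.width, canvas.height);
// canvas.toDataURL('image/jpeg') gives the JPEG image mentioned above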
In the context of an Electron app, I already have a main BrowserWindow that runs my application fine.
I want to be able to send data to a new BrowserWindow. I know there are lots of tricks, BUT:
My second BrowserWindow is not created yet, so I can't use the event chain to send data (like in this tutorial: tuto send data between browserWindows).
My data doesn't come from a database, so I can't fetch it in a Vue component lifecycle method like this:
<template>
  <div id="app">
    {{data}}
  </div>
</template>

<script>
export default {
  name: "App",
  data() {
    return {
      data: {}
    }
  },
  beforeMount() {
    this.getName();
  },
  methods: {
    async getName() {
      const res = await fetch('https://api.agify.io/?name=michael');
      const data = await res.json();
      this.data = data;
    }
  }
};
</script>
My data is too long to be passed in the URL like this:
url = params && "id" in params ? "category/edit/" + params["id"] : "category/edit"
this.browserWindow.loadURL(process.env.WEBPACK_DEV_SERVER_URL + url)
My data doesn't come from a form POST, so I can't use the loadURL method of BrowserWindow with the postData property like this:
win.loadURL('http://localhost:8000/post', {
  postData: [{
    type: 'rawData',
    bytes: Buffer.from('hello=world')
  }],
  extraHeaders: 'Content-Type: application/x-www-form-urlencoded'
})
Indeed, I know all those tricks for sending data, but my goal is to 'send' data into a BrowserWindow that I create on the fly at the same time.
Is there a way to do this with Electron's webContents?
I would like to attach the data when I create the BrowserWindow, because I already know which data I want in the window. The creation currently looks like this:
createBrowserWindow()
{
  return new BrowserWindow({
    width: 650, height: 800, show: false, frame: true,
    webPreferences: {
      webSecurity: false,
      plugins: true,
      nodeIntegration: (process.env.ELECTRON_NODE_INTEGRATION),
      enableRemoteModule: true // this is a webPreferences option, so it belongs inside this object
    }
  });
}
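One approach that is sometimes used for this (a sketch, not something from the original post) is to create the window, load its URL, and push the data over IPC once the page has finished loading. The 'init-data' channel name, the route, and the myData variable are just illustrations, and the renderer needs ipcRenderer available (via nodeIntegration or a preload script):

// Main process: create the window, then hand it the data we already have in hand
const win = this.createBrowserWindow();
win.loadURL(process.env.WEBPACK_DEV_SERVER_URL + 'my-route'); // hypothetical route
win.webContents.once('did-finish-load', () => {
  win.webContents.send('init-data', myData);
});
win.once('ready-to-show', () => win.show());

// Renderer (e.g. in a Vue component's created() hook):
// const { ipcRenderer } = require('electron');
// ipcRenderer.on('init-data', (event, data) => { this.data = data; });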
I have a website that is used to take pictures: the user has to take one picture with the main camera and then a second picture (a selfie) with the front camera. All those pictures are saved as blobs in the DB and can be viewed on a separate page.
Issue: sometimes one of the photos is plain black, and it seems that the MediaStreamTrack ends randomly, which causes the image to arrive in the DB as plain black. (This mostly happens with iPhones, but I have seen Windows 10 desktops with the same issue; I log the userAgent and wrote a function that logs events such as 'camera permission requested', 'permission granted', 'stream ended'.)
Is there a way to find out why the onended event was fired?
function startVideo(facingMode = 'environment') {
  if (this.mediaStream && facingMode === 'user') {
    // stop previous stream to start a new one with different camera
    this.mediaStream.getVideoTracks()[0].stop();
  }
  const videoEl = video.current;
  const canvasEl = canvas.current;
  navigator.mediaDevices
    .getUserMedia({
      video: {
        facingMode:
          facingMode === "user" ? { exact: facingMode } : facingMode,
        height: {
          min: 720,
          max: 720
        },
        width: {
          min: 720,
          max: 1280
        },
        advanced: [{ aspectRatio: 1 }]
      },
      audio: false
    })
    .then((stream) => {
      if (this.mediaStream !== stream) this.mediaStream = stream;
      videoEl.srcObject = this.mediaStream;
      videoEl.play();
      this.mediaStream.getVideoTracks()[0].onended = () => {
        console.log('stream ended unexpectedly');
        this.sendUserLog('stream ended');
      };
    })
    .catch((error) => {
      if (error.name === 'OverconstrainedError') {
        this.sendUserLog('camera quality too low')
      } else {
        console.log("An error occurred: " + error);
        this.sendUserLog('permission denied');
      }
    })
}
I also tried to log the onended event, but it only shows the source MediaStream properties and type: 'ended', which I already know since the event fired.
Also, since most of these cases happen on mobile devices, it seems implausible that the camera was disconnected manually.
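The event itself does not carry a reason, so one thing that can help narrow it down (a sketch, not from the original post) is to log additional track state and page-visibility changes alongside the existing user log; mobile Safari in particular commonly mutes or ends camera tracks when the page goes to the background:

const track = this.mediaStream.getVideoTracks()[0];
track.onended = () => {
  // readyState/muted don't explain *why*, but they help correlate with other events
  this.sendUserLog(`stream ended (readyState=${track.readyState}, muted=${track.muted})`);
};
track.onmute = () => this.sendUserLog('track muted');
track.onunmute = () => this.sendUserLog('track unmuted');
document.addEventListener('visibilitychange', () => {
  this.sendUserLog(`visibility changed to ${document.visibilityState}`);
});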
I'm using SIPJS to make calls between 2 callers in the web browser.
Now I want to add a screen-sharing feature. So far I have managed to open the Chrome screen-sharing window, get the stream, and play it in a video element.
But what I really need is to send this stream to the other caller so they can see my screen share.
What I've tried so far:
After I get the screen-sharing stream I pass it to session.sessionDescriptionHandler.peerConnection, and then try to catch the stream (or track) on the other side using these events: onTrackAdded, onaddTrack, onaddStream, onstream.
But none of these events receive anything.
I also tried to send the stream with a video constraint before the call starts:
video: {
  mandatory: {
    chromeMediaSource: 'desktop',
    // chromeMediaSourceId: event.data.sourceId,
    maxWidth: window.screen.width > 1920 ? window.screen.width : 1920,
    maxHeight: window.screen.height > 1080 ? window.screen.height : 1080
  },
  optional: []
}
I even tried to pass the stream itself as the video constraint:
navigator.mediaDevices.getDisplayMedia(constraints)
  .then(function(stream) {
    // We've got the media stream
    console.log("----------then triggered-------------");
    var options = {
      sessionDescriptionHandlerOptions: {
        constraints: {
          audio: true,
          video: stream
        }
      }
    }
    pub_session = userAgent.invite(reciver_name, options);
  })
  .catch(function(error) {
    console.log("----------catch-------------");
    console.log(error);
  });
That also didn't work.
Here is my code.
First, get the screen-sharing stream and send it to the other user:
// Get screen sharing and send it.
navigator.mediaDevices.getDisplayMedia(constraints)
  .then(function(stream) {
    // We've got the media stream
    console.log("----------then triggered-------------");
    var pc = session.sessionDescriptionHandler.peerConnection;
    stream.getTracks().forEach(function(track) {
      pc.addTrack(track, stream);
    });
  })
  .catch(function(error) {
    console.log("----------catch-------------");
    console.log(error);
  });
Then try to catch that stream on the other side:
// Receiving stream or track
userAgent.on('invite', function (session) {
  session.on('trackAdded', function() {
    console.log('-------------trackAdded triggered--------------');
  });
  session.on('addTrack', function (track) {
    console.log('-------------addTrack triggered--------------');
  });
  session.on('addStream', function (stream) {
    console.log('-------------addStream triggered--------------');
  });
  session.on('stream', function (stream) {
    console.log('-------------stream triggered--------------');
  });
});
But I still get nothing from the code above.
So how can I pass that stream or track to the other caller after the call starts?
Thank you so much.
I found the solution thanks to some great gentlemen in the SIPJS groups.
Hope the answer will help someone as it helped me.
var option = { video: { mediaSource: 'screen' }, audio: true };
navigator.mediaDevices.getDisplayMedia(option)
  .then(function(streams) {
    var pc = session.sessionDescriptionHandler.peerConnection;
    var videoTrack = streams.getVideoTracks()[0];
    // find the sender that is currently carrying a video track...
    var sender = pc.getSenders().find(function(s) {
      return s.track.kind == videoTrack.kind;
    });
    console.log('found sender:', sender);
    // ...and swap its track for the screen-share track, without renegotiating the call
    sender.replaceTrack(videoTrack);
  }, function(error) {
    console.log("error ", error);
  });
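For completeness, the receiving side can usually pick the incoming track up directly on the peer connection rather than relying on the SIP.js session events (a sketch, assuming the same session.sessionDescriptionHandler.peerConnection access as above; remoteVideoEl is just an illustrative video element):

var pc = session.sessionDescriptionHandler.peerConnection;
pc.ontrack = function (event) {
  // event.streams[0] is the remote MediaStream now carrying the replaced track
  remoteVideoEl.srcObject = event.streams[0];
  remoteVideoEl.play();
};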
I am intrigued by Gatsby and my initial experiences with it have been very positive.
It's unclear how the static CDN-hosted model would dovetail with push notification functionality, and I would be appreciative of any guidance. Searching the web was to no avail.
I managed to add push notifications, following the Mozilla guide: https://developer.mozilla.org/es/docs/Web/API/ServiceWorkerRegistration/showNotification#Examples
In your gatsby-browser.js file, you can use onServiceWorkerUpdateFound to listen for updates and trigger a push notification; see the code below.
export const onServiceWorkerUpdateFound = () => {
  const showNotification = () => {
    Notification.requestPermission(result => {
      if (result === 'granted') {
        navigator.serviceWorker.ready.then(registration => {
          registration.showNotification('Update', {
            body: 'New content is available!',
            icon: 'link-to-your-icon',
            vibrate: [200, 100, 200, 100, 200, 100, 400],
            tag: 'request',
            actions: [ // you can customize these actions as you like
              {
                action: 'update', // action identifiers are strings; clicks are handled in the service worker
                title: 'update'
              },
              {
                action: 'ignore',
                title: 'ignore'
              }
            ]
          })
        })
      }
    })
  }
  showNotification()
}
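Note that the action buttons are handled in the service worker rather than in gatsby-browser.js; a minimal sketch of a 'notificationclick' handler, assuming the 'update' and 'ignore' action identifiers used above (how this code ends up in the generated service worker depends on your setup):

// In the service worker
self.addEventListener('notificationclick', event => {
  event.notification.close();
  if (event.action === 'update') {
    // e.g. reload any open windows so they pick up the new content
    event.waitUntil(
      self.clients.matchAll({ type: 'window' }).then(clients => {
        clients.forEach(client => client.navigate(client.url));
      })
    );
  }
  // 'ignore' (or a plain click on the notification body) needs no extra handling
});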
Gatsby assumes a "decoupled" architecture. Gatsby wants to handle your frontend and the build process but how/where you store your data is up to you. So push notifications with Gatsby would be handled by a different service. You'd just need to add React code which handles the pushed data and presents it.
How can I merge 2 video streams into one on the client side and send the result through a WebRTC PeerConnection?
For example, I have 2 video streams like this:
navigator.getUserMedia({ video: true }, successCamera, error); // capture camera

function successCamera(streamCamera) {
  vtCamera = streamCamera.getVideoTracks()[0]
  navigator.getUserMedia({ // capture screen
    video: {
      mandatory: {
        chromeMediaSource: 'screen',
        maxWidth: 1280,
        maxHeight: 720
      }
    }
  }, successScreen, error);

  function successScreen(streamScreen) {
    vtScreen = streamScreen.getVideoTracks()[0]
    mergedVideoTracks = vtScreen + vtCamera; // How can I merge the tracks into one??
    finallyStream = streamScreen.clone()
    finallyStream.removeTrack( finallyStream.getVideoTracks()[0] )
    finallyStream.addTrack( mergedVideoTracks )
    finallyStream // I need to send this through a WebRTC PeerConnection
  }
}

function error(error) {
  console.error(error);
}
As you can see, I have vtScreen and vtCamera as MediaStreamTracks. I need to set the screen as the background, with the camera as a small frame in the bottom-right corner, and send it through the WebRTC PeerConnection as one stream.
Yes, I can merge them on a canvas, but I don't know how to send that canvas as a MediaStreamTrack. =(
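For what it's worth, a canvas can be turned into a MediaStreamTrack with HTMLCanvasElement.captureStream(); a minimal sketch of the compositing approach (the sizes and the 30 fps value are illustrative, and screenVideoEl / cameraVideoEl are assumed to be video elements already playing the two streams):

const canvas = document.createElement('canvas');
canvas.width = 1280;
canvas.height = 720;
const ctx = canvas.getContext('2d');

(function draw() {
  // screen as the background, camera as a small frame in the bottom-right corner
  ctx.drawImage(screenVideoEl, 0, 0, canvas.width, canvas.height);
  ctx.drawImage(cameraVideoEl, canvas.width - 320, canvas.height - 180, 320, 180);
  requestAnimationFrame(draw);
})();

const mergedStream = canvas.captureStream(30);        // 30 fps
const mergedTrack = mergedStream.getVideoTracks()[0]; // a real MediaStreamTrack
peerConnection.addTrack(mergedTrack, mergedStream);   // send it like any other track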