RecordRTC: record in mp4 container instead of matroska

I'm using the RecordRTC library to record screen and audio. I need the H.264 codec in an MP4 container, but with my settings I get H.264 in a Matroska container. I'm using the following code:
this.captureUserMedia(screenConstraints, function (screenStream) {
    that.captureUserMedia(audioConstraints, function (audioStream) {
        var arrOfStreams = [screenStream, audioStream];
        var options = {
            type: 'video',
            mimeType: 'video/webm;codecs=h264', // or video/webm;codecs=vp9
            audioBitsPerSecond: 192000,
            recorderType: MultiStreamRecorder,
            video: {
                width: desiredWidth,
                height: desiredWidth / screenAspectRatio
            }
        };
        that.recorder = RecordRTC(arrOfStreams, options);
        that.recorder.startRecording();
        that.btnStopRecording.onclick = function () {
            console.log("recording stopped");
            that.recorder.stopRecording(that.postFiles.bind(that));
        };
    });
});
Is it possible with the RecordRTC library? There is an issue on the project's GitHub, but it recommends the same options I have already used. Is the only way to use ffmpeg to repackage from Matroska to MP4?
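One way to see up front what the browser can actually produce is to probe MediaRecorder.isTypeSupported before choosing a mimeType. A minimal sketch; the candidate list is illustrative, not exhaustive:
// Minimal sketch: log which container/codec combinations this browser's
// MediaRecorder supports. The candidate list is illustrative.
[
  'video/mp4',
  'video/mp4;codecs=h264',
  'video/webm;codecs=h264',
  'video/webm;codecs=vp9',
  'video/webm;codecs=vp8'
].forEach(function (mimeType) {
  console.log(mimeType, MediaRecorder.isTypeSupported(mimeType));
});
// If every video/mp4 entry logs false, the browser cannot record straight
// into an MP4 container, and remuxing (e.g. with ffmpeg) is the fallback.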

Related

Web audio echoCancellation with multiple sources does not work

I have an application that plays multiple web audio sources concurrently and allows the user to record audio at the same time. It works fine if the physical input (e.g. a webcam mic) cannot detect the physical output (e.g. headphones). But if the output can bleed into the input (e.g. laptop speakers with a webcam mic), the recording picks up the other audio sources.
My understanding is that the echoCancellation constraint is supposed to address this problem, but it doesn't seem to work when multiple sources are involved.
I've included a simple example to reproduce the issue. JSFiddle seems to be too strictly sandboxed to allow user media, otherwise I'd dump it there.
Steps to reproduce
Press record
Make a noise, or just observe. The "metronome" should beep 5 times
After 2 seconds, the <audio> element source will be set to the recorded audio data
Play the <audio> element - you will hear the "metronome" beep. Ideally, the metronome beep would be "cancelled" via the echoCancellation constraint which is set on the MediaStream, but it doesn't work this way.
index.html
<!DOCTYPE html>
<html lang="en">
  <body>
    <button onclick="init()">record</button>
    <audio id="audio" controls="true"></audio>
    <script src="demo.js"></script>
  </body>
</html>
demo.js
let audioContext
let stream

async function init() {
  audioContext = new AudioContext()
  stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: true,
    },
    video: false,
  })
  playMetronome()
  record()
}

function playMetronome(i = 0) {
  if (i > 4) {
    return
  }
  const osc = new OscillatorNode(audioContext, {
    frequency: 440,
    type: 'sine',
  })
  osc.connect(audioContext.destination)
  osc.start()
  osc.stop(audioContext.currentTime + 0.1)
  setTimeout(() => {
    playMetronome(i + 1)
  }, 500)
}

function record() {
  const recorder = new MediaRecorder(stream)
  const data = []
  recorder.addEventListener('dataavailable', (e) => {
    console.log({ event: 'dataavailable', e })
    data.push(e.data)
  })
  recorder.addEventListener('stop', (e) => {
    console.log({ event: 'stop', e })
    const blob = new Blob(data, { type: 'audio/ogg; codecs=opus' })
    const audioURL = window.URL.createObjectURL(blob)
    document.getElementById('audio').src = audioURL
  })
  recorder.start()
  setTimeout(() => {
    recorder.stop()
  }, 2000)
}
Unfortunately this is a long-standing issue in Chrome (and all its derivatives). It should work in Firefox and Safari.
Here is the ticket: https://bugs.chromium.org/p/chromium/issues/detail?id=687574.
It basically says that echo cancellation only works for audio coming from a peer connection. As soon as the audio is generated or processed locally by the Web Audio API, it is no longer fed into the echo canceller's reference signal, so it cannot be removed from the recording.
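A commonly suggested workaround is to route the locally generated audio through a loopback RTCPeerConnection, so the browser treats it as peer-connection audio and includes it in the echo canceller's reference. A minimal sketch, assuming the Web Audio graph is connected to a MediaStreamAudioDestinationNode (createLoopbackStream is a hypothetical helper, not part of the example above):
// Minimal sketch: loop local Web Audio output through a pair of
// RTCPeerConnections so Chrome's echo canceller sees it as remote audio.
// "webAudioStream" should come from audioContext.createMediaStreamDestination().
async function createLoopbackStream(webAudioStream) {
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
  pc1.onicecandidate = (e) => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = (e) => e.candidate && pc1.addIceCandidate(e.candidate);

  const loopbackStream = new MediaStream();
  pc2.ontrack = (e) => loopbackStream.addTrack(e.track);

  webAudioStream.getAudioTracks().forEach((track) => pc1.addTrack(track, webAudioStream));

  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription);

  // Play this stream through an <audio> element instead of connecting
  // the oscillator straight to audioContext.destination.
  return loopbackStream;
}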

WebRTC recording tab screen with tab audio not working

I used WebRTC, Node.js, and React to build a fully functional video conferencing app that supports up to 4 users and uses a mesh architecture. After that, I wanted to add a record-meeting feature, so I added it. However, it only records my own audio from my microphone; remote stream audio is not recorded by the MediaRecorder. Why is that?
Here is a simple code snippet that shows how I get my tab screen stream:
const toBeRecordedStream = await navigator.mediaDevices.getDisplayMedia({
  video: {
    width: 1920,
    height: 1080,
    frameRate: {
      max: 30,
      ideal: 24,
    },
  },
  audio: true,
});
After receiving the tab stream, I used an AudioContext to combine the tab audio with my microphone audio and record the mix.
const vp9Codec = "video/webm;codecs=vp9,opus";
const vp9Options = {
  mimeType: vp9Codec,
};
const audioCtx = new AudioContext();
const outputStream = new MediaStream();
const micStream = audioCtx.createMediaStreamSource(localStream);
const screenAudio = audioCtx.createMediaStreamSource(screenStream);
const destination = audioCtx.createMediaStreamDestination();

screenAudio.connect(destination);
micStream.connect(destination);

outputStream.addTrack(screenStream.getVideoTracks()[0]);
outputStream.addTrack(destination.stream.getAudioTracks()[0]);

if (MediaRecorder.isTypeSupported(vp9Codec)) {
  mediaRecorder = new MediaRecorder(outputStream, vp9Options);
} else {
  mediaRecorder = new MediaRecorder(outputStream);
}
mediaRecorder.ondataavailable = handelDataAvailable;
mediaRecorder.start();
Four video and audio streams are visible on the screen, but only my own voice and the tab's video are recorded.
I am working with the Chrome browser because I am aware that Firefox does not support tab audio capture, while Chrome and Edge do.
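One thing worth checking: Chrome only captures tab audio if the "Share tab audio" checkbox was ticked in the picker. Independent of that, a more robust approach is to mix each remote participant's audio into the recording destination directly. A minimal sketch, where remoteStreams is a hypothetical array of the MediaStreams received from the peer connections:
// Minimal sketch: route every remote participant's audio into the same
// MediaStreamAudioDestinationNode used for the recording.
// "remoteStreams" is a hypothetical array of MediaStreams from the peers.
remoteStreams.forEach(function (remoteStream) {
  if (remoteStream.getAudioTracks().length > 0) {
    const remoteSource = audioCtx.createMediaStreamSource(remoteStream);
    remoteSource.connect(destination); // same destination as mic + tab audio
  }
});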

HTML5 video tag not working in iOS browsers (iPhone)

Context
I am trying to play a video in the Edge browser.
The project framework is:
.NET 6 MVC with Vue for the client side
.NET 6 WebApi for the server side
The client's Vue component sends requests with a Range header (1 MB per request) for a fragmented MP4, and uses Media Source Extensions (MSE) to append each ArrayBuffer through the blob URL that video.src points to,
like this:
var mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', sourceOpen);
video.src = URL.createObjectURL(mediaSource);
This works fine in Edge on Windows, BUT it didn't work on an iPhone (tested in the Edge browser on an iPhone SE):
the video tag didn't work, it only showed a blank page.
(screenshot from iPhone SE, Edge version 100.0.1185.50)
It works fine in Edge version 100.0.1185.50 on Windows:
(screenshot from Windows 10)
Tried solutions
I tried adding the playsinline attribute to the video tag, along with the other solutions in "HTML5 Video tag not working in Safari, iPhone and iPad", but it still didn't work.
Code snippet
The Vue component's method is as below:
/*
 * Check whether videoId equals course.
 * If not, fetch a new video stream and create a blob URL for this.videoUrl.
 */
async displayVideo() {
  if (this.videoId != this.course) {
    //common.loader.show("#255AC4", 1.5, "Loading...");
    this.videoId = this.course;
    let video = document.querySelector("video");
    let assetURL = FrontEndUrl + `/Stream/${this.videoId}`;
    let mimeCodec = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';
    let sourceBuffer;
    let chunkSize;
    let contentRange;
    let loop;
    let index;
    const token = common.cookieManager.getCookieValue(`signin`);

    if ('MediaSource' in window && MediaSource.isTypeSupported(mimeCodec)) {
      let mediaSource = new MediaSource();
      mediaSource.addEventListener('sourceopen', sourceOpen);
      video.src = URL.createObjectURL(mediaSource);
    } else {
      console.error('Unsupported MIME type or codec: ', mimeCodec);
    }

    function sourceOpen() {
      // chunkSize set to 1MB
      chunkSize = 1024 * 1024 * 1;
      // index starts at 1 because fetchNextSegment starts from the second fragment
      index = 1;
      // Grab the mediaSource and attach the updateend event to the sourceBuffer
      let mediaSource = this;
      sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
      sourceBuffer.addEventListener("updateend", fetchNextSegment);
      fetchArrayBuffer(assetURL, chunkSize * 0, appendBuffer);
      // 📌 This line throws a DOM interaction (autoplay policy) error
      //video.play();
    }

    function fetchNextSegment() {
      if (index > loop) {
        sourceBuffer.removeEventListener("updateend", fetchNextSegment);
        return;
      }
      fetchArrayBuffer(assetURL, chunkSize * index, appendBuffer);
      index++;
    }

    function fetchArrayBuffer(url, range, appendBuffer) {
      fetch(url, {
        headers: {
          "Range": `bytes=${range}-`,
          "Authorization": `Bearer ${token}`,
        }
      })
        .then(response => {
          if (!loop) {
            contentRange = response.headers.get("x-content-range-total");
            loop = Math.floor(contentRange / chunkSize);
          }
          return response.arrayBuffer();
        })
        .then(data => { appendBuffer(data) });
    }

    function appendBuffer(videoChunk) {
      if (videoChunk) {
        sourceBuffer.appendBuffer(videoChunk);
      }
    }

    // 📌 This is the old approach; it could not attach the JWT token
    // this.videoUrl = await this.fetchVideoStream(this.videoId); //FrontEndUrl + `/Stream/${this.videoId}`;
    //common.loader.close();
  }
},
And my Vue template:
<p-card>
  <template #header>
  </template>
  <template #title>
    <h2>Course name: {{courseName}}</h2>
  </template>
  <template #subtitle>
  </template>
  <template #content>
    <video id="video" style="width: 100%; height: auto; overflow: hidden;"
           :key="videoUrl" v-on:loadeddata="setCurrentTime" v-on:playing="playing"
           v-on:pause="pause" controls autoplay playsinline>
    </video>
  </template>
  <template #footer>
    All rights reserved. Unauthorized reproduction is prohibited.
  </template>
</p-card>
I also checked the logs on the machine, but they were normal.
Does anyone know why it doesn't work in Edge on iOS? Thanks.
Summary
After checking Can I use (MSE), I found the reason: iOS on the iPhone does not support MSE (Media Source Extensions), so the MSE code path never runs.
(screenshot from Can I use showing MSE support)
Solution
iOS supports HLS natively (see the Apple dev docs), so you need to convert the MP4 to HLS format (Bento4's HLS tooling can do this).
📌 The source MP4 must be in fragmented MP4 (fMP4) format.
After conversion, the HLS output directory contains the media segments and the playlist files.
The playlists use the .m3u8 filename extension, and master.m3u8 is the document that describes all of the video's information.
Then point the video tag's src attribute at the HLS resource's URL (master.m3u8),
like this sample code:
<video src="https://XXX/master.m3u8">
</video>
You can also use the video.js library; the source object's type is "application/x-mpegURL":
var player = videojs("videoId");

if (this.iOS()) {
  // ⚠ Authentication fails here; the reason is not clear yet, but the
  //    backend authentication was confirmed to be working.
  //const token = common.cookieManager.getCookieValue(`signin`);
  url = FrontEndUrl + `/Media/Video/${this.videoId}/master.m3u8`;
  srcObject = { src: url, type: "application/x-mpegURL" };
}

try {
  player.src(srcObject);
  player.play();
} catch (e) {
  alert(e.message);
  console.log(e);
}
A simple HLS demo on a GitHub page (note that this demo uses hls.js, not video.js):
<iframe width="auto" src="https://s780609.github.io/hlsJsDemo/">
</iframe>
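For reference, the core of an hls.js setup is only a few lines. A minimal sketch; the URL and element id are illustrative, not taken from the demo:
// Minimal hls.js sketch. The URL and element id are illustrative.
const video = document.getElementById('video');
const src = 'https://example.com/master.m3u8';

if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari/iOS play HLS natively, no MSE required.
  video.src = src;
} else if (Hls.isSupported()) {
  // Other browsers go through MSE via hls.js.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
}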
Reference for Bento4
Example command to convert to HLS format:
📌 -f: force overwriting of old output files
     -o: output directory
mp4hls -f -o [output directory] [source mp4 file]

Recording canvas animation playback issue with Chromium browsers

If I use the following code to record a canvas animation:
streamInput = parent.document.getElementById('whiteboard');
stream = streamInput.captureStream();

const recorder = RecordRTC(stream, {
  // audio, video, canvas, gif
  type: 'video',
  mimeType: 'video/webm',
  recorderType: MediaStreamRecorder,
  disableLogs: false,
  timeSlice: 1000,
  ondataavailable: function(blob) {},
  onTimeStamp: function(timestamp) {},
  bitsPerSecond: 3000000,
  frameInterval: 90,
  frameRate: 60,
  bitrate: 3000000,
});
recorder.stopRecording(function() {
  getSeekableBlob(recorder.getBlob(), function(seekableBlob) {
    url = URL.createObjectURL(recorder.getBlob());
    $("#exportedvideo").attr("src", url);
    $("#exportedvideo").attr("controls", true);
    $("#exportedvideo").attr("autoplay", true);
  });
});
The video plays fine and I can seek it in Chrome/Edge/Firefox etc.
When I download the video using the following code:
getSeekableBlob(recorder.getBlob(), function(seekableBlob) {
  var file = new File([seekableBlob], "test.webm", {
    type: 'video/webm'
  });
  invokeSaveAsDialog(file, file.name);
});
The video downloads and plays fine, and the seek bar updates as normal.
If I then move the seek bar to any position, as soon as I move it I get a media player message:
Can't play.
Can't play because the item's file format isn't supported. Check the store to see if this item is available here.
0xc00d3e8c
If I use Firefox and download the file, it plays perfectly and I can seek.
Do I need to do anything else to fix the Chromium WebM?
I've tried using the following code to download the file:
var file = new File([recorder.getBlob()], "test.webm", {
  type: 'video/webm'
});
invokeSaveAsDialog(file, file.name);
However, the file plays and I can move the seek bar, but the video screen is black.
Yet Firefox works fine.
Here are the outputted video files:
The first set was created without ts-ebml intervention:
1: https://lnk-mi.app/uploads/chrome.webm
2: https://lnk-mi.app/uploads/firefox.webm
The second set was created using ts-ebml:
1: https://lnk-mi.app/uploads/chrome-ts-ebm.webm
2: https://lnk-mi.app/uploads/firefox-ts-ebml.webm
Both were created exactly the same way, using ts-ebml.js to write the metadata:
recorder.addEventListener("dataavailable", async(e) => {
try {
const makeMediaRecorderBlobSeekable = await injectMetadata(e.data);
data.push(await new Response(makeMediaRecorderBlobSeekable).arrayBuffer());
blobData = await new Blob(data, { type: supportedType });
} catch (e) {
console.error(e);
console.trace();
}
});
Is there a step I am missing?
Having tried all the plugins like ts-ebml and web-writer, I found the only reliable solution was to upload the video to my server and use ffmpeg with the following command to convert the video to MP4:
ffmpeg -i {$srcFile} -c copy -crf 20 -f mp4 {$destFile}
(Since -c copy stream-copies without re-encoding, the -crf 20 flag has no effect here and can be dropped.)

How to stop Chrome from capturing a tab?

I'm trying to build a Chrome extension similar to the Chromecast one. I am using chrome.tabCapture to successfully start an audio/video stream. How do I stop the screen capture? I want to have a stop button, but I am not sure what to call to stop it. I can stop the LocalMediaStream, but the tab is still capturing and does not allow me to start a new capture without closing the tab. Any suggestions, or maybe an API page I may have missed?
Try stream.getVideoTracks()[0].stop(); to stop the screen capture. And to record the stream captured with the chrome.tabCapture API, you could use RecordRTC.js:
var video_constraints = {
  mandatory: {
    chromeMediaSource: 'tab'
  }
};
var constraints = {
  audio: false,
  video: true,
  videoConstraints: video_constraints
};

chrome.tabCapture.capture(constraints, function(stream) {
  console.log(stream);
  var options = {
    type: 'video',
    mimeType: 'video/webm',
    // minimum time between pushing frames to Whammy (in milliseconds)
    frameInterval: 20,
    video: {
      width: 1280,
      height: 720
    },
    canvas: {
      width: 1280,
      height: 720
    }
  };
  var recordRTC = new RecordRTC(stream, options);
  recordRTC.startRecording();

  setTimeout(function() {
    recordRTC.stopRecording(function(videoURL) {
      stream.getVideoTracks()[0].stop();
      recordRTC.save();
    });
  }, 10 * 1000);
});
I hope the above code snippet would help you :)
Edit 1: corrected the RecordRTC initialization.
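To stop from a button rather than a timeout, stopping every track on the stream ends the capture. A minimal sketch; "btnStop" is a hypothetical element id, and stream must be kept in scope from the capture callback:
// Minimal sketch: end the tab capture from a stop button by stopping all
// tracks. "btnStop" is a hypothetical element id; "stream" must be saved
// from the chrome.tabCapture.capture callback.
document.getElementById('btnStop').onclick = function () {
  stream.getTracks().forEach(function (track) {
    track.stop();
  });
};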
