I'm trying to stream a video directly to a browser using Node.js as the backend. I would like the video to start streaming from a specific timestamp, and since it is a pretty large file I would also like it to be streamed partially rather than loaded in full. Right now I am doing this with fluent-ffmpeg, like this:
const express = require('express');
const ffmpeg = require('fluent-ffmpeg');

const app = express();

app.get('/clock/', (req, res) => {
  const videoPath = 'video.mp4';
  const now = new Date();
  ffmpeg(videoPath)
    .videoCodec('libx264')
    .withAudioCodec('aac')
    .setStartTime(`${(now.getHours() - 10) % 24}:${now.getMinutes() - 1}:${now.getSeconds()}`)
    .format('mp4')
    .outputOptions(['-frag_duration 100', '-movflags frag_keyframe+faststart', '-pix_fmt yuv420p'])
    .on('end', () => {
      console.log('File has been converted successfully');
    })
    .on('error', (err) => {
      if (err.message.toLowerCase().includes('output stream closed')) return;
      console.log('An error occurred', err);
    })
    .pipe(res, { end: true });
});
This will work with Chrome, but Safari just doesn't want to stream it.
I know the reason it doesn't work in Safari is that Safari requires the Range header. I've therefore tried to support that, but:
I can't get it to work with fluent-ffmpeg.
When I try to do it the "normal" way, without fluent-ffmpeg (a sketch of what I mean is below), the whole video file has to load before it plays.
The video doesn't need to start at the specific timestamp. It would be nice, though, but I have a workaround for that if it's not possible :)
So my question is: how can I get the code above to work with Safari? And if that is impossible: how can I build something that doesn't need to be fully loaded before it can play in Safari browsers, i.e. partial video streaming?
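For reference, this is roughly what I mean by the "normal" way: a minimal sketch assuming an Express route and a static video.mp4 on disk (no transcoding, so no start-time offset), serving byte ranges with a 206 response the way Safari expects:

const fs = require('fs');

app.get('/video', (req, res) => {
  const videoPath = 'video.mp4';
  const { size } = fs.statSync(videoPath);
  const range = req.headers.range;

  if (!range) {
    // No Range header: send the whole file.
    res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
    fs.createReadStream(videoPath).pipe(res);
    return;
  }

  // Parse "bytes=start-end"; the end is optional.
  const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4'
  });
  fs.createReadStream(videoPath, { start, end }).pipe(res);
});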
Related
I have implemented WebRTC in my Angular project to record video, and after saving it we can send it as an attachment. This works properly on Windows, but in Safari on macOS the video is sped up and a 30-second video becomes only about 3 seconds. This occurs only in Safari.
Here is how I start the video:
mediaDevices.getUserMedia({ video: true, audio: true })
  .then(webcamStream => {
    this.webcamStream = webcamStream;
  });
The MediaRecorder code:
this.recorder = new MediaRecorder(this.webcamStream, { mimeType: 'video/mp4' });

this.recorder.onstart = () =>
  this.zone.run(() => {
    this.behaviorService.isRecording(true);
  });

this.recorder.onstop = this.onRecorderStopped;

this.recorder.ondataavailable = (event) =>
  this.zone.run(() => {
    this.data = [...this.data, event.data];
  });

this.recorder.start();
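(onRecorderStopped itself is not shown above; roughly, it combines this.data into a single Blob, along the lines of this simplified sketch, not the exact handler:)

private onRecorderStopped = () => {
  this.zone.run(() => {
    // Simplified sketch: merge the recorded chunks into one Blob.
    const blob = new Blob(this.data, { type: this.recorder.mimeType });
    this.behaviorService.isRecording(false);
    // the merged blob is then sent as the attachment
  });
}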
When the recording is stopped, it is saved with the video/webm;codecs=h264 mimeType.
I have also tried video/mp4, but that does not work either.
Is there a solution that works on both operating systems?
Safari is notoriously broken with respect to .getUserMedia() and the MediaRecorder class.
Can I get a solution that works on both operating systems?
Not yet. Pester Apple. In the meantime, use Chrome on macOS: it works.
There may be some tricks to recommend to make this better. The MediaRecorder configuration shown above is where the stream gets compressed, so that is the place to look.
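One such trick, as a sketch and on the assumption that the hard-coded mimeType is part of the problem: feature-detect a container/codec the current browser's MediaRecorder actually supports instead of hard-coding one. The candidate list below is an assumption; adjust it to your needs.

// Sketch: pick the first mimeType this browser's MediaRecorder claims to support.
const candidates = [
  'video/mp4',                // what Safari tends to accept
  'video/webm;codecs=h264',   // Chromium
  'video/webm;codecs=vp9',
  'video/webm'
];
const mimeType = candidates.find(t => MediaRecorder.isTypeSupported(t));
this.recorder = new MediaRecorder(this.webcamStream, mimeType ? { mimeType } : undefined);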
I am using Electron to create a Windows application that creates a fullscreen transparent overlay window. The purpose of this overlay is to:
1. take a screenshot of the entire screen (not the overlay itself, which is transparent, but the screen 'underneath'),
2. process this image by sending it as a byte stream to my Python server, and
3. draw some things on the overlay.
I am getting stuck on the first step, which is the screenshot capturing process.
I tried option 1, which is to use capturePage():
this.electronService.remote.getCurrentWindow().webContents.capturePage()
  .then((img: Electron.NativeImage) => { ... });
but this captures my overlay window only (and not the desktop screen). This will be a blank image, which is useless to me.
Option 2 is to use desktopCapturer:
this.electronService.remote.desktopCapturer.getSources({ types: ['screen'] }).then(sources => {
  for (const source of sources) {
    if (source.name === 'Screen 1') {
      try {
        const mediaDevices = navigator.mediaDevices as any;
        mediaDevices.getUserMedia({
          audio: false,
          video: { // this specification is only available for Chrome -> Electron runs on Chromium browser
            mandatory: {
              chromeMediaSource: 'desktop',
              chromeMediaSourceId: source.id,
              minWidth: 1280,
              maxWidth: 1280,
              minHeight: 720,
              maxHeight: 720
            }
          }
        }).then((stream: MediaStream) => { // stream.getVideoTracks()[0] contains the video track I need
          this.handleStream(stream);
        });
      } catch (e) {
      }
    }
  }
});
The next step is where it becomes fuzzy for me. What do I do with the acquired MediaStream to get a byte stream of the screenshot out of it? I see plenty of examples of how to display this stream on a webpage, but I wish to send it to my backend. This StackOverflow post mentions how to do it, but I am not getting it to work properly. This is how I implemented handleStream():
import * as MediaStreamRecorder from 'msr';

private handleStream(stream: MediaStream): void {
  const recorder = new MediaStreamRecorder(stream);
  recorder.ondataavailable = (blob: Blob) => { // I immediately get a blob, while the linked SO page got an event and had to get a blob through event.data
    this.http.post<Result>('http://localhost:5050', blob);
  };
  // make the dataavailable event fire every second
  recorder.start(1000);
}
The blob is not being accepted by the Python server. Upon inspecting the contents of the Blob, it's a video, as I suspected. I verified this with the following code:
let url = URL.createObjectURL(blob);
window.open(url, '_blank')
which opens the blob in a new window. It displays a video of maybe half a second, but I want a static image. So how do I get a specific snapshot out of it? I'm also not sure whether simply sending the JavaScript Blob in the POST body is enough for Python to interpret it correctly. In Java it works by simply sending a byte[] of the image, so I have verified that the Python server implementation works as expected.
Any suggestions other than using the desktopCapturer are also fine. This implementation is capturing my mouse as well, which I would rather not have. I must admit that I did not expect this feature to be so difficult to implement.
Here's how you take a desktop screenshot:
const { desktopCapturer } = require('electron')

// The button which takes the screenshot
document.getElementById('screenshot-button').addEventListener('click', () => {
  desktopCapturer.getSources({ types: ['screen'] })
    .then(sources => {
      // The image element that displays the screenshot
      document.getElementById('screenshot-image').src = sources[0].thumbnail.toDataURL()
    })
})
Using 'screen' will take a screenshot of the entire desktop.
Using 'windows' will take a screenshot of only the application.
Also refer to these docs: https://www.electronjs.org/docs/api/desktop-capturer
desktopCapturer only takes videos. So you need to get a single frame from it. You can use an HTML5 canvas for that. Here is an example:
https://ourcodeworld.com/articles/read/280/creating-screenshots-of-your-app-or-the-screen-in-electron-framework
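A minimal sketch of that canvas route, assuming you already have the MediaStream from the question's getUserMedia call and its http://localhost:5050 endpoint:

// Sketch: grab a single frame from the capture stream and POST it as PNG bytes.
function captureFrame(stream) {
  const video = document.createElement('video')
  video.muted = true
  video.srcObject = stream
  video.play()
  // 'loadeddata' fires once the first frame can be drawn
  video.addEventListener('loadeddata', () => {
    const canvas = document.createElement('canvas')
    canvas.width = video.videoWidth
    canvas.height = video.videoHeight
    canvas.getContext('2d').drawImage(video, 0, 0)
    canvas.toBlob(blob => {
      // blob now holds one PNG image (raw bytes), not a video
      fetch('http://localhost:5050', { method: 'POST', body: blob })
      stream.getTracks().forEach(t => t.stop()) // one frame is enough
    }, 'image/png')
  }, { once: true })
}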
Or use a third-party screenshot library available on npm. The one I found needs ImageMagick installed on Linux, but maybe there are others, or you don't need to support Linux. You'll need to do that in the main Electron process, in which you can do anything you can do in Node.
You can also grab a frame from each captured source via its thumbnail, like this:
desktopCapturer.getSources({
  types: ['window'],
  thumbnailSize: {
    height: 768,
    width: 1366
  }
}).then(sources => {
  for (const source of sources) {
    const content = source.thumbnail.toPNG()
    console.log(content)
  }
})
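If the goal is to ship that PNG to a backend, as in the question, the Buffer returned by toPNG() can be posted directly; a sketch assuming the question's http://localhost:5050 endpoint and a full-screen source:

desktopCapturer.getSources({
  types: ['screen'],
  thumbnailSize: { width: 1366, height: 768 }
}).then(sources => {
  const png = sources[0].thumbnail.toPNG() // Node Buffer of PNG bytes
  return fetch('http://localhost:5050', {
    method: 'POST',
    headers: { 'Content-Type': 'image/png' },
    body: png
  })
})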
I want to capture video from the webcam.
And here is the solution I found:
window.onload = function () {
  var video = document.getElementById('video');
  var videoStreamUrl = false;
  navigator.getUserMedia({ video: true }, function (stream) {
    videoStreamUrl = window.URL.createObjectURL(stream);
    video.src = videoStreamUrl;
  }, function () {
    console.log('error');
  });
};
but it produces an error in the browser:
[Deprecation] URL.createObjectURL with media streams is deprecated and will be removed in M68, around July 2018. Please use HTMLMediaElement.srcObject instead. See https://www.chromestatus.com/features/5618491470118912 for more details.
How do I use HTMLMediaElement.srcObject for my purposes? Thanks for your time!
MediaElement.srcObject should allow Blobs, MediaSources and MediaStreams to be played in the MediaElement without needing to keep these sources bound in memory for the lifetime of the document, the way blob URIs do.
(Currently no browser supports anything other than MediaStream, though...)
Indeed, when you do URL.createObjectURL(MediaStream), you are telling the browser that it should keep this source alive until you revoke the blob URI, or until the document dies.
In the case of a LocalMediaStream served from a capturing device (camera or microphone), this also means that the browser has to keep the connection to this device open.
Firefox initiated the deprecation of this feature a year or so ago, since srcObject can provide the same result in a better way that is easier to handle for everyone, and Chrome now seems to finally be following (I'm not sure what the spec's status is on this).
So to use it, simply do
MediaElement.srcObject = MediaStream;
Also note that the API you are using is itself deprecated (and not only in FF), and you shouldn't use it anymore. Indeed, the correct API for capturing MediaStreams from user media is MediaDevices.getUserMedia.
This API now returns a Promise which gets resolved to the MediaStream.
So a complete correction of your code would be:
var video = document.getElementById('video');
navigator.mediaDevices.getUserMedia({
    video: true
  })
  .then(function(stream) {
    video.srcObject = stream;
  })
  .catch(function(error) {
    console.log('error', error);
  });
<video id="video"></video>
Or as a fiddle, since the StackSnippets® overprotected iframe may not deal well with gUM.
I am trying to analyse the audio output from the browser, but I don't want the getUserMedia prompt to appear (which asks for microphone permission).
The sound sources are SpeechSynthesis and an MP3 file.
Here's my code:
return navigator.mediaDevices.getUserMedia({
    audio: true
  })
  .then(stream => new Promise(resolve => {
    const track = stream.getAudioTracks()[0];
    this.mediaStream_.addTrack(track);
    this._source = this.audioContext.createMediaStreamSource(this.mediaStream_);
    this._source.connect(this.analyser);
    this.draw(this);
  }));
This code is working fine, but it asks for permission to use the microphone! I am not interested in the microphone at all; I only need to gauge the audio output. If I check all available devices:
navigator.mediaDevices.enumerateDevices()
  .then(function (devices) {
    devices.forEach(function (device) {
      console.log(device.kind + ": " + device.label +
        " id = " + device.deviceId);
    });
  })
I get a list of available devices in the browser, including 'audiooutput'.
So, is there a way to route the audio output into a MediaStream that can then be used with the createMediaStreamSource function?
I have checked all the documentation for the Web Audio API but could not find it.
Thanks to anyone who can help!
There are various ways to get a MediaStream that doesn't originate from gUM, but you won't be able to catch all possible audio output...
But, for your mp3 file, if you read it through a MediaElement (<audio> or <video>), and if this file is served without breaking CORS, then you can use MediaElement.captureStream.
If you read it from the Web Audio API, or if you target browsers that don't support captureStream, then you can use AudioContext.createMediaStreamDestination.
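For the mp3 case, a rough sketch of that createMediaStreamDestination route (the file name is an assumption, and the AudioContext/analyser mirror the question's setup):

// Sketch: analyse an <audio> element's output without any getUserMedia prompt.
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();

const audioEl = new Audio('music.mp3');       // must be same-origin or CORS-enabled
const source = audioContext.createMediaElementSource(audioEl);

// Feed the analyser and still hear the audio.
source.connect(analyser);
analyser.connect(audioContext.destination);

// If an actual MediaStream is needed (e.g. for createMediaStreamSource elsewhere),
// createMediaStreamDestination provides one without gUM:
const streamDest = audioContext.createMediaStreamDestination();
source.connect(streamDest);
const stream = streamDest.stream;             // MediaStream carrying the mp3's audio

audioEl.play();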
For SpeechSynthesis, unfortunately you will need gUM... and a Virtual Audio Device: first you would have to set your default output to the VAB_out, then route your VAB_out to VAB_in and finally grab VAB_in from gUM...
Not an easy nor universally doable task, especially since, IIRC, SpeechSynthesis doesn't have any setSinkId method.
I have a script for playing a remote mp3 source through the Speaker module, which is working fine. However, if I want to stop playing the mp3 stream, I encounter two issues:
If I stop streaming the remote source, e.g. by calling stream.pause() as in the code below, then stdout is flooded with a warning:
[../deps/mpg123/src/output/coreaudio.c:81] warning: Didn't have any audio data in callback (buffer underflow)
The warning itself makes sense, because I'm not providing any data anymore, but it is printed very frequently, which is a big issue because I want to use this in a CLI app.
If I attempt to end the speaker by calling speaker.end() as in the code below, then I get the following error:
[1] 8950 illegal hardware instruction node index.js
I have been unable to find anything regarding this error besides some Raspberry Pi threads, and I'm quite confused as to what is causing the illegal hardware instruction.
How can I properly handle this issue? Can I pipe the buffer underflow warning to something like /dev/null, or is that a poor way of handling it? Or can I end / destroy the speaker instance in another way?
I'm running Node v7.2.0, npm v4.0.3 and Speaker v0.3.0 on macOS v10.12.1.
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')

var decoder = new lame.Decoder()
var speaker = new Speaker()
decoder.pipe(speaker)

var req = request.get(url_to_remote_mp3)
var stream = req.pipe(decoder)

setTimeout(() => {
  stream.pause()
  stream.unpipe()
  speaker.end()
}, 1000)
I think the problem might be in coreaudio.c (guesswork here). It seems like the buffer:written gets large when you flush it. My best guess is that the usleep() time in write_coreaudio() is miscalculated after the buffer is flushed. I think the program may be sleeping too long, so the buffer gets too big to play.
It is unclear to me why the usleep time is being calculated the way it is.
Anyway... this worked for me:
Change node_modules/speaker/deps/mpg123/src/output/coreaudio.c:258
- usleep( (FIFO_DURATION/2) * 1000000 );
+ usleep( (FIFO_DURATION/2) * 100000 );
Then in the node_modules/speaker/ directory, run:
node-gyp build
And you should be able to use:
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')

const url = 'http://aasr-stream.iha.dk:9870/dtr.mp3'
const decoder = new lame.Decoder()
let speaker = new Speaker()
decoder.pipe(speaker)

const req = request.get(url)
var stream = req.pipe(decoder)

setTimeout(() => {
  console.log('Closing Speaker')
  speaker.close()
}, 2000)

setTimeout(() => {
  console.log('Closing Node')
}, 4000)
... works for me (no errors, stops when expected).
I hope it works for you too.
You will want to check the GitHub issues for that module and the library it uses, to make sure no one else has already solved the problem; if not, open a bug describing it so other module users can reference it, since that is the first place they will look.
A "good" way of handling it will probably involve modifying the native part of the speaker module, unless you just have a sound library version mismatch.