Is it possible to modify a live video stream with node-canvas?
I'm using binaryjs to send binary frames over a WebSocket, and I want to modify the video live in node-canvas.
This is the code I'm using: Webcam Binary.JS Demo,
and this is the library I want to use to modify the video on a canvas: node-canvas.
Is that possible? If not, what is the best way to combine multiple video streams into one video with Node.js?
Related
I'm making a posture analyser using TensorFlow.js and PoseNet. I've written the code to analyse postures in real time from the webcam. How do I input a local video file to be analysed instead?
This is the code that loads the webcam stream; I'm looking for a way to replace this webcam input with a local video file.
canvas.parent('videoContainer'); // attach the p5 canvas to the page
video = createCapture(VIDEO);    // open the webcam as the video source
video.size(width, height);      // match the capture size to the canvas
To do so, one can use the <video> tag's event listeners to grab frames at a framerate of your choosing, as indicated here. This is done completely client side.
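For instance, since createCapture is a p5.js function, a minimal sketch (with a placeholder file path) could swap the webcam capture for createVideo():

let video;

function setup() {
  const canvas = createCanvas(640, 480);
  canvas.parent('videoContainer');
  // load a local file instead of the webcam; the path is a placeholder
  video = createVideo('assets/posture.mp4', () => {
    video.loop();    // start playback once the file is ready
    video.volume(0); // mute it
  });
  video.size(width, height);
  video.hide(); // draw frames onto the canvas instead of showing the element
}

function draw() {
  // each drawn frame can now be fed to the pose estimator
  image(video, 0, 0, width, height);
}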
The other option is to do it server side: using a library such as ffmpeg, one can convert a video to images, as done here.
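A rough sketch of that server-side route, invoking ffmpeg from Node (the input path, output pattern, and frame rate are placeholders):

const { execFile } = require('child_process');

// extract ten frames per second into numbered PNG files
execFile(
  'ffmpeg',
  ['-i', 'input.mp4', '-vf', 'fps=10', 'frames/out%04d.png'],
  (err, stdout, stderr) => {
    if (err) return console.error('ffmpeg failed:', stderr);
    console.log('frames written to ./frames');
  }
);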
The situation is pretty straightforward; I am receiving a stream of NAL units via WebSockets. How do I feed them into an HTML5 video tag using MSE?
Research indicates that I should mux the data into a fragmented MP4, but I haven't found any specifics on how to accomplish that. Does anyone have details?
If you receive stream data, e.g. HLS or raw H.264 NAL units, you can transmux it into a fragmented MP4. You then hook the HTML5 video tag up to MSE: create a MediaSource, call mediaSource.addSourceBuffer, and feed segments in with sourceBuffer.appendBuffer. The video will play as long as valid fMP4 segments keep being fed into the buffer.
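As a minimal sketch of that wiring (the codec string and a WebSocket delivering ready-made fMP4 segments are assumptions):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // the MIME/codec string must match the muxed fMP4 output
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const queue = [];

  // drain queued segments whenever the buffer finishes an append
  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length > 0 && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  });

  const ws = new WebSocket('wss://example.com/fmp4'); // hypothetical endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => {
    // append segments as they arrive, queueing while the buffer is busy
    if (sourceBuffer.updating || queue.length > 0) {
      queue.push(event.data);
    } else {
      sourceBuffer.appendBuffer(event.data);
    }
  };
});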
You may check out https://github.com/ChihChengYang/wfs.js, which demonstrates transmuxing H.264 NAL unit streams from a WebSocket. It works directly on top of a standard HTML5 <video> element and MSE.
I am using WebRTC for peer-to-peer video communication, and I would like to apply video filters to local webcam video before sending it to a remote peer.
The approach that I am considering is to send the local webcam video to a canvas element, where I will apply javascript filters to the video. Then I would like to stream the video from the canvas element to the peer using WebRTC. However, it is not clear to me if this is possible.
Is it possible to stream video from a canvas element using WebRTC? If so, how can this be done? Alternatively, are there any other approaches that I might consider to accomplish my objective?
It's April 2020; you can achieve this with the canvas.captureStream() method.
There is an excellent article on how to use it, along with several demos on github. See the following links:
Capture Stream
Stream from a canvas element to peer connection
So, basically, you can apply all the transformations on the canvas and stream the result from the canvas to the remote peer.
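A minimal sketch of that flow (the grayscale filter and the 30 fps capture rate are arbitrary choices, and signaling is omitted):

const video = document.querySelector('video');   // local webcam preview
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawFrame() {
  ctx.filter = 'grayscale(100%)'; // any canvas-based filter or processing
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(drawFrame);
}
drawFrame();

const stream = canvas.captureStream(30); // capture the canvas at 30 fps
const pc = new RTCPeerConnection();
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
// ...continue with the usual offer/answer signaling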
My solution would be to send the unmodified stream to the peer, and also transmit how it has to be modified. On the other side, instead of showing the stream in a video element directly, you play the video in a hidden element and keep drawing the processed frames onto a canvas with setTimeout/requestAnimationFrame.
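A rough sketch of that receiver side, assuming pc is the existing RTCPeerConnection and the processing step is a placeholder:

pc.ontrack = (event) => {
  const video = document.createElement('video');
  video.srcObject = event.streams[0];
  video.style.display = 'none'; // play the video, but hide the element
  document.body.appendChild(video);
  video.play();

  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');

  function draw() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // ...apply the transmitted modifications to the canvas here...
    requestAnimationFrame(draw);
  }
  draw();
};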
mozCaptureStreamUntilEnded is supported in Firefox, but the resulting stream can't be attached to a peer connection.
Playing over <canvas> is easier; however, streaming media from a <video> element requires the Media Processing API (capture-stream-until-ended) along with an RTCPeerConnection that supports all the relevant features.
We can get images from a <canvas>, but I'm not sure we can generate a MediaStream from a <canvas>.
So mozCaptureStreamUntilEnded is useful only with pre-recorded media streaming.
jPlayer supports MP4. But I have a server that streams raw H.264 video. Is it possible to stream it directly on the client side using jPlayer? If yes, please tell me how I should do it.
If not, how do I put the video into an MP4 container?
Or is there any other JS library or jQuery plugin that can be used to display the H.264 stream?
You'd be better off putting your video stream into a container.
If your video stream is already recorded, then MP4 is a good choice.
You can wrap your video stream using ffmpeg or perhaps MP4Box.
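For example, a raw H.264 stream can be rewrapped into MP4 without re-encoding (-c copy); a sketch invoking ffmpeg from Node, with placeholder file names:

const { execFile } = require('child_process');

// rewrap the raw H.264 bitstream into an MP4 container, no re-encoding
execFile(
  'ffmpeg',
  ['-i', 'stream.h264', '-c', 'copy', 'stream.mp4'],
  (err, stdout, stderr) => {
    if (err) return console.error('ffmpeg failed:', stderr);
    console.log('wrapped stream.h264 into stream.mp4');
  }
);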
For playback in the browser you can use the HTML <video> tag or your jPlayer.
If you are live streaming, wrap your stream into MPEG-DASH and use dash.js for playback.
There is a sample showing how to generate MPEG-DASH content using open-source tools like x264 and MP4Box: http://www.dash-player.com/blog/2014/11/mpeg-dash-content-generation-using-mp4box-and-x264/
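Along the lines of that tutorial, DASH packaging with MP4Box could look like this (segment length and file names are assumptions):

const { execFile } = require('child_process');

// cut the MP4 into ~4-second fragments and write a DASH manifest
execFile(
  'MP4Box',
  ['-dash', '4000', '-frag', '4000', '-rap', '-out', 'manifest.mpd', 'video.mp4'],
  (err) => {
    if (err) return console.error('MP4Box failed:', err);
    console.log('DASH manifest and segments written');
  }
);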
I am going to develop a chat-based application for mobile that allows video chat. I am using HTML5, JavaScript and PhoneGap. Using PhoneGap, I am able to access the mobile camera, capture a video, save it and upload it to the server. I have done this for Android, but I need live broadcasting of the video. Is there any solution for that?
Note: it is not an Android native app.
You didn't specify what facility you're currently using for the video capture. AFAIK, the current WebView doesn't yet support WebRTC, the W3C standard that will soon enable you to access video frames in your HTML5 code. So I'm assuming you're using PhoneGap's navigator.device.capture.captureVideo facility.
On Android, captureVideo creates 3gp files. The problem with 3gp is that the files cannot be streamed or played while capturing: the MOOV atom of the file is required for parsing the video frames in it, and it is written only after all frames in the file have been encoded. So you must stop the recording before you can make any use of the file.
Your best shot in HTML5 is to implement a loop that captures a short clip (3-5 seconds?) of video, then sends it to a server while the next chunk is being captured. The server would need to concatenate the clips into a single file that can be broadcast by a streaming server. This adds several seconds to the latency of the broadcast, and you are quite likely to lose frames in the gap between two separate chunk captures. That might be sufficient for some use cases (security cameras, for example).
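A rough sketch of that loop using PhoneGap's media-capture API and the FileTransfer plugin (the upload endpoint, the clip length, and the availability of the duration option on your device are assumptions):

function captureNextClip() {
  navigator.device.capture.captureVideo(
    (mediaFiles) => {
      captureNextClip();          // start the next chunk right away
      uploadClip(mediaFiles[0]);  // upload the finished one in parallel
    },
    (error) => console.error('capture failed:', error.code),
    { limit: 1, duration: 5 }     // aim for ~5-second chunks
  );
}

function uploadClip(mediaFile) {
  const ft = new FileTransfer();
  ft.upload(
    mediaFile.fullPath,
    encodeURI('https://example.com/ingest'), // hypothetical server endpoint
    () => console.log('chunk uploaded'),
    (err) => console.error('upload failed:', err)
  );
}

captureNextClip();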
If your application is such that you cannot afford to lose frames, I see no other option but to implement the video capture and streaming in Java, as a PhoneGap Plugin.
See Spydroid http://code.google.com/p/spydroid-ipcamera/
It uses the solution with the special FileDescriptor you found. Basically, they let the video encoder write an MP4 with H.264 to the special file descriptor that calls your code on write. Then they strip off the MP4 header and turn the H.264 NALUs into RTP packets.