Can JavaScript MSE play segmented MP4 from the middle? - javascript

In my current project I have a video stream that ffmpeg encodes to a segmented MP4. That encoded data is piped into an application that sends it to whoever connects to that application through a WebSocket. When a client connects, I make sure to send the ftyp and moov boxes first and then send the most recent segments received from ffmpeg.
On the client side I just pass all binary data from the WebSocket to MSE.
The problem I am facing is that this works if the client is connected from the very start and gets all the fragments that ffmpeg pipes out, but it does not work if the client connects after ffmpeg has already sent its first few fragments.
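For reference, here is a simplified sketch of what the server side does (the names and the box handling here are illustrative, not my exact code):

    // Simplified sketch of the server side (illustrative only, not my actual code).
    // ffmpeg's segmented MP4 output is piped into this process on stdin.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    let initSegment = null;      // ftyp + moov, captured from the start of the pipe
    const recentFragments = [];  // rolling window of the latest moof/mdat fragments

    process.stdin.on('data', (chunk) => {
      if (initSegment === null) {
        // In reality I parse box boundaries; here I just assume the first chunk
        // contains the complete ftyp and moov boxes.
        initSegment = chunk;
        return;
      }
      recentFragments.push(chunk);
      if (recentFragments.length > 10) recentFragments.shift();
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(chunk);
      }
    });

    wss.on('connection', (socket) => {
      // New client: init segment first, then the most recent fragments.
      socket.send(initSegment);
      for (const frag of recentFragments) socket.send(frag);
    });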
My questions are:
Is it possible for MSE to play a fragmented MP4 from the middle when it is also provided the init segments?
If it is possible, how would that need to be implemented?
If it isn't possible, what format would allow me to stream live video over a WebSocket?

Is it possible for MSE to play a fragmented mp4 from
the middle when it is also provided the init segments?
Yes, this is exactly what fragmented (segmented) MP4 was designed to do.
If it is possible then how would that need to be implemented?
The way you describe your implementation is correct: send the init segment, followed by the most recent AV fragments. That means you have a different problem or a bug in your implementation.
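A minimal client-side sketch of that flow, assuming an H.264/AAC stream (the codec string, element ID and WebSocket URL below are placeholders):

    // Minimal sketch: append WebSocket fragments to MSE.
    const video = document.getElementById('video');
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', () => {
      const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f, mp4a.40.2"');
      const queue = [];

      // Appends must be serialized: only one append may be in flight at a time.
      sourceBuffer.addEventListener('updateend', () => {
        if (queue.length > 0) sourceBuffer.appendBuffer(queue.shift());
      });

      const ws = new WebSocket('wss://example.com/stream');
      ws.binaryType = 'arraybuffer';
      ws.onmessage = (event) => {
        // First message is expected to be the init segment (ftyp + moov),
        // followed by moof/mdat fragments.
        if (sourceBuffer.updating || queue.length > 0) {
          queue.push(event.data);
        } else {
          sourceBuffer.appendBuffer(event.data);
        }
      };
    });

One common bug with this kind of setup is calling appendBuffer while a previous append is still in progress, which throws an InvalidStateError; another common cause of "only works from the start" symptoms is fragments that do not begin with a keyframe, since a client joining mid-stream can only start decoding at a random access point.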

Related

Encoding raw h.264 data to browser via Dash

I have a live stream of raw h.264 (no container) coming from a remote webcam. I want to stream it live in the browser using DASH. DASH requires creating an mpd file (and segmentation). I found tools (such as MP4Box) that accomplish that for static files, but I'm struggling to find a solution for live streams. Any suggestions, preferably using Node.js modules?
Threads I have checked:
mp4box - on the one hand I saw this comment that states "You cannot feed MP4Box with some live content. You need to feed MP4Box -live with pre-segmented chunks." On the other hand there are a lot of people pointing to this Bitmovin tutorial, which does implement a solution using MP4Box. In the tutorial they are using MP4Box (which has a Node.js API implementation) and x264 (which doesn't have a Node.js module? Or is it contained in ffmpeg/MP4Box?)
nginx - nginx has a module that supports streaming to DASH using RTMP, for example in this tutorial. I prefer not to go this path - as mentioned, I'm trying to do it all in Node.js.
Although I read a couple of posts with a similar problem, I couldn't find a suitable solution. Help would be much appreciated!
The typical architecture is to send your live stream to a streaming server which will then do the heavy lifting to make the stream available to other devices, using streaming protocols such as HLS and DASH.
So the client devices connect to the server rather than to your browser.
This allows the video to be encoded and packaged to reach as many devices as possible, with the server doing any necessary transcoding and potentially also creating different bit-rate versions of your stream to allow for different network conditions, if you want to provide this level of service.
The typical structure is an encoded stream (e.g. h.264 video), packaged into a container (e.g. fragmented MP4) and delivered via a streaming protocol such as HLS or DASH.
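If you do want to keep everything in Node.js, one sketch of the packaging step (assuming ffmpeg is installed and the raw h.264 elementary stream is available as a readable stream; the flags and paths below would need tuning for a real live setup):

    // Sketch: spawn ffmpeg from Node.js to package raw h.264 into a DASH manifest plus segments.
    // Assumes ffmpeg is on PATH; rawH264Stream and outputDir are placeholders.
    const { spawn } = require('child_process');

    function startDashPackager(rawH264Stream, outputDir) {
      const ffmpeg = spawn('ffmpeg', [
        '-f', 'h264',              // input is a raw h.264 elementary stream
        '-i', 'pipe:0',            // read it from stdin
        '-c:v', 'copy',            // already encoded, so no transcoding
        '-f', 'dash',              // package as MPEG-DASH
        `${outputDir}/stream.mpd`
      ]);

      rawH264Stream.pipe(ffmpeg.stdin);
      ffmpeg.stderr.on('data', (d) => console.error(d.toString()));
      return ffmpeg;
    }

You would still need something (Node's http module, nginx, etc.) to serve the .mpd and segment files to the players.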

RTSP stream: Possible ways to display in browser with ever-changing stream-timestamp-dependent overlay image?

The scenario is as follows:
I receive an RTSP stream
Every second I receive a bunch of additional data corresponding to the exact timestamp of the stream
The stream needs to be displayed in the web browser. For each new portion of data new overlay image needs to be shown, in the exactly correct timestamp of the stream.
The stream needs to be saved for the VOD functionality.
It seems RTSP is not natively supported even in HTML5. I'm thinking about the following implementation:
Receive RTSP and data on .NET server
Each second compose a 1 second chunk of the video
Save this chunk into the DB along with the corresponding data
Send this chunk and the data to the web browser
Use JS to play this chunk and draw an overlay
I currently don't know how to implement #2 and #5. Are there any (C#?) libraries to discretize the stream? And are there any JS libraries that can play the chunks we compose at #2? I'm a complete newbie when it comes to video processing, so any directions would be appreciated.
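To make #5 more concrete, the kind of thing I imagine for the overlay half (just a rough sketch, all names made up, assuming the chunks end up playing in a video element):

    // Rough sketch: draw the overlay matching the current playback time onto a canvas
    // positioned on top of the video. overlayDataBySecond is a made-up structure.
    const video = document.getElementById('video');
    const canvas = document.getElementById('overlay');
    const ctx = canvas.getContext('2d');

    const overlayDataBySecond = new Map(); // filled from the data sent with each chunk

    function drawOverlay() {
      const second = Math.floor(video.currentTime);
      const data = overlayDataBySecond.get(second);
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      if (data) {
        // Placeholder drawing: render the data as text in the corner.
        ctx.font = '16px sans-serif';
        ctx.fillStyle = 'yellow';
        ctx.fillText(JSON.stringify(data), 10, 20);
      }
      requestAnimationFrame(drawOverlay);
    }

    requestAnimationFrame(drawOverlay);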

VP8 video rendering in Javascript

I'm currently working on an interactive web application in JavaScript that renders, in real time, a video received on a webpage and lets you send keyboard inputs.
The fact is that I can only receive VP8 video streams (not WebM, just raw VP8 video without the Matroska container). I've managed to decode the video on the client side using the Dixie decoder (https://github.com/dominikhlbg/vp8-webm-javascript-decoder/), but the problem is that it adds buffering or something, because there is a lag of almost 2 seconds between when I receive a stream and when I render it. Is there a way I can decode the stream natively? That would speed up the performance.
I thought of adding a Matroska container to the received VP8 stream and sending it to the video tag, but I don't know how to create such a container.
OK, after days of trying to figure out how to solve this I finally found the bug. It wasn't in the Dixie decoder; the server needed a flag to stop buffering the video.

Is it possible to play this stream using HTML5/javascript?

Basically trying to play some live audio streams in an app I'm porting to the browser.
Stream example: http://kzzp-fm.akacast.akamaistream.net/7/877/19757/v1/auth.akacast.akamaistream.net/kzzp-fm/
I have tried the HTML5 audio tag and jPlayer with no luck. I know next to nothing about streaming audio; however, when I examine the HTTP response headers the specified content type is "audio/aacp" (not sure if that helps).
I'm hoping someone with more knowledge of audio formats could point me in the right direction here.
The problem isn't with AAC+ being playable; the issue is with decoding the streaming AAC wrapper called ADTS. The Audio Data Transport Stream [pdf], or "MP4-contained AAC streamed over HTTP using the SHOUTcast protocol", can be decoded and therefore played by only a couple of media players (e.g., foobar2000, Winamp, and VLC).
I had the same issue while trying to work with the SHOUTcast API to get HTML5 audio playback for all the listed stations. Unfortunately it doesn't look like there's anything that can be done from our perspective; only the browser vendors can decide to add support for ADTS decoding. It is a documented issue in Chrome/WebKit. There are 60+ people (including myself) following the issue, which is marked as "WontFix".

Live video broadcasting

I am going to develop a chat-based application for mobile which allows video chat. I am using HTML5, JavaScript and PhoneGap. Using PhoneGap, I am able to access the mobile camera, capture a video, save it and upload it to a server. I have done this for Android, but I need live broadcasting of the video. Is there any solution for that?
Note: It is not an Android native app.
You didn't specify what facility you're currently using for the video capture. AFAIK, the current WebView doesn't yet support WebRTC, which is the W3C standard that will soon enable you to access the video frames in your HTML5 code. So I'm assuming you're using PhoneGap's navigator.device.capture.captureVideo facility.
On Android, captureVideo creates 3gp files. The problem with 3gp is that they cannot be streamed or played while capturing: the MOOV atom of the file is required for parsing the video frames in it, and it is written only after all frames in the file have been encoded. So you must stop the recording before you can make any use of the file.
Your best shot in HTML5 is to implement a loop that captures a short clip (3-5 seconds?) of video, then sends it to a server while the next chunk is being captured. The server will need to concatenate the clips into a single file that can be broadcast with a streaming server. This will add several seconds to the latency of the broadcast, and you are quite likely to suffer from lost frames in the gap between two separate chunk captures. That might be sufficient for some use cases (security cameras, for example).
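A rough sketch of that loop with PhoneGap's capture and FileTransfer APIs (the upload URL is a placeholder, the duration option is not honored on every platform, and captureVideo launches the native camera UI, so this shows the flow rather than a hands-free loop):

    // Rough sketch of the capture-and-upload loop. UPLOAD_URL is a placeholder.
    var UPLOAD_URL = 'http://example.com/upload';

    function captureNextClip() {
      navigator.device.capture.captureVideo(onCaptureSuccess, onCaptureError, {
        limit: 1,
        duration: 5 // seconds per clip; not supported on all platforms
      });
    }

    function onCaptureSuccess(mediaFiles) {
      var clip = mediaFiles[0];

      // Upload the finished clip while the next one is being captured.
      var ft = new FileTransfer();
      ft.upload(clip.fullPath, encodeURI(UPLOAD_URL),
        function () { console.log('clip uploaded'); },
        function (err) { console.error('upload failed', err); });

      captureNextClip();
    }

    function onCaptureError(err) {
      console.error('capture failed', err);
    }

    captureNextClip();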
If your application is such that you cannot afford to lose frames, I see no other option but to implement the video capture and streaming in Java, as a PhoneGap Plugin.
See Spydroid http://code.google.com/p/spydroid-ipcamera/
It uses the solution with the special FileDescriptor you found. Basically they let the video encoder write a .mp4 with H.264 to the special file descriptor that calls your code on write. Then they strip off the MP4 header and turn the H.264 NALUs into RTP packets.
