Media Source Extensions: appending after page reload - javascript

I am writing a webinar platform, using MediaRecorder on the presenter's client and Media Source Extensions on the listener's client. The initial byte segment contains all the metadata about the video, and the subsequent segments contain only timestamped media data (https://www.w3.org/TR/media-source/#init-segment). I have verified that the video is sent from the first client to the second without failures. But when I refresh the page on the listener's client, the media stream stops immediately, because there is no initialization segment. Can someone tell me how to solve this problem?

You need to segment the stream yourself.
If you're using WebM, just keep everything up to the start of the first Cluster and treat this as the initialization segment. Then, you can pick up anywhere in the stream at the beginning of a cluster that has a keyframe.
Unfortunately for you, you don't get to tell the browser where to insert keyframes when you're recording with MediaRecorder. So, you'll either have to determine which clusters have keyframes yourself, or do some server-side transcoding. The latter was likely required anyway, unless you were planning to serve the same bitrate/encoding to all clients.
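As a rough sketch of that first step, you can scan the recorded byte stream for the WebM Cluster element ID (0x1F43B675) and keep everything before its first occurrence as the initialization segment; the broadcast() relay below is a hypothetical placeholder for however you fan chunks out to viewers:

// Sketch: split a WebM stream into init segment + clusters by scanning
// for the 4-byte Cluster element ID 0x1F43B675. Not a full EBML parser.
function findCluster(buf, from) {
  for (let i = from; i < buf.length - 3; i++) {
    if (buf[i] === 0x1f && buf[i + 1] === 0x43 &&
        buf[i + 2] === 0xb6 && buf[i + 3] === 0x75) {
      return i;
    }
  }
  return -1;
}

let initSegment = null;

function onChunk(chunk) { // chunk: Uint8Array from the recorder's stream
  if (initSegment === null) {
    const firstCluster = findCluster(chunk, 0);
    if (firstCluster > 0) {
      initSegment = chunk.slice(0, firstCluster); // save for late joiners
    }
  }
  broadcast(chunk); // hypothetical: relay to currently connected viewers
}

A freshly (re)connected viewer would then be sent initSegment first, followed by the live chunks, starting at a cluster with a keyframe.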

Related

Crop resolution and trim video length in JS client side before uploading?

I want to enable users of my web app to upload videos with a maximum length of 10 s, cropped/scaled to a certain resolution. Therefore, users should be able to trim their selected video to 10 s before uploading it, with a simple editor.
Is there any library or example enabling client-side video editing to cut video length as well as crop it before uploading to a server? I found some canvas approaches for filters, single video frames, and exporting to WebM videos, but nothing bringing it all together. Has anyone done this before?
I appreciate any ideas :)
Typically, video processing is a server-side task, because it is easier to find and run complex (often compiled) libraries like ffmpeg there than in the browser, and it causes fewer performance problems for the end user. That said, I think there are two options:
1. Process video on server - send file and configuration
The first approach assumes that you prepare a client-side "editor" based on canvas which simulates the video editing. After setting up all the filters, crops, etc., the client sends the original video file along with a video-processing configuration, which the server then uses to perform the same operations.
The backend implementation differs depending on which language you prefer, so I won't give you a ready snippet of server code; a sketch of the client side follows below.
Of course, you can switch the order of tasks: upload the original file first, then simulate the video processing on the client side, and finally send the configuration to the backend to process the video.
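For instance, a minimal sketch of the client-side upload, assuming a hypothetical /upload endpoint and a configuration shape of your own design:

// Send the original file plus the editing configuration collected
// in the canvas-based "editor"; the server applies the same edits.
const form = new FormData();
form.append('video', fileInput.files[0]);
form.append('config', JSON.stringify({
  trim: { start: 0, end: 10 },        // seconds
  crop: { width: 640, height: 360 }   // target resolution
}));
fetch('/upload', { method: 'POST', body: form });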
2. Process video within WebAssembly
If you really need to keep everything on the client side, you can try a WebAssembly port of a library such as https://github.com/ffmpegwasm/ffmpeg.wasm and send the already-processed video file to the server.
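A minimal sketch with ffmpeg.wasm, assuming the older createFFmpeg API (releases from 0.12 onward expose a different FFmpeg class) and placeholder crop values:

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

async function trimAndCrop(file) {
  await ffmpeg.load();
  ffmpeg.FS('writeFile', 'in.webm', await fetchFile(file));
  // -t 10 trims to 10 seconds; the crop filter values are placeholders
  await ffmpeg.run('-i', 'in.webm', '-t', '10',
                   '-vf', 'crop=640:360', 'out.webm');
  const data = ffmpeg.FS('readFile', 'out.webm');
  return new Blob([data.buffer], { type: 'video/webm' });
}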

How to send a MediaStream between two iframes

I have two iframe elements, each loaded with its own document, both from the same origin.
The document loaded in the first frame obtains some media stream (using getUserMedia) and will attach the stream to a player.
The document loaded in the second frame also has a player and I want to re-use the same media stream for this player as well.
Searching for a solution I came across the RTCPeerConnection class article at Mozilla Developer Network and then some examples of using it.
But it looks really heavy for a simple use case like mine. I just want to share the stream between two frames in the same browser and on the same computer.
Is what I have found the only way to achieve this?
If so is there any way to improve the performance (less CPU usage)?
Or is there another way to achieve the above mentioned use case?
I can't quote a relevant Web standard publication at the moment to assert that the behaviour I am about to describe is standardised, but I have verified empirically that playing back a media stream in another frame is trivial to accomplish with Google Chrome, Microsoft Edge, or Mozilla Firefox, provided the frames/documents share the same origin.
Since these are the most popular user agents, especially for applications that depend on the MediaStream class, their capability should suffice for your application, I presume.
The crux of the solution is that the aforementioned user agents do not distinguish between frames of the same origin with regard to playing back a media stream.
Meaning that yes, if you can use the following code in your application, assuming player refers to some HTMLMediaElement and media_stream to some MediaStream object:
player.srcObject = media_stream;
...then the code will work "from another frame" as well (provided that other frame is of the same origin, of course).
There is no special case that you have to address. To play back the same media stream in multiple documents/frames, simply assign the same media stream object to the srcObject property of a media element in each of the documents, as long as the documents share the same origin.
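For example, a minimal sketch from the parent document, assuming two same-origin iframes with hypothetical ids frame-a and frame-b, each containing a video element:

// Obtain the stream once, then hand the very same object
// to a player inside each same-origin frame.
async function shareStream() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  for (const id of ['frame-a', 'frame-b']) {
    const frameDoc = document.getElementById(id).contentDocument;
    frameDoc.querySelector('video').srcObject = stream;
  }
}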
The performance should arguably be optimal, since the media stream is one and the same and is thus "shared" by all media playback elements. You are not duplicating the stream, after all.
I am certain the proposed solution becomes invalid when you attempt to play back a media stream created in the context of one origin with a media playback element associated with another origin. You may be able to duplicate the media stream by copying its data segments, blob by blob or source buffer by source buffer, using message passing between frames that cooperate on either end of the communication channel (through postMessage), but that will definitely not be performance-optimal, I'd imagine, if it is possible at all.

JavaScript Media Source Extensions - Appending after the initialization segment is not working

I have a fully working streaming web app using getUserMedia with MediaRecorder on one side and Media Source Extensions on the other. The transfer is realized through WebSockets.
The only, but significant, problem happens when the viewer reloads the page, because the MediaSource needs the initialization segment first (only the first chunk contains it) to be able to append chunks from the middle of the stream.
So I handle the first stream chunk on the server side and extract the initialization segment from it. I guess it looks correct, doesn't it?
When I append it to the buffer, all seems to be fine; the MediaSource's readyState is "open". But now, when I append some stream chunk from the middle of the stream, readyState changes to "ended" and nothing is played.
I am absolutely lost about what I can do to make it work. Could anyone help me, please?
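For reference, the viewer-side append flow being described is roughly the following sketch, assuming a WebM stream arriving over a WebSocket named ws whose binaryType is 'arraybuffer', and a MIME string that matches the recorder's output. Note that after the initialization segment, the next appended media segment generally has to begin at a cluster containing a keyframe, which is the same constraint discussed in the answer at the top of this page:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // Assumption: must match what MediaRecorder actually produced.
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,opus"');
  const queue = [];

  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  });

  ws.onmessage = (event) => {          // first message must be the init segment
    const chunk = new Uint8Array(event.data);
    if (sourceBuffer.updating || queue.length) {
      queue.push(chunk);
    } else {
      sourceBuffer.appendBuffer(chunk);
    }
  };
});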

Video.js download chunk instead of the whole video

I'm using Video.js to play various videos, some bigger than others.
Here's a simple scenario: a video that is 100 MB in total, with a duration of 10 minutes, starts playing. If the user skips to minute 2, a call is made to the backend to serve the whole remaining video.
That's not good as far as user experience goes. The download time can be quite long, and the player is stuck loading until it's finished.
Ideally, I'd want it to download in chunks of 5-10 seconds.
Honestly, JavaScript isn't my strong point, so I don't really know where to begin with that.
The backend accepts byte ranges, and I also have Varnish in front of it.
Also, I'm not opposed to using another video player if the one I'm currently using isn't suitable or doesn't support what I'm looking for.
Any pointer in the right direction is greatly appreciated.
For anybody who comes across this question and has the same problem:
https://info.varnish-software.com/blog/caching-partial-objects-varnish
Also make sure that Varnish forwards the Range header.
This is quite possibly an issue with your file or server configuration, and not necessarily Video.js. When you want users to be able to seek beyond the current buffer, you're usually talking about pseudo-streaming.
To do this, your server must:
Support byte-range requests (you indicated that your back-end does support this)
Return the correct content-type header
Since you stated your server does support byte-range requests, I'd double-check the Content-Type header.
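To make the requirement concrete, here is a minimal sketch of byte-range serving in Node/Express (hypothetical file name and port; your backend stack may differ). The relevant parts are the 206 status and the Accept-Ranges, Content-Range, and Content-Type headers:

const express = require('express');
const fs = require('fs');
const app = express();

app.get('/video.mp4', (req, res) => {
  const path = 'video.mp4';
  const size = fs.statSync(path).size;
  const range = req.headers.range;     // e.g. "bytes=1000-"
  if (!range) {
    res.writeHead(200, {
      'Content-Type': 'video/mp4',
      'Content-Length': size,
      'Accept-Ranges': 'bytes'
    });
    fs.createReadStream(path).pipe(res);
    return;
  }
  const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;
  res.writeHead(206, {
    'Content-Range': 'bytes ' + start + '-' + end + '/' + size,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4'
  });
  fs.createReadStream(path, { start, end }).pipe(res);
});

app.listen(8080);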
Also, if you are using H.264 MP4 files, you might need to optimize them for streaming by moving the metadata (MOOV atom) to the beginning of the file. Some video encoders also refer to this as "fast start". A standalone application that can do this to already encoded MP4s is qtfaststart.
Otherwise, VideoJS should support seeking automatically. You can find a number of examples of them on JSFiddle.
You can also try to seek programmatically to see if that behaves any differently:
// Video.js exposes a global videojs() function; pass your video element's id.
var player = videojs("video");
player.play();
player.currentTime(340); // time in seconds to seek to

How to prefetch images into array and display them in browser in infinite loop

Can anyone guide me on how to achieve this? I am listing the requirements as pointers:
A Linux binary captures frames from the locally attached web camera and stores them in a folder. This is a continuous process, and the images are stored numerically.
I have a web server which outputs the latest image received from the web camera. This is a PHP file which fetches the most recent image and prints it out.
What I have now is JavaScript which refreshes the image every second and displays it in an img tag.
Though it works, the output is slow and updates only one frame at a time.
I am trying to display the images quickly, so that it looks like an MJPEG movie being played (not that it has to be that good, as I learned from the forums that HTTP does have its overhead).
<script type="text/javascript">
function refresh() {
    // Cache-bust with the current time so the browser refetches the image.
    document.images["pic1"].src = "/latimage.php?camid=$selectedcamid&ref=" + new Date().getTime();
    setTimeout(refresh, 1000);
}
if (document.images) window.onload = refresh;
</script>
<img src='/latimage.php?camid=$selectedcamid' id='pic1'>
The above code works fine, but I want to display the frames obtained from the webcam more quickly: at least 3 to 4 frames per second.
As I understand from my searches so far, it is not feasible to refresh too quickly, as each HTTP request takes time.
I am trying to find some details on getting this done using a method by which I can prefetch 100 frames into an image array (I would call it buffering) and start displaying one image at a time at a rate of 3 images per second.
While displaying the images, the older images should be removed from the array and the latest ones fetched should be appended at the end, so the loop is infinite. (A rough sketch of this idea follows below.)
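A rough sketch of that buffering loop, under the assumption that frames can be requested by a sequential id (the frame.php endpoint and its id parameter are hypothetical):

<script type="text/javascript">
var buffer = [];   // preloaded Image objects, oldest first
var nextId = 0;    // assumes sequential frame numbering on the server

// Fetch frames slightly faster than we display them (every 250 ms).
setInterval(function () {
    var img = new Image();
    img.onload = function () { buffer.push(img); };
    img.src = "/frame.php?id=" + (nextId++);
}, 250);

// Display roughly 3 frames per second, dropping consumed frames.
setInterval(function () {
    if (buffer.length > 0) {
        document.images["pic1"].src = buffer.shift().src;
    }
}, 333);
</script>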
I am sorry for asking so many questions; I am unable to find a proper direction to start off with. I can do the above in a .NET Windows application quite easily, but in the web browser I am unable to come up with any ideas. I am not sure whether a jQuery image array, JSON, or plain JavaScript would do.
I need some guidance, please.
If you need to capture the camera output to disk, then I suggest capturing the camera output as video (at 3 FPS) and then streaming that video file to your browser using WebSockets. Here is an example of doing that. If you are willing to run nginx on your server, then live_thumb is a complete solution that captures and streams video via WebSockets.
On the other hand, if your goal is just to view the output of the camera and you don't need to store the video, you could consider using WebRTC: run a browser at both ends and just hook up the media stream. In other words, one browser (perhaps a headless variant) would run on the system with your camera and stream the video to your other browser using WebRTC. With WebRTC you could get much higher frame rates, and your bandwidth would probably still be significantly lower than sending individual images at a slow frame rate.
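On the receiving browser, once the RTCPeerConnection has been negotiated (signaling omitted, since it depends on your setup), hooking up the media stream is roughly a one-liner:

// "pc" is an RTCPeerConnection already negotiated with the camera-side browser.
pc.ontrack = function (event) {
    document.querySelector('video').srcObject = event.streams[0];
};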
