Video does not play through even if enough content has been appended - javascript

I have a setup where I send a 10-minute-long video (Elephants Dream) over WebSockets, chunked into short segments of 4 s each.
I use the browser as the client, with the WebSocket API to receive the content and an HTML5 video tag as the player, to which I append the chunks as they arrive using Media Source Extensions.
The thing is that there seems to be a limit somewhere (max receive buffer size, max MediaSource SourceBuffer size, max buffered content on the video element, etc.) so that the video does not play correctly to the end but stops earlier, even though there is enough data.
All of the segments arrive correctly and are appended in time. Meanwhile, the video starts playing back from the beginning.
You can see the grey line on the player showing buffered video grow until, at some point, it stops growing, and the video stops playing when it reaches that position.
However, according to the output messages, the full video has been appended to the MediaSource, which can also be verified by manually seeking to a position in the future or past. It looks like only a fraction of the content is ever "loaded".
Since I'm testing on localhost the throughput is very high, so I tried lowering it to more common values (still well above the video bitrate) to see if I was overloading the client, but this did not change anything.
I also tried different segment sizes, with exactly the same results, except that the point in time where playback stops is different.
Any idea where this limitation could be, or what may be happening?

I think you have a gap in the buffered data. Browsers have a limited buffer size to which you can append. When that limit is reached and you append additional data, the browser will silently free some space by discarding frames it thinks it does not need from the buffer. In my experience, if you append too fast, you may end up with gaps in your buffer. You should monitor the state of the buffered attribute while appending to see whether there is any gap.
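A minimal sketch of that kind of monitoring (the sourceBuffer variable is assumed to be your SourceBuffer): after each updateend, walk the buffered TimeRanges and log any discontinuity between consecutive ranges.

// Illustrative sketch: log any gaps between buffered ranges after each append.
sourceBuffer.addEventListener('updateend', function () {
  var ranges = sourceBuffer.buffered;
  for (var i = 1; i < ranges.length; i++) {
    var gapStart = ranges.end(i - 1); // end of the previous range
    var gapEnd = ranges.start(i);     // start of the next range
    console.warn('Buffer gap: ' + gapStart.toFixed(3) + 's to ' + gapEnd.toFixed(3) + 's');
  }
});

If this ever logs more than one range, playback will stall when the playhead reaches the end of the first range, which matches the symptom described.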

Are you changing representations right before it stops? When you change representations, you need to append the init segment for the new representation before you append its next media segment.
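For illustration, a hedged sketch of that ordering (all names here are assumptions, not from the question's code):

// Illustrative sketch: on a representation switch, append the new
// representation's init segment before any of its media segments.
function appendWhenReady(sourceBuffer, data) {
  return new Promise(function (resolve) {
    sourceBuffer.addEventListener('updateend', resolve, { once: true });
    sourceBuffer.appendBuffer(data);
  });
}

async function switchRepresentation(sourceBuffer, initSegment, mediaSegment) {
  await appendWhenReady(sourceBuffer, initSegment);  // init segment first
  await appendWhenReady(sourceBuffer, mediaSegment); // then its media data
}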

Related

Is it possible to use canvas to play a video, but through animating the pixels?

So the thing is: I want to play a video on a website that does not allow me to play any videos or show any photos.
Therefore I came up with an idea: have the client-side script download one picture from the server, so that we can avoid tainting the canvas.
After that, the server will extract pixel data from the first frame of the video and send that data to the client-side script to be processed, replacing the default pixel data already on the canvas with the data that has been received.
Then, for efficiency, the server will compare the first frame with the second; if, for example, the first frame contains a pixel with the same color and position as the second frame, the server will only send the pixels that need to be changed. This means the server will only send the pixel data that is needed.
This process will repeat automatically until the end of the video.
So my full question is: is this possible? And if yes, will it slow down the user's device, and if so, is there a way to improve efficiency?
That might indeed be possible, and it's an interesting theoretical subject in video frame manipulation.
But in practice, it has a taste of reinventing the wheel. The following behaviour is basic to many video compression formats:
the server [or the file format] will only send the pixels that need to be changed
To be efficient, that can be combined with a motion vector showing how the constant part has moved from the previous frame, in a predicted frame (P-frame).
And yes, it will most probably slow down the user interface, as it would never reach the efficiency of a real video stream specifically designed for this purpose.
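For completeness, here is an illustrative client-side sketch of the sparse-diff idea the question describes (the diff payload format is an assumption, not something a real server provides):

// Illustrative sketch: apply a sparse pixel diff to a canvas.
// `diff` is an assumed server payload: an array of { index, r, g, b, a }.
function applyDiff(ctx, width, height, diff) {
  var frame = ctx.getImageData(0, 0, width, height);
  for (var i = 0; i < diff.length; i++) {
    var px = diff[i];
    var o = px.index * 4; // 4 bytes per pixel (RGBA)
    frame.data[o] = px.r;
    frame.data[o + 1] = px.g;
    frame.data[o + 2] = px.b;
    frame.data[o + 3] = px.a;
  }
  ctx.putImageData(frame, 0, 0);
}

Even then, every changed pixel costs several uncompressed bytes on the wire, which is exactly the inefficiency that codec-level differential compression and motion vectors were designed to solve.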

SourceBuffer.remove(start, end) removes whole buffered TimeRange (How to handle realtime stream with MSE?)

I have a SourceBuffer with a single entry in .buffered. I have a realtime stream of raw H.264 data arriving, which I package into MP4 and push into the SourceBuffer with .appendBuffer(data). Since this is a realtime stream of data I need to keep clearing the buffer; however, this is where I encounter my problem (i.e. I encounter a QuotaExceededError).
For example's sake, my single entry in SourceBuffer.buffered has a time range of 0-10 seconds. My attempt to tidy the buffer is to call SourceBuffer.remove(0, 8). My expectation is that my buffer would be trimmed and I'd be left with a time range of 8-10. However, the entire time range (my only range) is removed, and from this point all further appendBuffer calls seem to do nothing.
Three questions relevant to this issue:
How do I either a) stop .remove from having this behaviour, or b) force new time ranges in my buffer so that only "old" ranges are removed?
Why do the later appendBuffer calls do nothing? I would expect them to re-populate the SourceBuffer.
Is there a better "MSE" way to handle a realtime stream where I never care about going back in time? I.e., all rendered data can be thrown away.
In case there's some weird Browser/Platform issue going on I'm using Chrome on Ubuntu.
Also, I am basing my code off of https://github.com/xevokk/h264-converter.
It's all in the MSE spec.
http://w3c.github.io/media-source/#sourcebuffer-coded-frame-removal
Step 3.3: Remove all media data, from this track buffer, that contain starting timestamps greater than or equal to start and less than the remove end timestamp.
So the user agent will remove all the data you've requested, from 0 to 8 s.
Then
Step 3.4: Remove all possible decoding dependencies on the coded frames removed in the previous step by removing all coded frames from this track buffer between those frames removed in the previous step and the next random access point after those removed frames.
The user agent will remove all frames that depend on the ones you've just removed. Due to the way H.264 works (and all modern video codecs), that is all frames following the last keyframe up to the next keyframe, as none of those frames can be decoded any more.
There is no keyframe in the 8-10 s range, so those frames are all removed as well.
Why do the later appendBuffer calls do nothing? I would expect them to re-populate the SourceBuffer.
You have removed data; as per the spec, the next frame you append must be a keyframe. If the segment you append contains no keyframe, nothing will be added.
If the data you append is made of a single keyframe at the start followed by only P-frames, then you can't remove any frames in the middle without rendering all the ones that follow unusable.
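A common way around this (sketched here with assumed names, and assuming your stream has regular keyframes) is to trim only well behind the playhead, so the frames removed by the dependency rule are ones you have already played:

// Illustrative sketch: keep a trailing window behind the playhead and
// remove only data older than that.
var KEEP_BEHIND = 10; // seconds of history to retain (an assumption)

function trimBuffer(video, sourceBuffer) {
  if (sourceBuffer.updating || sourceBuffer.buffered.length === 0) return;
  var start = sourceBuffer.buffered.start(0);
  var cutoff = video.currentTime - KEEP_BEHIND;
  if (cutoff > start) {
    // remove() is asynchronous: wait for 'updateend' before appending again
    sourceBuffer.remove(start, cutoff);
  }
}

Calling something like this on a timer (or on each updateend) keeps the buffer bounded without ever touching the range the playhead sits in, which avoids both the QuotaExceededError and the dead SourceBuffer.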

Chrome tab crashes when loading a lot of images in Javascript

I have a Javascript image sequence object that uses one <canvas> tag in the DOM, calling clearRect and drawImage quickly to play the sequence. There are 3 different sequences consisting of 1,440 images each; only one sequence needs to be loaded at a time, but having them all queued up would make the experience faster and smoother.
The images are pretty big in dimensions, 8680x1920 each, about 1.5 MB each as JPG. I have buttons that load each set individually instead of all at once. Everything is fine loading the first sequence set, but the second one crashes the tab (the "Aw, Snap" page) in Chrome 51 on Windows 7 Business.
Dev is happening on my Mac Pro and works perfectly, letting me load all 3 sequences just fine, even though the specs of my Mac Pro are far lower than the PC's. The PC is a quad-core i7, 32 GB RAM, 2x Nvidia Quadro M5000 cards with a Sync card. My understanding is that Chrome isn't even utilizing most of that advanced hardware, but we need it for other parts.
I have tried setting the existing image objects to an empty source and then setting them to null before loading the next sequence, and I have also tried removing the <canvas> tag from the DOM, but nothing seems to help. I also find that watching Chrome's Network tab shows the crashes always happen just after 1.5 GB has been transferred. Chrome's Task Manager shows the tab hovering around 8 GB of memory usage on both Windows and Mac with one sequence loaded.
This is an obscure, one-off installation that will be disconnected from the internet, so I'm not concerned so much about security or best practices, just getting it to work through any means necessary.
UPDATED to reflect that I recently changed the <img> tag to a <canvas> tag for performance reasons.
You should not be loading the entire sequence at once; you're most likely running out of RAM. Load only a few frames ahead in memory using JavaScript, then assign each image to your image tag as needed (see the sketch below). Be sure to clear that look-ahead cache by overwriting the variables or using the delete operator.
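A hypothetical sliding-window loader along those lines (LOOKAHEAD, cache, and urlForFrame are all illustrative names):

// Illustrative sketch: keep only a small window of frames in memory.
var LOOKAHEAD = 30; // frames to hold ahead of the current one (an assumption)
var cache = {};     // frameIndex -> Image

function preload(frameIndex, urlForFrame) {
  // Load the next LOOKAHEAD frames that are not cached yet
  for (var i = frameIndex; i < frameIndex + LOOKAHEAD; i++) {
    if (!cache[i]) {
      cache[i] = new Image();
      cache[i].src = urlForFrame(i);
    }
  }
  // Drop frames behind the window so they can be garbage-collected
  Object.keys(cache).forEach(function (key) {
    if (Number(key) < frameIndex) delete cache[key];
  });
}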
Secondly, changing the src attribute will cause the entire DOM to redraw. This is because when the src attribute changes, the image is assumed to have possibly changed size, which means all elements after it might have shifted and need redrawing.
It's a better idea to set the image as the background of a <div> and update the background-image style. You can also write the image to a <canvas>. In both cases only one element needs redrawing.
Finally, a <video> tag would probably be your best option, since it's designed to handle frame sequences efficiently. To make it possible to scrub to individual frames without lag, you can either encode with a "keyframe every 1 frame" setting, or simply encode the video in an uncompressed format that doesn't use keyframes. A keyframe is like a snapshot at a particular interval in a video; all subsequent frames only redraw the parts that have changed since the last keyframe. So if keyframes are far apart, seeking to a particular frame requires that the keyframe be rendered, and then all the subsequent frames in between be added to it, to get the final image of the frame you're on. Putting a keyframe on every frame will make the video larger, since it can't use differential compression, but it will seek much faster.

dynamically generating multiple thumbnails from a video src with javascript

Before you say it can't be done, please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement it using FFmpeg and other tools on the server side, which is great and simple enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless, what if I don't have a copy of the <video src=...> and I really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser... minus Chrome Canary, now using Blink, a WebKit fork). This kind of makes sense as to why certain browsers only support certain video containers.
So for the sake of simplicity: I want to make this functionality available only on Chrome and only for MPEG-4 AVC video containers. Why can't this be done if somehow I can actually view each frame of the video while it is played back?
Additional note:
The generating of video thumbnails is possible by drawing frames to a canvas, but this will only be part of a final solution to my problem. I'm looking to do this each and every time a video is viewed, not store images on my server after a first playback is completed by a user. What I would like to eventually work up to is generating thumbnails as the video is downloaded, which can be viewed while a user drags a scrollbar to ff/rw to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed a video into the canvas, as seen here on HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement, 0, 0, width, height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here:
get raw pixel data
get the base64 encoded data
As for saving, send the data to the server to reconstruct an image from it, and save it to disk. I also suggest you size your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically manages scaling.
Of course, this is limited by the video formats the browser supports, as well as by its support for canvas and video.
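A brief illustrative sketch of the timer-based capture, covering both options from the list above (the dimensions and interval are assumptions):

var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
canvas.width = 160;  // thumbnail size (an assumption)
canvas.height = 90;

function grabFrame(video) {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  var raw = ctx.getImageData(0, 0, canvas.width, canvas.height); // raw pixel data
  var b64 = canvas.toDataURL('image/jpeg');                      // base64-encoded image
  return { raw: raw, b64: b64 };
}

// e.g. capture a thumbnail once per second while the video plays
setInterval(function () {
  grabFrame(document.querySelector('video'));
}, 1000);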
Generating thumbnails during the first render? You'd run into problems with that, since:
You can't generate all frames unless the whole video is rendered in the video element.
Suppose you have generated thumbnails during the first run and want to use them for further runs. Base64 data is very long, roughly a third larger than the binary file size of the image. A raw pixel data array is width × height × 4 in length. The most viable storage candidate is localStorage, which is just 5-10 MB depending on the browser.
There is no way to cache the generated images in the browser cache (though there could be a cache hack using data URLs that I don't know of).
I suggest you do it on the server instead. It's too much burden and hassle to do on the client side.

Using Javascript/jquery to access picture content for a frame in a video

Question
I want to write efficient code for playing/manipulating images in a web browser using JavaScript. I think using a video file may cut down on HTTP requests. How can I access a PNG/JPG/bytecode version of a single frame from a video?
Background
Currently, I have a sequence of ~1000 images, which vary ever so slightly, that need to be quickly accessible on my page. Loading the images via HTTP requests takes forever (obviously), and as my app grows, it is likely that this number will grow from 1,000 to 5,000 to 10,000...
Ajax requests for individual images will not work, because I need each image to load immediately (and don't have time to wait for a new HTTP request).
My idea was to pre-process a video file on the server which shows the image progression (one image per frame) to speed up the download rate and the browser's performance. I feel like this video could download to the client quickly, based on the speed of watching videos online. I'm getting stuck on how to get the picture content for a frame out of the video.
HTML5?
Note: I haven't looked into HTML5 yet, but I would be willing to consider it if it may help.
You can draw video frames to an HTML5 canvas.
I created this fiddle by combining this and this.
The key point is getting the current video frame and drawing it onto the canvas element.
var delay = 20; // ms between frame grabs

// cvideo is the <video> element; ccanvas is the canvas 2D context
function draw(cvideo, ccanvas, canvas_width, canvas_height) {
  if (cvideo.paused || cvideo.ended) return false;
  // Copy the current video frame onto the canvas
  ccanvas.drawImage(cvideo, 0, 0, canvas_width, canvas_height);
  // Schedule the next grab, passing the same arguments along
  setTimeout(draw, delay, cvideo, ccanvas, canvas_width, canvas_height);
}
After getting it into the canvas you can do almost anything with the image.
Instead of using a video file you could use image sprites: basically, combine multiple images into a single big image, of which you always show only the appropriate region (assuming all images have the same dimensions, this is easy). This would largely reduce the number of necessary HTTP requests and speed up the loading process in turn.
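As a hypothetical sketch of the sprite approach (the file name and frame size are assumptions), with all frames stacked vertically in one sheet:

// Illustrative sketch: draw frame `index` from a vertical sprite sheet.
var FRAME_W = 320, FRAME_H = 180; // per-frame size (assumptions)
var sheet = new Image();
sheet.src = 'frames-sprite.png';  // assumed sprite sheet, one frame per row

function drawFrame(ctx, index) {
  // Source rectangle selects row `index` of the sheet;
  // destination rectangle is the canvas origin.
  ctx.drawImage(sheet, 0, index * FRAME_H, FRAME_W, FRAME_H,
                0, 0, FRAME_W, FRAME_H);
}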
