I'm using VideoJS to play various videos, some bigger than others.
Here's a simple scenario: a video starts playing that is 100 MB in total, with a duration of 10 minutes. If the user skips to minute 2, a call is made to the backend to serve the whole remaining video.
That's not good as far as user experience goes. The download time can be quite long, and the player will be stuck loading until it's finished.
Ideally, what I'd want is for it to download in chunks of 5-10 seconds.
Honestly, JavaScript isn't my strong point, so I don't really know where to begin in doing that.
The backend accepts byte ranges, and I also have Varnish in front of it.
Also I'm not opposed to using another video player if the one I'm currently using is not ok or for some reason doesn't support what I'm looking for.
Any pointer in the right direction is greatly appreciated.
For anybody who comes across this question and has the same problem:
https://info.varnish-software.com/blog/caching-partial-objects-varnish
Also make sure that Varnish forwards the Range header.
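For the simplest setup, a minimal VCL sketch (Varnish 4 syntax; the article above covers the fancier caching variant) is to pass range requests straight through to the backend:
sub vcl_recv {
    # Let the backend handle byte ranges itself instead of having
    # Varnish fetch the whole object first.
    if (req.http.Range) {
        return (pass);
    }
}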
This is quite possibly an issue with your file or server configuration, and not necessarily with VideoJS. When you want users to be able to seek beyond the current buffer, you're usually talking about pseudo-streaming.
To do this, your server must:
Support byte-range requests (you indicated that your back-end does support this)
Return the correct Content-Type header
Since you stated your server does support byte-range requests, I'd double-check the Content-Type header.
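A quick way to verify both is a HEAD request from the browser console; the URL here is just a placeholder:
fetch("/videos/example.mp4", { method: "HEAD" }).then(function (res) {
    console.log("Content-Type:", res.headers.get("Content-Type")); // should be e.g. video/mp4
    console.log("Accept-Ranges:", res.headers.get("Accept-Ranges")); // "bytes" if ranges are supported
});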
Also, if you are using H.264 MP4 files, you might need to optimize them for streaming by moving the metadata (the MOOV atom) to the beginning of the file. Some video encoders refer to this as "fast start". A standalone application that can do this to already-encoded MP4s is qtfaststart.
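If you have ffmpeg available, something like ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4 should do the same remux without re-encoding (filenames are placeholders).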
Otherwise, VideoJS should support seeking automatically. You can find a number of examples on JSFiddle.
You can also try to seek programmatically to see if that behaves any differently:
var player = videojs("video"); // "video" is the id of your <video> element
player.play();
player.currentTime(340); // time to seek to, in seconds
Related
Can anyone guide me on how to achieve this? I am listing the requirements as pointers.
A Linux binary captures frames from the locally attached webcam and stores them in a folder. This is a continuous process. The images are stored numerically.
I have a web server which outputs the latest image received from the webcam. This is a PHP file which fetches the most recent image and prints it out.
What I have now is a JavaScript snippet which refreshes the image every second and displays it in the img tag.
Though it works, the output is slow, updating one frame at a time.
I am trying to display the images quickly, so that it looks like an MJPEG movie being played (not that it has to be that good, as I learned from the forums that HTTP does have its overhead).
<script type="text/javascript">
function refresh() {
    // Append a timestamp so the browser doesn't serve a cached image.
    document.images["pic1"].src = "/latimage.php?camid=$selectedcamid&ref=" + new Date().getTime();
    setTimeout(refresh, 1000);
}
if (document.images) window.onload = refresh;
</script>
<img src='/latimage.php?camid=$selectedcamid' id='pic1'>
The above code works perfectly, but my unsatisfied mind wants to display the frames from the webcam more quickly: at least 3 to 4 frames per second.
As I understood from my searches so far, it is not too feasible to refresh too quickly, since each HTTP request does take time.
I am trying to find some details on getting this done using a method by which I can prefetch 100 frames into an image array (I would call it buffering) and start displaying them one at a time at a rate of 3 images per second, as in the sketch below.
While displaying the images, the older images should be removed from the array and the latest ones fetched should be inserted at the end. Thus the looping is infinite.
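For reference, a minimal sketch of that buffering idea (untested; it reuses /latimage.php and the pic1 image from the snippet above):
var buffer = [];
var FPS = 3;

function prefetch() {
    // Fetch frames ahead of playback and queue them up.
    var img = new Image();
    img.onload = function () {
        buffer.push(img);
    };
    img.src = "/latimage.php?camid=$selectedcamid&ref=" + new Date().getTime();
    setTimeout(prefetch, 1000 / FPS);
}

function display() {
    // Show the oldest buffered frame and drop it from the front,
    // so old images leave the array as new ones are appended.
    if (buffer.length > 0) {
        document.images["pic1"].src = buffer.shift().src;
    }
    setTimeout(display, 1000 / FPS);
}

prefetch();
setTimeout(display, 3000); // give the buffer a small head start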
I am sorry for asking too many questions, but I am unable to find any proper direction to start off with. I can do the above in a .NET Windows application quite easily, but in the web browser I am unable to get any ideas. I am not sure if a jQuery image array, JSON, or simple JavaScript would do.
I need some guidance, please.
If you need to capture the camera output to disk, then I suggest capturing the camera output as video (at 3 FPS) and then streaming that video file to your browser using WebSockets. Here is an example of doing that. If you are willing to run nginx on your server then live_thumb is a complete solution that captures and streams video via WebSockets.
On the other hand, if your goal is just to view the output of the camera and you don't need to store the video, you could consider using WebRTC and running a browser at both ends and then just hooking up the media stream. In other words one browser (perhaps a headless variant) would run on the system with your camera and would stream the video to your other browser using WebRTC. With WebRTC you could get much higher frame rates and your bandwidth would probably still be significantly lower than sending individual images at a slow frame rate.
I want to randomly seek to different points in a ~30-minute video every 30 seconds. The file size will be 100 MB. When I seek, does the player start loading from that point, or does it have to load the entire file and then find that time within it?
It depends on the browser. With a modern browser, when you seek, it will typically send a new HTTP request to the server containing a Range: header indicating what "chunk" of the file it wants to load. This only applies to browsers using HTTP 1.1 or higher, but if the browser supports HTML5 video you can be fairly certain it does. Keep in mind, though, that the client will typically always be loading something: if you seek to 5 seconds into the video, it will essentially start loading everything from there until another seek happens.
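For illustration, you can watch the same mechanism from JavaScript by sending a Range header yourself; the URL and byte offset here are made up:
fetch("/videos/example.mp4", { headers: { "Range": "bytes=3145728-" } })
    .then(function (res) {
        // 206 Partial Content means the server honored the range.
        console.log(res.status, res.headers.get("Content-Range"));
    });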
No, it starts loading from the given timestamp, as long as the browser knows the duration of the video.
I'm wondering, since HTML5 and JavaScript work so well together, if there is a solution in HTML5 to generate a video file from many images?
For example, you're able to load a video into a canvas and make it appear as a greyscaled video by manipulating the canvas. However, I would like to know
if there is a method to generate a video file out of that greyscaled version. That would make sense if you want to send the video via WhatsApp etc.
Thank you
Here we go:
Article: http://techslides.com/convert-images-to-video-with-javascript
Demo: http://techslides.com/demos/image-video/create.html (select multiple images at once)
Code: [just view the source]
You can download a .webm video file.
@K3N's answer mentions building an encoder. Luckily there is one - https://github.com/antimatter15/whammy - a snippet from the article:
You need a video encoder and today I just happened to stumble on Whammy, a real time JavaScript WebM Encoder.
There is currently no built-in API to do video encoding in HTML5. There is work in progress, though, to allow basic video and audio recording - but it's not available at this time (audio recording is available in Firefox - it is also limited to streams).
If you are OK with a GIF animation, you can encode the frames as a GIF using one of the encoders out there (see below).
For video there have been attempts, more or less successful (the project I had in mind does not seem to be available anymore), but there have been issues from one browser to another.
There is the option of building an encoder yourself, low-level style, following the video encoding and file format specifications. It's doable, but it's not a small project.
In any case, encoding video is a pretty performance-hungry task, even for natively compiled applications. Running such a task in the browser will be an even slower process and probably not practical for many users (and it will drain the batteries of mobile devices).
The better approach, IMO (at the moment at least, until the aforementioned API becomes available), is to send the images to a server and have a backend handle the encoding jobs, then send the result back to the client. This way you can use multi-threading, offload the client, use natively compiled encoders such as ffmpeg, and the resulting video can be streamed back.
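As a sketch of that server-side step, assuming ffmpeg is installed and the uploaded frames were saved as frames/frame-000.png, frames/frame-001.png, and so on:
// Node.js: shell out to ffmpeg to encode the uploaded frames.
var execFile = require("child_process").execFile;

execFile("ffmpeg", [
    "-framerate", "25",            // input frame rate
    "-i", "frames/frame-%03d.png", // numbered frames from the client
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",         // broadest player compatibility
    "out.mp4"
], function (err) {
    if (err) throw err;
    console.log("encoded out.mp4");
});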
Some resources
MediaStream Recording API
Gif encoder 1
Gif encoder 2 (NodeJS)
HTML5 Video recording information and status
Realtime video encoder (NodeJS/ffmpeg)
libvpx (requires emscripten/asm.js)
Hi, I have built it using the code provided by techslides.
I also made a template application where you can take a list of images and turn them into video format. You have to edit the code according to your own needs. It is only supported in Chrome and YouTube, though. So basically, in a JavaScript file you turn the images into canvases, then turn the canvases into video using the whammy.js function. You need to set an event listener and load the video into a video tag. Whammy.js only produces a WebM file. To turn it into MP4:
Upload it to YouTube, then download it from YouTube as MP4. Hope it helps.
Just a follow-on from @michal's answer: whammy is no longer maintained; however, there's a modern fork of the whammy encoder at ts-whammy.
See this answer to get a data URL for an image
import tsWhammy from "ts-whammy/src/libs";

// images can come from: canvas.toDataURL(type, encoderOptions)
const images = [
    "data:image/webp;base64,UklGRkZg....",
    "data:image/webp;base64,UklGRkZg....",
];

// Make a 5 second video
const blob = tsWhammy.fromImageArrayWithOptions(images, { duration: 5 });
console.log(blob.type, blob.size);
Basically, I'm trying to play some live audio streams in an app I'm porting to the browser.
Stream example: http://kzzp-fm.akacast.akamaistream.net/7/877/19757/v1/auth.akacast.akamaistream.net/kzzp-fm/
I have tried the HTML5 audio tag and jPlayer with no luck. I know next to nothing about streaming audio; however, when I examine the HTTP response header, the specified content type is "audio/aacp" (not sure if that helps).
I'm hoping someone with more knowledge of audio formats could point me in the right direction here.
The problem isn't with AAC+ being playable; the issue is with decoding the streaming AAC wrapper called ADTS. The Audio Data Transport Stream [pdf], or "MP4-contained AAC streamed over HTTP using the SHOUTcast protocol", can be decoded, and therefore played, by only a couple of media players (e.g., foobar2000, Winamp, and VLC).
I had the same issue while trying to work with the SHOUTcast API to get HTML5 Audio playback for all the listed stations. Unfortunately it doesn't look like there's anything that can be done from our perspective, only the browser vendors can decide to add support for ADTS decoding. It is a documented issue in Chrome/WebKit. There are 60+ people (including myself) following the issue, which is marked as "WontFix".
I have a 1080p video that I'm displaying in an HTML5 <video> tag on my page.
Is there a simple(ish) JavaScript method of detecting bandwidth so I can switch out the video for lower-quality versions if the user's connection is too slow to stream it? Similar to the logic behind YouTube's 'auto' video size chooser.
Depending on what player and encoding platform you are using, you may be able to use HLS encoding for your videos. HLS stands for HTTP Live Streaming, a protocol developed by Apple primarily to solve this problem (among others).
HLS basically breaks your video file into multiple small files so that they can be "pseudo-streamed" using a simple web server. With HLS you can also encode in multiple resolutions, and a player may be able to switch to a lower- or higher-bandwidth rendition.
The only downside is that most of the players use Flash to play HLS encoded content. Check it out in action here: http://www.flashls.org/latest/examples/chromeless/
Here's an HLS demo for Flowplayer:
http://demos.flowplayer.org/basics/hls.html
And here is a plugin for VideoJS:
https://github.com/videojs/videojs-contrib-hls
To encode in HLS, you can either use ffmpeg for free and upload files to your server:
https://www.ffmpeg.org/ffmpeg-formats.html#hls-1
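For example, something like ffmpeg -i input.mp4 -codec copy -hls_time 10 -hls_list_size 0 -f hls output.m3u8 segments an existing MP4 without re-encoding (adjust the flags to taste; see the documentation above).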
Or, you can use a cloud-based solution like AWS Elastic Transcoder or Brightcove:
https://aws.amazon.com/elastictranscoder/
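Once you have an HLS playlist, pointing VideoJS at it with the plugin above is short; the element id and URL here are placeholders:
var player = videojs("my-video");
player.src({
    src: "https://example.com/stream/index.m3u8",
    type: "application/x-mpegURL"
});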
In Google Chrome, at least, there are these properties on a video element:
webkitVideoDecodedByteCount: 0
webkitAudioDecodedByteCount: 0
These should be enough to determine how fast the client can decode the video. As the video plays, you would keep track of the delta in these byte counts over time, which gives you the bytes per second at which the client is processing the video.
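A rough sketch of that sampling (these properties are Chrome/WebKit-specific, so treat availability elsewhere as an assumption):
var video = document.querySelector("video");
var lastBytes = 0;

setInterval(function () {
    // Delta of decoded bytes over one second gives the decode rate.
    var bytes = video.webkitVideoDecodedByteCount || 0;
    console.log("decode rate:", bytes - lastBytes, "bytes/s");
    lastBytes = bytes;
}, 1000);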
For a more up-to-date answer: MPEG-DASH is in the process of replacing HLS. HLS is used mainly in iOS land; most desktop browsers do not plan to support it, and DASH is the standard they are moving toward (however, there are plenty of players, like hls.js, designed to let you use HLS with the HTML5 video element). DASH players include Bitmovin, Google's Shaka, and more. Many people currently encode for both HLS and DASH. HLS also supports fragmented MP4. Please note that you will need to encode your videos correctly server-side. Additional resource: http://www.streamingmedia.com/Articles/Articles/Editorial/Featured-Articles/The-State-of-MPEG-DASH-2016-110099.aspx
I hate that feature! It's usually wrong, and if I want to wait 2 hours to load my dang video, then wait I shall! There's no reliable way to accurately measure this without sending a large dummy file to the user and measuring the time it takes to reach him.
You should count on user input (and remember it correctly! Also unlike YouTube!).
In short, don't take YouTube as an example.
There are paid services that may give you an indication of what bandwidth the other party is on, like netspeed.
The accuracy of the data may be enough for you, but I haven't had the chance to test it myself.