As shown in the attached image, I'm trying to compare Video1 with Video4, or Video2 with Video3.
In a peer-to-peer WebRTC connection, I'm trying to compare the video input on peer1 to the output on peer2. Basically I'm testing the quality of the video, and in theory I can do that by:
Checking the video itself frame by frame.
Taking a screenshot on both sides and checking the image resolution.
I've seen this Google video where they mention a complex but clever idea: feeding in a video with barcodes stitched into the frames and comparing the unique frame IDs on each side. But it's written in C, and I'm using Protractor.
Has anyone tried to calculate the resolution of an image or analyze video frames? Any help would be appreciated, thanks.
The WebRTC testing tool from testrtc has some code related to that, in particular the cam resolution test, which extracts the video frames via a canvas.
If you want to feed a special video stream from a file, that is doable with Chrome's use-file-for-fake-video-capture flag.
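For example, in a Protractor config you could launch Chrome with the fake-capture flags — a minimal sketch, assuming your barcode video has been converted to Y4M format (the spec name and file path are placeholders):

exports.config = {
  specs: ['video-quality.spec.js'],
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: [
        '--use-fake-ui-for-media-stream',        // auto-accept the getUserMedia permission prompt
        '--use-fake-device-for-media-stream',    // replace the real camera with a fake device
        '--use-file-for-fake-video-capture=/path/to/barcode.y4m' // feed this file as the camera input
      ]
    }
  }
};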
Background:
I'm working on a video project with 50+ short videos (10 min, 720p) that I want to present online. My current architecture places 16 video tags in a 4x4 grid, randomly sets their sources on load using JavaScript, and on click zooms a video to cover the full screen until it is clicked again.
The problem:
Each video in 720p WebM is around 80 MB. With 16 videos that is about 1.3 GB in total, or 130 MB per minute, or roughly 2 MB per second, which I think is a ridiculous amount of data (maybe I'm wrong). The reason each video is so big (80 MB) is to support the zoom-to-full-screen feature.
My idea for a solution:
Encode each video in two resolutions, use the low resolution for the grid layout, and use the high resolution for the click-to-zoom view.
My question: how do I make this smooth? Can I preload the high-resolution video in the background, at the position of the low-resolution video, on click, and make the shift with a CSS transform? Or is there a better way to do this?
Secondary question: how should I host this online? Could I put the videos on Vimeo, maybe? Right now I'm using wordpress.com hosting.
The normal way to achieve something like that is to encode the video in an adaptive bitrate format. The two primary formats for that are HLS and MPEG-DASH, and most online encoding platforms can provide them as outputs. Normally you would encode 5-6 different qualities (this helps users on Wi-Fi, where bandwidth may be changing constantly), but you could easily encode just two different qualities.
Normally the players would be able to select the right quality automatically, but you can manage that yourself if you want.
If you are going to use HLS, you can use hls.js and its Quality Switch API. For MPEG-DASH, a good player to use would be Shaka Player and then set it like this:
player.configure({enableAdaptation: false});
player.selectVideoTrack(trackId);
If you want to switch specifically on fullscreen, just listen for the fullscreen events on the players.
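For example, with hls.js you could pin the quality level on fullscreen changes — a minimal sketch, where the manifest URL is a placeholder and the level indices depend on how your stream was encoded:

const video = document.querySelector('video');
const hls = new Hls();
hls.loadSource('https://example.com/video/master.m3u8'); // placeholder URL
hls.attachMedia(video);

document.addEventListener('fullscreenchange', function () {
  // highest level while zoomed to fullscreen, lowest while in the grid
  hls.currentLevel = document.fullscreenElement ? hls.levels.length - 1 : 0;
});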
I found this: https://webrtchacks.github.io/WebRTC-Camera-Resolution/. It is about video, and I assume the photo resolution can be much higher than the video resolution for the same camera.
Is there a way to check the camera's best resolution and take a photo (using HTML and JavaScript)?
Video and photos are essentially the same thing here. There isn't a separate "take photo" API the way there is a separate photo mode on your physical camera or phone.
To take a photo, you just turn on the video camera for a second, save a still frame, and then that is your photo.
Thus, everything that applies for video applies for photo.
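A minimal sketch of that approach — the 4096-pixel width constraint is just an assumption, used to ask the camera for its highest available resolution:

const video = document.createElement('video');

navigator.mediaDevices.getUserMedia({ video: { width: { ideal: 4096 } } })
  .then(function (stream) {
    video.srcObject = stream;
    return video.play();
  })
  .then(function () {
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;   // the resolution the camera actually delivered
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    const photo = canvas.toDataURL('image/png'); // this still frame is your "photo"
    video.srcObject.getTracks().forEach(function (t) { t.stop(); }); // turn the camera back off
  });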
I am building a small app that lets users add CSS3 filters, like grayscale, to a video and download it. The video won't be longer than 6 seconds. So I first load the video into the canvas and then apply the filter the user demands. Now I want the user to be able to download the filtered video. canvas.toDataURL() is only meant for images. Is there any high-level canvas API to achieve this?
Thank you
Not that I know of. I think this is something that should be done server-side: either send the raw video to the server and tell it which filters were applied so you can re-create the effect there, or use the solution proposed in "capturing html5 canvas output as video or swf or png sequence?" (hint: it's also server-side).
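A rough sketch of that PNG-sequence idea: capture filtered canvas frames while the 6-second clip plays, then upload them for server-side encoding (the /encode endpoint is hypothetical, and the server would stitch the frames together with something like ffmpeg):

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const frames = [];

function grab() {
  if (video.ended) {
    fetch('/encode', { method: 'POST', body: JSON.stringify(frames) }); // hypothetical endpoint
    return;
  }
  ctx.filter = 'grayscale(100%)'; // the filter the user picked
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  frames.push(canvas.toDataURL('image/jpeg', 0.8));
  requestAnimationFrame(grab);
}

video.play().then(grab);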
Before you say it can't be done, please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement this using ffmpeg and other tools on the server side, which is great and simple enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless: what if I don't have a copy of the <video src=... > file and really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser, minus Chrome Canary, which now uses Blink, a WebKit fork). This kind of explains why certain browsers only support certain video containers.
So, for the sake of simplicity: I want to make this functionality available only in Chrome and only for MPEG-4 AVC video. Why can't this be done if I can somehow actually view each frame of the video while it is played back?
Additional note:
So generating video thumbnails is possible by drawing frames to a canvas, but this will only be part of a final solution to my problem. I'm looking to do this each and every time a video is viewed, not store images on my server after a user completes the first playback. What I would eventually like to work up to is generating thumbnails as the video is downloaded, which can be viewed while a user drags a scrollbar to fast-forward/rewind to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed a video into a canvas, as shown on HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement,0,0,width,height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here:
get raw pixel data
get the base64 encoded data
As for saving, send the data to the server, reconstruct an image from it there, and save it to disk. I also suggest sizing your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically handles scaling.
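A minimal sketch of that timer loop, showing both options (the interval and frame rate are placeholders):

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

const timer = setInterval(function () {
  if (video.paused || video.ended) { clearInterval(timer); return; }
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  const raw = ctx.getImageData(0, 0, canvas.width, canvas.height).data; // option 1: raw pixel data
  const b64 = canvas.toDataURL('image/png');                            // option 2: base64-encoded image
  // POST b64 (or raw) to the server here
}, 200); // ~5 frames per second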
Of course, this is limited by the video formats that are supported by the browser. As well as support for canvas and video.
Generating thumbnails during first render? You'd run into problems with that since:
You can't generate all the frames unless the video is rendered in the video element.
Suppose you generated thumbnails during the first run and want to reuse them on later runs. Base64 data is very long (base64 encoding is roughly a third larger than the raw binary of the image), and a raw pixel data array is width x height x 4 bytes long. The most viable storage candidate is localStorage, which is only 5-10 MB depending on the browser.
There is no way to cache the generated images in the browser cache (there could be a data-URL cache hack that I don't know about).
I suggest you do it on the server instead. It's too much burden and hassle to do on the client side.
I found this page here on Stack Overflow about "Creating Audio using Javascript in <audio>", and this page on how to play audio on multiple channels. The iPhone supports the audio tag and the JavaScript Audio object for playing single-channel audio, but is there a way to play audio on multiple channels?
Maybe I'm overcomplicating this, so here is what I'm trying to do. I want to make a graceful audio player in JavaScript that supports transitioning from one audio file to another. The way I was going to implement this is to incrementally reduce the volume on one channel while incrementally increasing the volume on the other, so I'd get a kind of fade effect. Is there a simpler solution to this using only JavaScript? I guess another approach would be to reduce the volume to a certain point, start the new audio file on the same channel, then increase the volume again. That circumvents the need for fading, but I would like to fade if at all possible.
Is this possible? I know the HTML5 spec isn't finished yet, but is there some kind of workaround that you know of? Do any of you have ideas for another approach?
From what I can tell from this post about playing audio in the Android browser, this isn't supported there yet, but do any of you know if it will support multi-channel audio once the audio tag is supported? Does Opera Mini support this?
This is an old question I know :).
iOS Safari does not support multiple Audio objects playing at the same time. It is also not possible to have a fade-in/out effect on iOS, as the only way to change the volume setting is from the hardware itself. Apple decided to give this ability only to the device user: the volume setting is not writable from JavaScript. It is not even readable (it always returns 1).
You can check out the Safari documentation for iOS for more info.
For Android, to be honest I have no idea.
There's no direct way that I know of to have multiple channels on an audio tag, but check out this blog post on using multiple audio tags to simulate multiple channels: http://www.storiesinflight.com/html5/audio.html
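A minimal sketch of that simulated-channels idea with a timed crossfade (file names and durations are placeholders; as noted above, this won't work on iOS, where the volume is read-only):

const current = new Audio('track1.mp3');
const next = new Audio('track2.mp3');
current.play();

function crossfade(durationMs) {
  const steps = 20;
  let i = 0;
  next.volume = 0;
  next.play();
  const timer = setInterval(function () {
    i++;
    current.volume = Math.max(0, 1 - i / steps); // fade the old track out
    next.volume = Math.min(1, i / steps);        // fade the new track in
    if (i >= steps) { clearInterval(timer); current.pause(); }
  }, durationMs / steps);
}

crossfade(2000); // two-second fade from track1 into track2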
I know this is a total hack but try this trick I came up with...
Go to the page below and type on the home-row keys to play a blues riff (type multiple keys at the same time, etc.).
http://davealger.com/jthump/
The way this works is to create invisible <iframe> elements that each play a sound before the frame is destroyed.
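Roughly like this — a sketch where the note page URL is a placeholder for a page containing an autoplaying <audio> tag:

function playNote(src) {
  const frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = src; // e.g. 'note.html?file=c4.mp3'
  document.body.appendChild(frame);
  setTimeout(function () { frame.remove(); }, 3000); // tear the frame down after the note plays
}

Each call spawns its own frame, so overlapping calls play their sounds simultaneously.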
I know it is a total hack, and I look forward to better HTML5 multi-channel audio support in the future.