I'm building a posture analyser using TensorFlow.js and PoseNet. I've written the code to analyse postures in real time from the webcam. How do I feed a local video file in to be analysed instead?
This is the code that loads the webcam stream. I'm looking for a way to replace this webcam input with a local video file.
// p5.js setup: attach the canvas to the page and start the webcam capture
canvas.parent('videoContainer');
video = createCapture(VIDEO);
video.size(width, height);
To do so, one can use the video tag's event listeners to grab the video frames at a framerate of your choosing, as indicated here. This is done entirely client side.
The other option would be to do it server side: using libraries such as ffmpeg, one can convert the video to images, as done here.
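For the client-side route, a minimal p5.js sketch along these lines should work. It assumes the ml5.js PoseNet wrapper (adapt the wiring if you call the TensorFlow.js PoseNet model directly) and a hypothetical local file path 'assets/sample.mp4' served next to the page:

let video;
let poses = [];

function setup() {
  const canvas = createCanvas(640, 480);
  canvas.parent('videoContainer');

  // createVideo() replaces createCapture(VIDEO); it is still backed by a <video> element
  video = createVideo('assets/sample.mp4', () => {
    video.size(width, height);
    video.loop();
    video.hide(); // we draw the frames onto the canvas ourselves
  });

  // PoseNet just reads frames from the underlying <video> element,
  // so the rest of the wiring is the same as with the webcam
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', results => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  // draw keypoints/skeleton from `poses` here
}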
I am using Hls.js to play a video in my HTML page. I need to build a volume meter that informs the user about the audio level of the video. Since I need to keep video.muted = true, I am wondering if there is any way with Hls.js to extract the audio information from the stream and build a volume meter from it. The goal is to give users feedback without turning the video's volume on.
You can do this easily with the Web Audio API.
Specifically, you'll want a couple of nodes:
MediaElementAudioSourceNode: You will use this to route the audio from your media element (i.e. the video element HLS.js is playing in) into the audio graph.
AnalyserNode: This node analyzes the audio in chunks, giving you frequency data (via FFT) and time domain data. The time domain data is a simplified version of the main stream. You can run a min/max on it to get a value (generally -1.0 to +1.0), and you can use that value in your visualization.
You also need to connect the AnalyserNode to the AudioContext's destination node so the audio is still output in the end, since it is re-routed away from that video element.
Note that this solution isn't particular to HLS. The same method works on any audio/video element, provided that the source data isn't tainted by cross-origin restrictions. Given how HLS.js works, you won't have to worry about that, since the CORS problem is already solved or it wouldn't work at all.
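A minimal sketch of that graph might look like the following. The video and #meter elements and the peak-level calculation are assumptions; adapt them to your page:

const videoEl = document.querySelector('video');   // the element HLS.js is attached to
const audioCtx = new AudioContext();

// Route the element's audio into the graph, through the analyser, and back out
const source = audioCtx.createMediaElementSource(videoEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
analyser.connect(audioCtx.destination);

const samples = new Float32Array(analyser.fftSize);

function drawMeter() {
  analyser.getFloatTimeDomainData(samples);
  // Peak level of the current chunk, roughly 0.0 to 1.0
  let peak = 0;
  for (const s of samples) peak = Math.max(peak, Math.abs(s));
  document.querySelector('#meter').style.width = Math.round(peak * 100) + '%';
  requestAnimationFrame(drawMeter);
}
drawMeter();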
I have a program that plays songs from the server. To make it more efficient, I split the audio file on the server into segments and send them to the client via Ajax as base64-encoded data. The HTML5 native audio player plays each base64 segment, but when it moves to the next segment it pauses briefly before playing. The retrieved segments are stored in IndexedDB for quick access, but there is still a pause in playback. How can I make the program more efficient and fix the audio pause that happens when switching segments?
Is there any other way of appending an audio segment to a currently playing audio source without any pause, using JavaScript?
The Media Source Extensions API can do that, but note that you are just reinventing Range requests and caching, which are exactly what browsers already do when fetching media, and they do it better since they don't add base64 overhead on top.
So the "other way" is to configure your server to accept Range requests, to serve your file as plainly as possible as a single file, and to let the browser do its job.
I am creating an app to trim video where a user can either upload a video file or paste a YouTube URL.
I am able to make the trimmer work for uploaded videos using FFmpeg, but for a YouTube URL I first need to download the video to the local machine and then trim it with FFmpeg.
I am using the youtube-dl Node.js library to save the video file locally. I only allow the user to take a clip of 15 seconds, no more than that, and I have applied validation against videos longer than 60m, yet when I fetch a 1080p video there are two issues.
1. There is no audio in the 1080p video.
2. The video takes too long to download locally; the whole process takes almost 7-8 minutes to complete.
For the first one, I can get the audio and video separately and then merge them, but the second one is where I am stuck.
I have added time logging, and I can see that the trimming takes only about 3-4 seconds; it is the download that takes too long.
Is there any way to make this faster? Or is there any way, with any technology, to download only that part of the video rather than the full video?
I am open to using any technology as long as it is faster than my current implementation.
If this is something related to server configuration, please suggest what would work better.
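For reference, the current pipeline looks roughly like the sketch below. It shells out to the youtube-dl and ffmpeg CLIs rather than the Node library, and the URL, file names and 15-second duration are placeholders:

const { execFile } = require('child_process');
const { promisify } = require('util');
const run = promisify(execFile);

async function downloadAndTrim(url, start = '00:00:00', duration = 15) {
  // 1. Download the whole video locally (this is the slow step)
  await run('youtube-dl', ['-f', 'best', '-o', 'full.mp4', url]);

  // 2. Trim a short clip with FFmpeg; stream copy keeps this down to a few seconds
  await run('ffmpeg', ['-ss', start, '-t', String(duration),
                       '-i', 'full.mp4', '-c', 'copy', 'clip.mp4']);
  return 'clip.mp4';
}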
Is it possible to modify a live video stream with node-canvas?
I'm using BinaryJS to send binary frames over a WebSocket, and I want to modify the video live with node-canvas.
This is the code I'm using: Webcam Binary.JS Demo,
and this is the library I want to use to modify the video with canvas: node-canvas.
Is that possible? If it is not, what is the best way to combine multiple video streams into one video with Node.js?
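If the frames arrive as encoded images (JPEG/PNG) over the socket, drawing on them server side with node-canvas could look roughly like this. The frame format, the overlay, and the function name are all assumptions:

const { createCanvas, loadImage } = require('canvas');

// Decode one incoming frame buffer, draw on it, and re-encode it
async function processFrame(frameBuffer) {
  const img = await loadImage(frameBuffer);          // assumes JPEG or PNG frames
  const canvas = createCanvas(img.width, img.height);
  const ctx = canvas.getContext('2d');

  ctx.drawImage(img, 0, 0);
  // Any live modification goes here, e.g. a simple overlay:
  ctx.fillStyle = 'rgba(255, 0, 0, 0.5)';
  ctx.fillRect(10, 10, 100, 30);

  return canvas.toBuffer('image/jpeg');              // send this frame back out
}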
I am going to develop a chat-based application for mobile that allows video chat. I am using HTML5, JavaScript and PhoneGap. Using PhoneGap, I am able to access the mobile camera, capture a video, save it, and upload it to the server. I have done this for Android, but I need live broadcasting of the video. Is there any solution for that?
Note: it is not an Android native app.
You didn't specify what facility you're currently using for the video capture. AFAIK, the current WebView doesn't yet support WebRTC, the W3C standard that will soon let you access video frames in your HTML5 code. So I'm assuming you're using PhoneGap's navigator.device.capture.captureVideo facility.
On Android, captureVideo creates 3gp files. The problem with 3gp files is that they cannot be streamed or played while capturing: the MOOV atom of the file is required for parsing the video frames in it, and it is written only after all frames in the file have been encoded. So you must stop the recording before you can make any use of the file.
Your best shot in HTML5 is to implement a loop that captures a short clip (3-5 seconds?) of video, then sends it to the server while the next chunk is being captured, as sketched below. The server will need to concatenate the clips into a single file that can be broadcast by a streaming server. This adds several seconds to the latency of the broadcast, and you are quite likely to lose frames in the gap between two separate chunk captures. That might be sufficient for some use cases (security cameras, for example).
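A rough sketch of that loop, using the classic PhoneGap media-capture and FileTransfer APIs (the server URL, chunk length and MIME type are placeholders):

var SERVER_URL = 'https://example.com/upload';   // placeholder endpoint

function captureNextChunk() {
  navigator.device.capture.captureVideo(onCaptured, onError, {
    limit: 1,
    duration: 5   // seconds per chunk; duration support varies by platform
  });
}

function onCaptured(mediaFiles) {
  // Upload the finished chunk while the next one is being recorded
  var ft = new FileTransfer();
  ft.upload(mediaFiles[0].fullPath, encodeURI(SERVER_URL),
            function () {}, onError, { mimeType: 'video/3gpp' });
  captureNextChunk();
}

function onError(err) {
  console.log('capture or upload failed: ' + JSON.stringify(err));
}

captureNextChunk();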
If your application is such that you cannot afford to lose frames, I see no other option but to implement the video capture and streaming in Java, as a PhoneGap Plugin.
See Spydroid http://code.google.com/p/spydroid-ipcamera/
It uses the solution with the special FileDescriptor you found. Basically they let the video encoder write a .mp4 with H.264 to the special file descriptor that calls your code on write. Then they strip off the MP4 header and turn the H.264 NALUs into RTP packets.