I have a video file "MyVideo.mp4" (*.mp4 is not required; it can be in any other format) and I have an audio file "MyAudio.mp3".
Does anyone have any ideas on how to replace the video's audio track with the audio file "MyAudio.mp3"?
Demultiplexing + remultiplexing ("splitting" + "recombining") is something you would want to do on the server side using software like FFmpeg or similar.
Doing this in JavaScript would be non-trivial, as you would have to parse and write the file formats manually (you would need to be able to parse every format you wanted to support).
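For reference, a typical FFmpeg invocation for this (a sketch, assuming the video track should be kept as-is, the audio re-encoded to AAC, and the output trimmed to the shorter of the two inputs; the output filename is arbitrary):

ffmpeg -i MyVideo.mp4 -i MyAudio.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -shortest output.mp4

The -map options pick the video from the first input and the audio from the second, -c:v copy avoids re-encoding the video, and -shortest stops at whichever input ends first.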
Related
Similar to the Pure Data example here (https://www.youtube.com/watch?v=FLK854QyjE4), I'm trying to conceptualise how a file, such as a text file, might be read in JavaScript and played back as glitch audio with the Web Audio API.
What I've tried so far:
Converting a text string to a base64 string and prefixing it with data:audio/mp3;base64, - this results in "No video with supported format and MIME type found".
Converting the text to binary with audio headers added - this just does nothing.
How would I go about this?
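One way to sidestep the codec/MIME problem entirely is to skip containers and headers and treat each byte as a raw PCM sample played through an AudioBuffer. This is only a sketch, not from the original thread; the 8 kHz sample rate, the byte-to-sample mapping and the function name are arbitrary choices for the glitch effect:

var ctx = new AudioContext();

function playBytesAsAudio(text) {
    var bytes = new TextEncoder().encode(text);
    // One mono channel, one sample per byte, at an arbitrary 8 kHz rate.
    var buffer = ctx.createBuffer(1, bytes.length, 8000);
    var channel = buffer.getChannelData(0);
    for (var i = 0; i < bytes.length; i++) {
        channel[i] = bytes[i] / 128 - 1; // map 0..255 to roughly -1..1
    }
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start();
}

Because nothing is decoded, any file contents will "play"; it just sounds like noise shaped by the byte values, which is the glitch effect being described.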
So I use FFmpeg to manipulate my audio files, but there is no way of knowing what the result will sound like before I actually upload the file (HTML form).
Is there a way to edit the audio with a real-time player or something?
I cannot find anything about this, that's why I am asking this question.
$cmd = 'ffmpeg.exe -i path/to/audio.mp3 -filter:a asetrate=54000,atempo=' . $speed . ' output.mp3 2>&1';
exec($cmd, $output);
So can I replace atempo=$speed with real-time input and listen to it on my website before I actually process it?
Not sure if it is even possible with FFmpeg.
A real-time editor does not seem possible with FFmpeg. You can stream in real time, but you cannot push live edits back into your website.
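If a rough client-side preview is enough, the speed change can be approximated in the browser before running the ffmpeg pass. This is only a sketch: it does not reproduce the asetrate/atempo filter chain exactly, and the element id is an assumption:

var preview = document.getElementById('previewAudio');

function previewSpeed(speed) {
    // Let the pitch shift along with the rate, which is closer to what
    // asetrate does; older browsers used mozPreservesPitch / webkitPreservesPitch.
    preview.preservesPitch = false;
    preview.playbackRate = speed; // e.g. 1.2 for 20% faster
    preview.play();
}

Once the user is happy with the preview value, that same number can be posted to the server and substituted into atempo=$speed for the real encode.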
I have the contents of an MP3 file saved as a variable (this isn't meant to be practical, just for a little project of mine), and I wish to play the contents of this variable as an audio file. This would be a simple task with something like Node, but unfortunately I must do this entirely client side.
Please note I cannot just save the content of the string as an MP3 file; I need to be able to play it from a variable.
I have looked into this, but from what I have found, it appears that this cannot be done. If any of you have a solution, I would appreciate hearing it.
This is not very practical, as you're going to get a very high memory footprint within the JS engine and will likely cause unnecessary garbage collection... but it is possible to base64 encode the MP3, which can then be fed into the src attribute of an <audio> tag.
Because it is unrealistic to provide a base64 encoded MP3 in an answer here, I'll provide a Fiddle: https://jsfiddle.net/4t6bg95z/1/
But the gist of the code can be something like:
var audio = document.getElementById('audio');
audio.src = "data:audio/mp3;base64,..."; // This is a base64 encoded string of an MP3 file
window.beep = function() {
    audio.play();
};
Obviously, it is much better practice to provide a URL to the audio source instead, as that's the intended usage of the Audio API.
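If the variable holds raw bytes (an ArrayBuffer or Uint8Array) rather than a base64 string, a Blob object URL avoids the base64 size overhead. A sketch, where mp3Bytes is a hypothetical variable holding the raw MP3 data:

// mp3Bytes is assumed to be an ArrayBuffer or Uint8Array containing the MP3.
var blob = new Blob([mp3Bytes], { type: 'audio/mpeg' });
var audio = document.getElementById('audio');
audio.src = URL.createObjectURL(blob);
audio.play();

The browser still keeps the data in memory, but the string is not inflated by base64 encoding and the <audio> element streams from the Blob as it would from a URL.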
I use FileReader to read a local video file (mp4) so I can display it in a video tag.
I need to cut part of the mp4 file (e.g. from 5 to 10 seconds) and upload it to a server.
My current solution: I upload the whole video file to the server with "from" and "to" parameters, cut it with ffmpeg on the server, upload it to S3 and return the video URL.
Is it possible to do this with only JS/HTML? I found the Blob.slice method but I don't know how to use it to cut out parts of the video.
Thanks!
An mp4 video file is made up of 'atoms', which are like blocks of information or data within the file.
They contain headers and metadata about the tracks in the movie (audio, video, subtitles, etc.) and also the media data itself.
The concepts are straightforward, but an mp4 file is quite involved when you look at one - there is a good example from the Apple developer site here (https://developer.apple.com/library/content/documentation/QuickTime/RM/Fundamentals/QTOverview/QTOverview_Document/QuickTimeOverview.html).
If you take a 'slice' of the mp4 file by simply taking bytes from some point in the file to some other point, you will be missing header information (depending on where you start from) and will also most likely start in the middle of an 'atom'.
Tools like ffmpeg do the hard work to extract and restructure the file when you want to cut part of the video.
There are projects which run ffmpeg in the browser, but I'm not sure how practical or widely adopted they are - this one seems pretty popular anyway:
https://github.com/bgrins/videoconverter.js
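For reference, the server-side cut the question already describes can be done as a stream copy so nothing is re-encoded. A sketch, with placeholder filenames; note that with -c copy the cut snaps to the nearest keyframe, so it is not frame-accurate:

ffmpeg -i input.mp4 -ss 5 -to 10 -c copy clip.mp4

Here -ss 5 and -to 10 select the 5 to 10 second window, and -c copy copies the packets without decoding, which is why the result keeps valid headers while a raw Blob.slice would not.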
I have two videos: one streaming my webcam and the other sharing my desktop screen/window. I need to combine these two media streams into one so that I can save it as an .mp4 file and broadcast it over WebRTC.
I was able to combine the two VIDEO tags (streams) by wrapping them in a DIV tag.
<div id="elementToShare">
<video id="webcamVideo" controls loop autoplay class="webcam">No Support.</video>
<video id="screenshareVideo" controls loop autoplay class="screenshare">No Support.</video>
</div>
Then I used the DIV tag as a canvas to record the two videos as one using RecordRTC library by Muaz Khan.
var elementToShare = document.getElementById('elementToShare');
var canvasRecorder = RecordRTC(elementToShare, {
    type: 'canvas',
    recorderType: CanvasRecorder
});
But the problem with this approach was that I was unable to record the audio stream from my webcam along with the canvas, for which I used ffmpeg_asm.js. As that JS file is 18 MB in size, it takes a lot of time to load and to process the video file.
To my knowledge, WebRTC is still in its early stages, and I hope something more efficient will come along to achieve this kind of thing.
NOTE: I was able to achieve the above functionality only in Google Chrome, as at this stage Mozilla Firefox provided limited support, whereas Safari provided no WebRTC support at all, so it was out of the picture from the beginning.
EDIT 1: Serving the above application with Node.js as the server has improved performance and video processing capabilities. For example, ffmpeg_asm.js is unable to process a video bit rate larger than 2200k without Node.js.
Are you going to combine the streams in real time?
If so, you need an MCU which will merge these two streams into one stream and record it for you.
As an option, you can record the streams on the server side and then mix them using ffmpeg.
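A sketch of that server-side mix, assuming the two recordings arrive as screen.webm and webcam.webm and the webcam should appear as a small picture-in-picture overlay (the filenames, overlay size and output codecs are assumptions):

ffmpeg -i screen.webm -i webcam.webm \
    -filter_complex "[1:v]scale=320:-2[cam];[0:v][cam]overlay=main_w-overlay_w-10:main_h-overlay_h-10[v]" \
    -map "[v]" -map 1:a -c:v libx264 -c:a aac mixed.mp4

The filter scales the webcam video down, overlays it in the bottom-right corner of the screen capture, and the -map options take the composited video plus the webcam's audio into a single mp4.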
Found this in a comment on another question that I just responded to with a complete proof-of-concept solution. For anyone who finds this: a solution can be found here.
Note: the code there uses video/webm and the VP9 codec; those lines can easily be replaced to generate a video/mp4 file using the H.264 codec instead :)
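For anyone who lands here without that link, the general client-side approach looks roughly like the following. This is only a sketch, not the linked proof of concept: the element ids, the 30 fps capture rate and the overlay layout are assumptions, and it presumes the webcam video element was fed its stream via srcObject:

var webcam = document.getElementById('webcamVideo');
var screenShare = document.getElementById('screenshareVideo');

var canvas = document.createElement('canvas');
canvas.width = 1280;
canvas.height = 720;
var ctx = canvas.getContext('2d');

(function draw() {
    // Screen share fills the canvas; the webcam is overlaid in the corner.
    ctx.drawImage(screenShare, 0, 0, canvas.width, canvas.height);
    ctx.drawImage(webcam, canvas.width - 320, canvas.height - 180, 320, 180);
    requestAnimationFrame(draw);
})();

// Combine the canvas video track with the webcam's audio track into one stream.
var mixed = new MediaStream(
    canvas.captureStream(30).getVideoTracks()
        .concat(webcam.srcObject.getAudioTracks())
);

var recorder = new MediaRecorder(mixed, { mimeType: 'video/webm;codecs=vp9' });
var chunks = [];
recorder.ondataavailable = function(e) { chunks.push(e.data); };
recorder.onstop = function() {
    var blob = new Blob(chunks, { type: 'video/webm' });
    // The blob can now be uploaded or handed to a WebRTC / server-side ffmpeg pipeline.
};
recorder.start();

Because the mixed result is a normal MediaStream, it can also be attached to an RTCPeerConnection for broadcasting instead of (or in addition to) being recorded.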