I am not able to play an MP4 (HD) video in the UI when it is received from the Django backend. I am using plain JavaScript on the UI and Django on the backend. Here is the backend code snippet:
file = FileWrapper(open(path, 'rb')) #MP4 file path is media/1648477263566_28-03-2022 19:51:05_video.mp4
response = HttpResponse(file, content_type=content_type)
response['Content-Disposition'] = 'attachment; filename=my_video.mp4'
return response
The video plays perfectly in Postman but can't be played on the UI screen. The UI code is below:
function getUploadedImageAndVideo(combined_item_id) {
    var request = {};
    request["combined_item_id"] = combined_item_id;
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            var vdata = this.responseText;
            var src1 = document.getElementById('src1');
            src1.setAttribute("src", "data:video/mp4;base64," + vdata);
            //src1.setAttribute("src", vdata); // doesn't work either
            var src2 = document.getElementById('src2');
            src2.setAttribute("src", "data:video/mp4;base64," + vdata);
            //src2.setAttribute("src", vdata); // doesn't work either
            return;
        }
    }
    xhttp.open("POST", port + host + "/inventory_apis/getUploadedImageAndVideo", true);
    xhttp.setRequestHeader("Accept", "video/mp4");
    xhttp.setRequestHeader("Content-type", "application/json");
    xhttp.setRequestHeader("X-CSRFToken", getToken());
    xhttp.send(JSON.stringify(request));
}
On the HTML side:
<video controls="">
<source type="video/webm" src="" id="src1">
<source type="video/mp4" src="" id="src2">
</video>
Network Response (200 OK) of function call is: "ftypmp42 ... isommp42 ... mdat ... ó! ... °}b ... $¥Ð ..." very long text of the video.
I am not able to play the video on the UI side. Please help.
Browsers used: Chrome and Firefox.
*An alternative is to play directly from the media URL, but here I want to edit the video on the backend on purpose, so I'm stuck on this issue.
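For reference, binary XHR responses are usually handled as a Blob plus an object URL rather than via responseText, which mangles raw bytes (and is not base64). A minimal sketch against the same endpoint, with error handling elided:

var xhttp = new XMLHttpRequest();
xhttp.responseType = "blob"; // receive the MP4 as binary, not text
xhttp.onload = function () {
    if (xhttp.status == 200) {
        // Wrap the binary response in an object URL and point the <source> at it.
        var url = URL.createObjectURL(xhttp.response);
        document.getElementById('src2').setAttribute("src", url);
        document.querySelector('video').load(); // re-evaluate the <source> children
    }
};
xhttp.open("POST", port + host + "/inventory_apis/getUploadedImageAndVideo", true);
xhttp.setRequestHeader("Content-type", "application/json");
xhttp.setRequestHeader("X-CSRFToken", getToken());
xhttp.send(JSON.stringify({ combined_item_id: combined_item_id }));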
Looking at "ftypmp42 ... isommp42 ... mdat ... ó! ... °}b ... $¥Ð ..."
An MP4 file is divided into two main parts.
The first is MOOV, the metadata (which must be processed before playback can begin). For example, the metadata records the byte positions of all the individual frames; without it the decoder cannot begin playback.
The second is MDAT, the actual media data (the audio/video payload without headers, since that information now lives in MOOV instead).
Your dump shows mdat appearing first, so the player must wait for all the MDAT bytes to pass through before it reaches the metadata. In other words, your file must be completely downloaded before it can play.
Solution:
Use a tool to move the MOOV atom to the front of the file. You can try command-line tools like FFmpeg or MP4Box, or an app like HandBrake.
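For example, FFmpeg can do this as a pure remux, without re-encoding (input.mp4/output.mp4 are placeholder names):

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4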
Related
First of all, hello everyone.
I need to archive videos on Crunchyroll for a project, but no matter how much I reverse engineer, I can't find the main source file.
To begin with, I have a Blob-sourced player like this:
<video id="player0" playsinline="" src="blob:https://static.crunchyroll.com/3740...db01b2" style="display: flex; width: 100%; height: 100%;"></video>
The first problem is that the video is streamed instead of being sent directly.
So this solution doesn't work for this case.
<a href="blob:https://static.crunchyroll.com/3740...db01b2" download>Download</a>
After that I realized that Crunchyroll has developed even stronger protection than YouTube, because on YouTube I could get the source video by playing with the range parameter.
Then I tried to pull the content with JavaScript, but I still couldn't get a result.
var xhr = new XMLHttpRequest();
xhr.responseType = 'blob';
xhr.onload = function () {
    var recoveredBlob = xhr.response;
    var reader = new FileReader();
    reader.onload = function () {
        var BlobAsDataURL = reader.result;
        window.location = BlobAsDataURL;
    };
    reader.readAsDataURL(recoveredBlob);
};
xhr.open('GET', 'blob:https://static.crunchyroll.com/893...2960');
xhr.send();
When I try to use it on the Crunchyroll page, I get either a cross-origin error or a "file not available" error.
Then I thought of trying to stream it via VLC player. But when I looked at the Network tab, I saw that the broadcast is delivered in an extremely complex way, not in m3u8 format, so I gave up on that without trying.
Does anyone know what I can do?
I'm trying to optimize the loading times of audio files in a project where we need to use AudioBufferSourceNode, which requires the audio buffer to be fully loaded.
But is it possible to load, say, the first 10 minutes of audio first and play it while downloading the other part in the background, and later create another source node loaded with the second part of the audio file?
My current implementation loads all of the audio first, which isn't great as it takes time. My files are 60-70 MB.
function getData() {
    source = audioCtx.createBufferSource();
    var request = new XMLHttpRequest();
    request.open('GET', 'viper.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = function() {
        var audioData = request.response;
        audioCtx.decodeAudioData(audioData, function(buffer) {
            source.buffer = buffer;
            source.connect(audioCtx.destination);
            source.loop = true;
        },
        function(e) { console.log("Error with decoding audio data" + e.err); });
    };
    request.send();
}
I think you can achieve what you want by using the WebCodecs API (which is currently only available in Chrome), but it requires some plumbing.
To get the file as a stream you could use fetch() instead of XMLHttpRequest.
Then you would need to demux the encoded file to get the raw audio packets and decode them with an AudioDecoder. With a bit of luck it will output AudioData objects; these objects expose the raw sample data, which can then be used to fill an AudioBuffer.
There are not many WebCodecs examples available yet. I think the example showing how to decode an MP4 is the closest to your use case so far.
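A rough sketch of that pipeline, assuming the demuxing step (not shown; e.g. mp4box.js or an Ogg parser) has already produced an array of encoded packets with timestamps. decodePackets and the packet shape are made up for illustration:

const audioCtx = new AudioContext();

// packets: [{ data: ArrayBuffer, timestamp: number }, ...] from the demuxer
function decodePackets(packets, sampleRate, channels) {
    return new Promise((resolve, reject) => {
        const chunks = []; // decoded AudioData objects
        const decoder = new AudioDecoder({
            output: (data) => chunks.push(data),
            error: reject,
        });
        decoder.configure({ codec: 'opus', sampleRate: sampleRate, numberOfChannels: channels });
        for (const p of packets) {
            decoder.decode(new EncodedAudioChunk({ type: 'key', timestamp: p.timestamp, data: p.data }));
        }
        decoder.flush().then(() => {
            // Copy the raw samples into one AudioBuffer, channel by channel.
            const frames = chunks.reduce((n, c) => n + c.numberOfFrames, 0);
            const buffer = audioCtx.createBuffer(channels, frames, sampleRate);
            let offset = 0;
            for (const c of chunks) {
                const tmp = new Float32Array(c.numberOfFrames);
                for (let ch = 0; ch < channels; ch++) {
                    c.copyTo(tmp, { planeIndex: ch, format: 'f32-planar' });
                    buffer.copyToChannel(tmp, ch, offset);
                }
                offset += c.numberOfFrames;
                c.close();
            }
            resolve(buffer);
        });
    });
}

// Usage: decodePackets(packets, 48000, 2).then(buffer => { /* feed an AudioBufferSourceNode */ });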
I'm trying to build a web app that records audio in the browser and sends it to a Django API every 3 seconds for analysis (emotion recognition from voice). I'm using MediaRecorder to record the audio, but only noise is saved in the wave file.
I'm sending the recorded audio (as a blob) to the Django API; on receiving it at the backend, I save it as a wav file.
I'm sending the recorded audio like this:
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => { audio_handler(stream) });
var audio_chunks = [];
audio_handler = function(stream) {
    rec = new MediaRecorder(stream, { mimeType: 'audio/webm', codecs: "opus" });
    rec.ondataavailable = function(e) {
        audio_chunks.push(e.data);
    };
}

//on rec.stop()
var blob = new Blob(audio_chunks, { 'type': 'audio/wav; codecs=opus' });
console.log(blob);
var xhttp = new XMLHttpRequest();
xhttp.open("POST", "http://localhost:8000/er/", true);
var data = new FormData();
data.append('data', blob, 'audio_blob');
xhttp.send(data);
xhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
        console.log(this.responseText);
    }
};
Saving on the Django backend as:
from django.http import JsonResponse
import wave

def get_emotion(request):
    print(request.FILES.get('data'))
    audio_data = request.FILES.get('data')
    print(type(audio_data))
    print(audio_data.size)
    audio = wave.open('test.wav', 'wb')
    audio.setnchannels(1)
    audio.setnframes(1)
    audio.setsampwidth(1)
    audio.setframerate(16000)
    blob = audio_data.read()
    audio.writeframes(blob)  # on playing 'test.wav' only noise can be heard
    return JsonResponse({})
Currently the saved audio file just has some noise in it, whereas I expect the saved wave file to have the same content as the audio spoken while recording.
Please suggest if there is any other way to do the same thing (record audio in the browser and send it to a Django API, to save it as an audio file there).
If any more information is needed, feel free to ask. Thank you!
The WAV file format doesn't support the Opus codec.
For the Opus codec you need to use the WebM file format.
So you need to change this
new Blob(audio_chunks, {'type':'audio/wav; codecs=opus'});
to
new Blob(audio_chunks, {'type':'audio/webm; codecs=opus'});
or
new Blob(audio_chunks, { 'type' : 'audio/wav; codecs=MS_PCM' }); // if it is supported
Make sure the file you save the blob to on the server uses the same container format as the one you send.
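Putting that together, a minimal client-side sketch under those assumptions (the endpoint mirrors the question; the 3-second stop is just an example):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var audio_chunks = [];
    var rec = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
    rec.ondataavailable = function (e) { audio_chunks.push(e.data); };
    rec.onstop = function () {
        // Blob type and filename extension both say WebM, matching the recorded data.
        var blob = new Blob(audio_chunks, { type: 'audio/webm;codecs=opus' });
        var data = new FormData();
        data.append('data', blob, 'audio_blob.webm');
        var xhttp = new XMLHttpRequest();
        xhttp.open("POST", "http://localhost:8000/er/", true);
        xhttp.send(data);
    };
    rec.start();
    setTimeout(function () { rec.stop(); }, 3000); // stop after 3 seconds, as in the question
});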
I faced the same issue. I advise you to take the original parameters from the uploaded audio while saving, instead of hard-coding arbitrary values:
obj = wave.open(audio_data, 'r')  # note: this only works if the upload really is a WAV file
audio = wave.open('/../test.wav', 'wb')
audio.setnchannels(obj.getnchannels())
audio.setnframes(obj.getnframes())
audio.setsampwidth(obj.getsampwidth())
audio.setframerate(obj.getframerate())
blob = audio_data.read()
audio.writeframes(blob)
This sets the actual channel count, frame count, sample width, etc. on the audio you are writing, without introducing noise into your .wav file. Make sure you are using at least Django==1.8.19.
I am trying to do the following:
On the server I encode H264 packets into a WebM (MKV) container structure, so that each cluster gets a single frame packet. Only the first data chunk is different, as it contains something called the Initialization Segment. Here it is explained quite well.
Then I stream those clusters one by one in a binary stream via WebSocket to a browser, which is Chrome.
It probably sounds weird that I use the H264 codec and not VP8 or VP9, which are the native codecs for the WebM video format. But it appears that the HTML video tag has no problem playing this sort of container: if I just write the whole stream to a file and pass it to video.src, it plays fine. But I want to stream it in real time; that's why I am breaking the video into chunks and sending them over a WebSocket.
On the client I am using the MediaSource API. I have little experience with Web technologies, but I found that it's probably the only way to go in my case.
And it doesn't work. I am getting no errors, the stream runs fine, and the video object emits no warnings or errors (checked via the developer console).
The client side code looks like this:
<script>
$(document).ready(function () {
    var sourceBuffer;
    var player = document.getElementById("video1");
    var mediaSource = new MediaSource();
    player.src = URL.createObjectURL(mediaSource);
    mediaSource.addEventListener('sourceopen', sourceOpen);

    // array with incoming segments:
    var mediaSegments = [];
    var ws = new WebSocket("ws://localhost:8080/echo");
    ws.binaryType = "arraybuffer";

    player.addEventListener("error", function (err) {
        $("#id1").append("video error " + err.error + "\n");
    }, false);
    player.addEventListener("playing", function () {
        $("#id1").append("playing\n");
    }, false);
    player.addEventListener("progress", onProgress);

    ws.onopen = function () {
        $("#id1").append("Socket opened\n");
    };

    function sourceOpen() {
        sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
    }

    function onUpdateEnd() {
        if (!mediaSegments.length) {
            return;
        }
        sourceBuffer.appendBuffer(mediaSegments.shift());
    }

    var initSegment = true;
    ws.onmessage = function (evt) {
        if (evt.data instanceof ArrayBuffer) {
            var buffer = evt.data;
            // the first segment is always the 'initSegment';
            // it must be appended to the buffer first
            if (initSegment == true) {
                sourceBuffer.appendBuffer(buffer);
                sourceBuffer.addEventListener('updateend', onUpdateEnd);
                initSegment = false;
            } else {
                mediaSegments.push(buffer);
            }
        }
    };
});
</script>
I also tried different profile codes for the MIME type, even though I know that my codec is high profile. I tried the following profiles:
avc1.42E01E baseline
avc1.58A01E extended profile
avc1.4D401E main profile
avc1.64001E high profile
In some examples I found from 2-3 years ago I have seen developers using type="video/x-matroska", but probably a lot has changed since then, because now even video.src doesn't handle this sort of MIME type.
Additionally, to make sure the chunks I am sending through the stream are not corrupted, I opened a local streaming session in VLC player, and it played them progressively with no issues.
The only thing I suspect is that the MediaSource implementation doesn't know how to handle this sort of hybrid container. But then I wonder why the video object plays such a video fine. Am I missing something in my client-side code? Or does the MediaSource API indeed not support this type of media?
PS: For those curious why I am using the MKV container and not, for example, MPEG-DASH: the answer is container simplicity, data-writing speed, and size. EBML structures are very compact and easy to write in real time.
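As an aside, one fragile spot in append-queue code like the above is that a segment arriving while the SourceBuffer is idle gets queued but nothing triggers the next append. A common pattern is to try to drain the queue on every message as well as on every updateend; a minimal sketch, reusing the question's variable names:

function appendNext() {
    // Only one append may be in flight at a time.
    if (sourceBuffer && !sourceBuffer.updating && mediaSegments.length) {
        sourceBuffer.appendBuffer(mediaSegments.shift());
    }
}

sourceBuffer.addEventListener('updateend', appendNext);
ws.onmessage = function (evt) {
    if (evt.data instanceof ArrayBuffer) {
        mediaSegments.push(evt.data); // the init segment is simply the first queued buffer
        appendNext();                 // kick off an append if the buffer is idle
    }
};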
I'm going to just list everything I know since this could all be important.
I bought a simple H264 streaming player from Amazon to test. Specifically this one: http://www.amazon.com/OPR-NH100-Encoder-Broadcast-Recording-replace/dp/B00NIFJYEC/
The settings are as follows (screenshot of the encoder settings omitted); as you can see there, I'm using the Main Profile for H264. In Apache2 I'm proxying the connection through localhost. So on Ubuntu in /etc/apache2/sites-enabled/000-default.conf I have:
<VirtualHost *:80>
    <Proxy "*">
        Allow from all
    </Proxy>
    ProxyPass /hdmi http://192.168.1.168/hdmi retry=0
    ProxyPassReverse /hdmi http://192.168.1.168/hdmi
</VirtualHost>
This is fairly basic. Essentially this bypasses any cross-domain issues, since the stream appears to be local.
At this point if I open http://127.0.0.1/hdmi with VLC Media Player's network stream viewer I can see the stream running just fine. (I can play a DVD or stream Cable which both work fine).
So then I tried to use this example here for Media Source Extensions. It works fine with the mp4 file so I modified it to say:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
</head>
<body>
    <video controls></video>
    <script>
        var video = document.querySelector('video');
        var assetURL = 'hdmi';
        // Need to be specific for Blink regarding codecs
        // ./mp4info frag_bunny.mp4 | grep Codec
        var mimeCodec = 'video/mp4; codecs="avc1.4D402A, mp4a.40.2"';

        if ('MediaSource' in window && MediaSource.isTypeSupported(mimeCodec)) {
            var mediaSource = new MediaSource();
            //console.log(mediaSource.readyState); // closed
            video.src = URL.createObjectURL(mediaSource);
            mediaSource.addEventListener('sourceopen', sourceOpen);
        } else {
            console.error('Unsupported MIME type or codec: ', mimeCodec);
        }

        function sourceOpen(_) {
            //console.log(this.readyState); // open
            var mediaSource = this;
            var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
            fetchAB(assetURL, function (buf) {
                sourceBuffer.addEventListener('updateend', function (_) {
                    mediaSource.endOfStream();
                    video.play();
                    //console.log(mediaSource.readyState); // ended
                });
                sourceBuffer.appendBuffer(buf);
            });
        }

        function fetchAB(url, cb) {
            console.log(url);
            var xhr = new XMLHttpRequest();
            xhr.open('get', url);
            xhr.responseType = 'arraybuffer';
            xhr.onload = function () {
                cb(xhr.response);
            };
            xhr.send();
        }
    </script>
</body>
</html>
In Firefox it accepts the codec information as valid and makes a successful network connection to the hdmi stream. The issue is that it doesn't play; it just spins when I press play, with no error.
I thought maybe the codec was wrong. From the above code I'm using:
video/mp4; codecs="avc1.4D402A, mp4a.40.2"
I decided to check to make sure this was right so I ran:
ffprobe -show_streams http://127.0.0.1/hdmi
Input #0, mpegts, from 'http://127.0.0.1/hdmi':
Duration: N/A, start: 1780.838256, bitrate: 130 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0.0[0x100]: Video: h264 (Main), yuv420p, 1920x1080, 90k tbr, 90k tbn, 180k tbc <--- Main Profile so 4D
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 130 kb/s
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 <--- AVC so avc1
codec_type=video
codec_time_base=1/180000
codec_tag_string=[27][0][0][0]
codec_tag=0x001b
width=1920
height=1080
has_b_frames=0
pix_fmt=yuv420p
level=42 <-- Level is 42 which is 0x2A
id=0x100
r_frame_rate=180000/2
avg_frame_rate=0/0
time_base=1/90000
start_time=1780.911511
duration=N/A
[/STREAM]
[STREAM]
index=1
codec_name=aac <---- AAC = mp4a
codec_long_name=Advanced Audio Coding
codec_type=audio
codec_time_base=1/48000
codec_tag_string=[15][0][0][0]
codec_tag=0x000f
sample_rate=48000.000000
channels=2
bits_per_sample=0
id=0x101
r_frame_rate=0/0
avg_frame_rate=375/8
time_base=1/90000
start_time=1780.838256
duration=N/A
[/STREAM]
This clearly shows the Main Profile being used, just like I set in the streamer's configuration settings. It also shows Level 42 being used, which I believe translates to avc1.4D402A: the codec string is avc1.PPCCLL, where PP is the profile_idc (0x4D = Main), CC is the constraint flags (0x40), and LL is the level_idc (0x2A = 42, i.e. Level 4.2). Am I reading this wrong? (I've tried tons of other combinations with no change.)
The audio is AAC, so mp4a is correct, but I'm not confident about the 40.2 part. I read a few examples online saying to use mp4file, so I used VLC to save the raw stream and then ran:
mp4file --dump networkhdmi.mp4
But that fails, since the stream doesn't start with a valid non-delta frame (it's a mid-stream capture). Here's the raw stream file if you need to test and see the problem:
http://sirisian.com/randomfiles/networkhdmi.mp4
I even tried loading that raw file in the browser instead of using the stream, and it failed.
I'm starting to think that Firefox's MP4 decoder doesn't work if the first frame is a delta frame. When ffprobe analyzes the stream, it shows:
non-existing PPS referenced
non-existing PPS 1 referenced
decode_slice_header error
no frame!
Could this be tripping the browser up? VLC doesn't seem bothered by it.
I think that networkhdmi.mp4 file is a really good testcase since it's exactly what's coming from the stream and shows the problem perfectly. In the code above just change:
var assetURL = 'hdmi';
to:
var assetURL = 'networkhdmi.mp4';
Here's an example on my site:
http://sirisian.com/randomfiles/networkhdmi.html
If you need any other information, just ask in a comment and I'll supply it. I've been working on this for two days with no progress. (Also, in case anyone asks: I've tried the H264 High Profile and it doesn't work in the browser either. I also tried variable bitrate, and tried the latest versions of both Chrome and Firefox.)
One of my friends commented:
the browser cannot process such streams with MSE. It requires completely independent MP4 chunks, like what you would get from an MPEG-DASH stream or similar. An MP4 chunk should start with the AVC extradata (AVCC or whatever), then an IDR keyframe, and then delta frames
There are JavaScript libraries that transmux MPEG2-TS to MP4 so it can be played by MSE; however, they are all based on HLS, where the .ts files are separate chunks, and each one is transmuxed when its download is done. What you will have to do is detect IDRs yourself, then stop and transmux everything you have so far and feed that to MSE, continuously. None of them support progressive HTTP download of TS
This seems right? I tried HLS, which doesn't work with this stream. My friend mentioned:
HLS is defined as an m3u8 which has relative URLs to TS fragments, while you have a CONTINUOUS TS; and MSE accepts only MP4, and MP4 is not streamable
This sounds like a lot of work to get this functioning. Since I have a proxy, I guess I can try to use ffmpeg with nginx-rtmp or something. I'm going to go investigate and test.
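For anyone attempting the transmux route described above, a minimal sketch using the mux.js library; the plumbing that delivers raw TS bytes (tsBytes) and the open MSE sourceBuffer are assumed:

var transmuxer = new muxjs.mp4.Transmuxer();

transmuxer.on('data', function (segment) {
    // Concatenate the fMP4 init segment with the media segment and append it.
    var bytes = new Uint8Array(segment.initSegment.byteLength + segment.data.byteLength);
    bytes.set(segment.initSegment, 0);
    bytes.set(segment.data, segment.initSegment.byteLength);
    sourceBuffer.appendBuffer(bytes);
});

// Feed TS bytes as they arrive; flush() forces out a finished fMP4 segment.
transmuxer.push(tsBytes);
transmuxer.flush();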