I'm using HTML5 Media Source Extensions (MSE) to stream a video using DASH. I created my media segments with MP4Box from an MP4 file containing two video tracks. So what I have is a single initialisation segment with moov->sidx boxes and multiple media segments (moof->mdat) containing both video tracks. If I push them to the SourceBuffer with the appendBuffer function, MSE decodes and shows only the first video track (I assume the data from the second video track is just discarded).
Here are the relevant parts from my code:
sourceBuffer = mediaSource.addSourceBuffer(stats.mimeType); // mime type: video/mp4
...
// after downloading a media segment, append its contents 'data' to the sourceBuffer
sourceBuffer.appendBuffer(new Uint8Array(data));
So my question is: is it possible to control the SourceBuffer in such a way that the client can select which track to decode? I suppose that splitting the video tracks into different adaptation sets (creating separate MP4 files with a single video track in each) could be a solution, but I'm not interested in that approach.
Thank you guys.
Okay, I found a way to do this. Using the videoTracks attribute, we can access the video tracks and change their selected attribute, so it was pretty simple to do.
Here is an example of how to toggle the tracks after the third segment has been downloaded:
if (segmentCnt == 3 && sourceBuffer.videoTracks.length == 2) {
    console.log('tracks cnt: ' + sourceBuffer.videoTracks.length);
    for (var i = 0; i < sourceBuffer.videoTracks.length; i++) {
        var trackID = sourceBuffer.videoTracks[i].id;
        var trackSelected = sourceBuffer.videoTracks[i].selected;
        console.log('trackID: ' + trackID + ' selected: ' + trackSelected);
        sourceBuffer.videoTracks[i].selected = !trackSelected;
    }
}
I'm planning out a project for a very simple audio streamer that would take a mysql database list of songs hosted on my local server, and then randomly stream a song from the list constantly throughout the day. This would be attached to a very simple frontend page that would display the name of the currently playing song.
I know how to get a random file from the database and stream it using a Javascript/HTML front end, but I'm getting lost on how to detect when a song is over - and then load the next song. Is there a simple way to do this? Can someone point me in the right direction?
EDIT: To elaborate on the frontend, I'd most likely be serving up the filename/location via PHP into the HTML5 audio tag (again I want this as simple as possible). I'm thinking the simplest way would be to play the audio file and then refresh the page and play the next file using the same audio tag and new filename - I just don't know how to cause that event to happen at the end of the song. Alternatively I could use a Javascript based player like JPlayer (http://www.jplayer.org/) if I need to.
I'm guessing there's a callback function of some kind I can use in conjunction with jQuery?
The HTML5 audio tag has an "onended" event which fires when the media reaches its end, but since you wish to keep playing, you should use the "onwaiting" event, which also fires when the media reaches its end but keeps the element ready to accept a new track/data.
You can then use the XMLHttpRequest object to query for the next track to play, e.g.
<script type="text/javascript">
function getNextTrack(e) {
    var xhttp = new XMLHttpRequest();
    // synchronous request, so responseXML is available right after send()
    xhttp.open("GET", "next_track.php", false);
    xhttp.send();
    var playback = xhttp.responseXML.childNodes[0];
    for (var i = 0; i < playback.childNodes.length; ++i) {
        if (playback.childNodes[i].nodeName != 'track') continue;
        var value = playback.childNodes[i].childNodes[0].nodeValue;
        e.currentTarget.src = value;
        break;
    }
}
</script>
<audio id="player" onwaiting="getNextTrack(event)" src="first_track.ogg"></audio>
The xml would be in the form of:
<?xml version="1.0" encoding="UTF-8" ?>
<playback>
    <track>next_song.ogg</track>
</playback>
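For what it's worth, the standard event for "media reached its end" is ended; here is a hedged alternative sketch using it with an asynchronous request, assuming a next_track.php that returns just the next file name as plain text:

const player = document.getElementById('player');

player.addEventListener('ended', async () => {
    // assumption: next_track.php returns the next file name as plain text
    const response = await fetch('next_track.php');
    player.src = (await response.text()).trim();
    player.play();
});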
I'm building an application using PizzicatoJS + HowlerJS. Those libraries essentially allow me to play multiple audio files at the same time. Imagine 4 audio tracks, with each track containing an instrument like guitar, bass, drums, vocals, etc.
Everything plays fine when using PizzicatoJS's Group functionality or running a forEach loop on all my Howl sounds and firing .play(). However, I would like to download the final resulting sound I am hearing from my speakers. Any idea on how to approach that?
I looked into OfflineAudioContext, but I am unsure on how to use it to generate an audio file. It looks like it needs an Audio source like an <audio> tag. Is what I'm trying to do possible? Any help is appreciated.
I think the OfflineAudioContext can help with your use case.
Let's say you want to create a file with a length of 10 seconds. It should contain one sound playing from the start up to second 8. And there is also another sound which is supposed to start at second 5 and should last until the end. Both sounds are AudioBuffers (named soundBuffer and anotherSoundBuffer) already.
You could arrange and combine the sounds as follows.
const sampleRate = 44100;
const offlineAudioContext = new OfflineAudioContext({
    length: sampleRate * 10,
    sampleRate
});

// the AudioBufferSourceNode constructor takes the context as its first argument
const soundSourceNode = new AudioBufferSourceNode(offlineAudioContext, {
    buffer: soundBuffer
});

soundSourceNode.start(0);
soundSourceNode.stop(8);
soundSourceNode.connect(offlineAudioContext.destination);

const anotherSoundSourceNode = new AudioBufferSourceNode(offlineAudioContext, {
    buffer: anotherSoundBuffer
});

anotherSoundSourceNode.start(5);
anotherSoundSourceNode.stop(10);
anotherSoundSourceNode.connect(offlineAudioContext.destination);

offlineAudioContext
    .startRendering()
    .then((audioBuffer) => {
        // save the resulting buffer as a file
    });
Now you can use a library to turn the resulting AudioBuffer into an encoded audio file. One library that does this, for example, is audiobuffer-to-wav.
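If you would rather avoid the dependency, here is a minimal hand-rolled sketch of the same idea, assuming 16-bit PCM output; the function name audioBufferToWavBlob is illustrative:

function audioBufferToWavBlob(audioBuffer) {
    const numChannels = audioBuffer.numberOfChannels;
    const sampleRate = audioBuffer.sampleRate;
    const numFrames = audioBuffer.length;
    const bytesPerSample = 2; // 16-bit PCM
    const dataSize = numFrames * numChannels * bytesPerSample;
    const buffer = new ArrayBuffer(44 + dataSize);
    const view = new DataView(buffer);
    const writeString = (offset, str) => {
        for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
    };

    // standard 44-byte WAV header
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                                        // fmt chunk size
    view.setUint16(20, 1, true);                                         // format: PCM
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
    view.setUint16(32, numChannels * bytesPerSample, true);              // block align
    view.setUint16(34, 16, true);                                        // bits per sample
    writeString(36, 'data');
    view.setUint32(40, dataSize, true);

    // interleave channels and convert float [-1, 1] samples to 16-bit integers
    let offset = 44;
    for (let frame = 0; frame < numFrames; frame++) {
        for (let channel = 0; channel < numChannels; channel++) {
            const sample = Math.max(-1, Math.min(1, audioBuffer.getChannelData(channel)[frame]));
            view.setInt16(offset, sample < 0 ? sample * 0x8000 : sample * 0x7FFF, true);
            offset += 2;
        }
    }
    return new Blob([buffer], { type: 'audio/wav' });
}

You could call something like this inside the startRendering() callback above and hand the resulting Blob to a download link via URL.createObjectURL.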
I already looked at this question -
Concatenate parts of two or more webm video blobs
And tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource -- (without modifications) in hopes of transforming the blobs into ArrayBuffers and appending those to a SourceBuffer via the MediaSource Web API, but even the sample code wasn't working in my Chrome browser, with which it is said to be compatible.
The crux of my problem is that I can't combine multiple webm blob clips into one without incorrect playback after the first time it plays. To go straight to the problem, please scroll to the line after the first two chunks of code; for background, continue reading.
I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.
I am using the MediaRecorder Web API to record video on Chrome/Firefox. (Side question - is there any other way, besides Flash, that I can record video/audio via webcam and mic? Because MediaRecorder is not supported on non-Chrome/Firefox user agents.)
navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMedia)
    .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
    recording = true;
    theStream = stream;
    vid.src = URL.createObjectURL(theStream);
    try {
        recorder = new MediaRecorder(stream);
    } catch (e) {
        console.error('Exception while creating MediaRecorder: ' + e);
        return;
    }
    theRecorder = recorder;
    recorder.ondataavailable = (event) => {
        tempScene.push(event.data);
    };
    theRecorder.start(100);
}

function finishRecording() {
    recording = false;
    theRecorder.stop();
    theStream.getTracks().forEach(track => { track.stop(); });
    while (tempScene[0].size != 1) {
        tempScene.splice(0, 1);
    }
    console.log(tempScene);
    scenes.push(tempScene);
    tempScene = [];
}
The function finishRecording gets called and a scene (an array of blobs of mimetype 'video/webm') gets saved to the scenes array. After it gets saved, the user can then record and save more scenes via this process. He can then view a certain scene using the following chunk of code.
function showScene(sceneNum) {
    var sceneBlob = new Blob(scenes[sceneNum], {type: 'video/webm; codecs=vorbis,vp8'});
    vid.src = URL.createObjectURL(sceneBlob);
    vid.play();
}
In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and pointed to by the video's src attribute, so -
[blob, blob, blob] => sceneBlob (an object, not array)
Up until this point everything works fine and dandy. Here is where the issue starts.
I try to merge all the scenes into one by combining the blob arrays for each scene into one long blob array. The point of this functionality is so that the user can order the scenes however he/she deems fit and so he can choose not to include a scene. So they aren't necessarily in the same order as they were recorded in, so -
scene 1: [blob-1, blob-1] scene 2: [blob-2, blob-2]
final: [blob-2, blob-2, blob-1, blob-1]
and then I make a blob of the final blob array, so -
final: [blob, blob, blob, blob] => finalBlob
The code is below for merging the scene blob arrays
function mergeScenes() {
    scenes[scenes.length] = [];
    for (var i = 0; i < scenes.length - 1; i++) {
        scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
    }
    mergedScenes = scenes[scenes.length - 1];
    console.log(scenes[scenes.length - 1]);
}
This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene.
Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.
What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.
I am using a <video> element to display the scenes.
The file's headers (metadata) should only be appended to the first chunk of data you've got.
You can't make a new video file by just pasting one after the other; they've got a structure.
So how to work around this?
If I understood your problem correctly, what you need is to be able to merge all the recorded videos as if the recording had only been paused.
Well this can be achieved, thanks to the MediaRecorder.pause() method.
You can keep the stream open, and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording, until this event.
Here is an external demo, because Stack Snippets don't work well with gUM...
And if you ever needed to also have shorter videos from between each resume and pause event, you could simply create new MediaRecorders for these smaller parts, while keeping the big one running.
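A minimal sketch of the pause/resume idea (illustrative names; an outline, not a drop-in implementation):

let chunks = [];
let recorder;

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => {
        recorder = new MediaRecorder(stream);
        recorder.ondataavailable = (event) => chunks.push(event.data);
        // every chunk shares the single header written at the start of the
        // recording, so the concatenated Blob plays back as one continuous file
        recorder.onstop = () => {
            const fullVideo = new Blob(chunks, { type: 'video/webm' });
            vid.src = URL.createObjectURL(fullVideo);
        };
        recorder.start(100);
    });

// end of a scene: pause instead of stopping
function endScene() {
    recorder.pause();
}

// start of the next scene: resume the same recorder
function startScene() {
    recorder.resume();
}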
Trying to follow the example here, which is basically a c&p of this
Think I got most of the parts down, except all the node.connect()'s
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect() so quite lost and would appreciate any clarification!
Oh and I'm loading an .mp3 via pure javascript new Audio('db.mp3').play(); and am trying to use that as the source without creating an <audio> element.
Can a mediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines the output for the filters.
In this case, your source loads the stream into the buffer and writes it to the input of the next filter, which is defined by the connect function. This is repeated for your analyser filter.
Think of it as pipes.
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(this.scriptProcessorBufferSize,
                                                               this.scriptProcessorInputChannels,
                                                               this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
    if (this.audioBuffer) {
        this.source = this.audioContext.createBufferSource();
        this.source.buffer = this.audioBuffer;
        this.source.connect(this.gainNode);
        this.source.connect(this.scriptProcessor);
        this.beginTime = Date.now();
        this.source.start(0, this.offset);
        this.isPlaying = true;
        this.controls.update({playing: this.isPlaying});
        updateGraphPlaybackPosition.call(this, this.offset);
    }
};
So as you can see, my source is connected to a gainNode, which is connected to a scriptProcessor. When the audio starts playing, the data flows source -> gainNode -> destination and source -> scriptProcessor -> destination, through the "pipes" that connect them, which is what connect() defines. When the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. After that, it is passed to the script processor so that events can be attached and triggered while the audio is being processed.
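As for your last question about new Audio('db.mp3'): you don't need a MediaStream at all. Here is a hedged sketch using createMediaElementSource, which accepts any HTMLMediaElement, including one created with new Audio() that never enters the DOM:

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();

// new Audio() creates an HTMLAudioElement even without an <audio> tag in the page
const audio = new Audio('db.mp3');
const source = audioCtx.createMediaElementSource(audio);

source.connect(analyser);               // source -> analyser
analyser.connect(audioCtx.destination); // analyser -> speakers
audio.play();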
Is it possible to dynamically create an HTML5 video element so that I can access the element via APIs like document.getElementById or by name, but it does not show up in the webpage?
Something like div.hide() or something in that direction ?
You can try
var video = document.createElement('video');
video.src = 'urlToVideo.ogg';
video.autoplay = true;
You can also use the canPlayType method to check whether the browser supports the video format you want to use before setting the source:
if (video.canPlayType('video/ogg').length > 0) {
    /* set some video source */
}
The method returns "probably" or "maybe" depending on the browser's confidence; an empty string means it can't play the format.
You can now use the video using the API. Just store it globally. You can later insert it into the DOM. Hope this helps.
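On the "may not show up in the webpage" part: as long as you never append the element to the document, it renders nothing. Note, though, that document.getElementById only finds elements that are in the document, so if you need that, a hedged sketch is to append it hidden (the id hiddenPlayer is illustrative):

var video = document.createElement('video');
video.id = 'hiddenPlayer';
video.style.display = 'none'; // keeps it out of the page layout entirely
document.body.appendChild(video);

// later, anywhere:
var sameVideo = document.getElementById('hiddenPlayer');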
Sure, you can create everything using just JS; nothing needs to be pre-created in the HTML body.
Here is a simple way of creating a video element in JS:
var videlem = document.createElement("video");
/// ... some setup like poster image, size, position etc. goes here...
/// now, add sources:
var sourceMP4 = document.createElement("source");
sourceMP4.type = "video/mp4";
sourceMP4.src = "path-to-video-file.mp4";
videlem.appendChild(sourceMP4);
//// same approach add ogg/ogv and webm sources
Before doing this, you should check whether the browser supports the video element and, if so, which file formats it can play. You can check support with:
var supportsVideoElement = !!document.createElement('video').canPlayType;
Then, if the video element is supported, test which video formats can be played:
var temp = document.createElement('video');
var canPlay_MP4 = temp.canPlayType('video/mp4; codecs="avc1.42E01E,mp4a.40.2"');
var canPlay_OGV = temp.canPlayType('video/ogg; codecs="theora,vorbis"');
var canPlay_WEBM = temp.canPlayType('video/webm; codecs="vp8,vorbis"');
After this, you can add the video element to your page using JS only, with the proper video sources set. There may be an issue with .htaccess on the server side where you need to add these lines:
AddType video/ogg .ogv
AddType video/ogg .ogg
AddType video/mp4 .mp4
AddType video/webm .webm
This may not be needed, depending on how your server is set up, but if you encounter issues playing videos from your server while they play fine from e.g. localhost on your dev machine, this can solve them. The .htaccess file with the above lines should be placed in the folder where the video files are located on the server.
OK, now, in order to have this element available with getElementById(...), you just need to set its id when you create it:
var videlem = document.createElement("video");
videlem.id = "xxxxxx";
And now you can later find it using:
var videlem = document.getElementById("xxxxxx");
However, as someone already commented, you don't need to do this if you have already created the element and have a variable pointing to it... just use it directly.
Hope this helps :-)
Updated (and simplest) way to achieve this (since Google searches are leading here):
var x = document.createElement("VIDEO");

if (x.canPlayType("video/mp4")) {
    x.setAttribute("src", "movie.mp4");
} else {
    x.setAttribute("src", "movie.ogg");
}

x.setAttribute("width", "320");
x.setAttribute("height", "240");
x.setAttribute("controls", "controls");

document.body.appendChild(x);