I have a simple webpage where you can stream your webcam. I would like to take this stream and send it somewhere, but apparently I can't really access the stream itself. I have this code to run the stream:
navigator.webkitGetUserMedia({video: true}, gotStream, noStream);
And in gotStream, I tried many things to "redirect" this stream somewhere else, for example:
function gotStream(stream) {
    stream_handler(stream);
    // other stuff to show webcam output on the webpage
}
or
function gotStream(stream) {
    stream.videoTracks.onaddtrack = function (track) {
        console.log("in onaddtrack");
        stream_handler(track);
    };
    // other stuff to show webcam output on the webpage
}
But apparently the gotStream function gets called only once, at the beginning, when the user grants the page permission to use the webcam. Moreover, the stream variable is not the stream itself but an object with some properties inside. How am I supposed to access the stream itself and redirect it wherever I want?
EDIT: You may be familiar with webglmeeting, a sort of face-to-face conversation tool apparently developed on top of WebRTC. I think that script is somehow sending the stream of data from one point to the other. I would like to achieve the same by understanding how to get the stream of data in the first place.
RE-EDIT: I don't want to convert frames to images and send those; I would like to work with the stream of data itself.
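For context, this is roughly how WebRTC pages forward a stream once they have it; a hedged sketch assuming the Chrome-era webkitRTCPeerConnection API, with all signaling omitted:
function gotStream(stream) {
    // the stream object is a handle, not the raw bytes; hand it to a peer
    // connection and the browser does the encoding and transport itself
    var pc = new webkitRTCPeerConnection(null);
    pc.addStream(stream);
    // createOffer / setLocalDescription and your signaling exchange go here
}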
If you mean to stream your camera somewhere as PNG or JPEG snapshots, you can use a canvas like this:
HTML
<video id="live" width="320" height="240" autoplay></video>
<canvas width="320" id="canvas" height="240" style="display:none;"></canvas>
JS (jQuery)
var video = $("#live").get()[0];
var canvas = $("#canvas");
var ctx = canvas.get()[0].getContext('2d');
navigator.webkitGetUserMedia("video",
function(stream) {
video.src = webkitURL.createObjectURL(stream);
}
)
setInterval(function () {
    // draw the current video frame, then grab it as a base64 JPEG
    ctx.drawImage(video, 0, 0, 320, 240);
    var data = canvas[0].toDataURL("image/jpeg");
}, 1000);
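From there, the captured data URL could be pushed over the wire; a minimal sketch, assuming a WebSocket endpoint at ws://example.com/frames (a hypothetical URL, not from the original answer):
var ws = new WebSocket("ws://example.com/frames"); // hypothetical endpoint
setInterval(function () {
    ctx.drawImage(video, 0, 0, 320, 240);
    var data = canvas[0].toDataURL("image/jpeg"); // base64-encoded JPEG frame
    if (ws.readyState === WebSocket.OPEN) {
        ws.send(data);
    }
}, 1000);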
I'm currently using node-lame to encode a raw PCM input stream, and I have the following code in Node.js that successfully outputs binary MP3 chunks:
server.on('request', (req, res) => {
    encoded.pipe(res);
});
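For reference, the surrounding server would look roughly like this; a sketch assuming TooTallNate's node-lame, with pcmSource standing in for whatever raw PCM stream is being encoded (both the source and the exact options are assumptions, not from the original code):
var http = require('http');
var lame = require('lame');

// encode raw PCM to MP3; options must match the real PCM format
var encoded = new lame.Encoder({ channels: 2, bitDepth: 16, sampleRate: 44100 });
pcmSource.pipe(encoded); // pcmSource: the raw PCM input (assumption)

var server = http.createServer();
server.on('request', (req, res) => {
    res.writeHead(200, { 'Content-Type': 'audio/mpeg' }); // hint that this is MP3
    encoded.pipe(res);
});
server.listen(8000);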
I try to request this endpoint from my front-end interface with code like the following:
var audio = new Audio('http://localhost:8000/a.mp3'); // the above
audio.play();
However, as the audio source is a continuous input stream, the content just keeps getting downloaded without end.
Instead, I want to be able to play the chunks as they are downloaded.
I can access http://localhost:8000/a.mp3 in an application like VLC or Quicktime Player, and the audio delivery works fine; I'm just stumped as to how to best do this on the web.
Thanks in advance.
This code works for us:
<audio id="music" preload="all">
<source src="http://localhost:8000/a.mp3">
</audio>
<script>
let music = document.getElementById('music');
music.play();
</script>
I am trying to do the following:
On the server I encode h264 packets into a WebM (MKV) container structure, so that each cluster gets a single frame packet. Only the first data chunk is different, as it contains something called the Initialization Segment. Here it is explained quite well.
Then I stream those clusters one by one in a binary stream via WebSocket to a browser, which is Chrome.
It probably sounds weird that I use the h264 codec and not VP8 or VP9, which are the native codecs for the WebM format. But it appears that the HTML video tag has no problem playing this sort of container: if I just write the whole stream to a file and pass it to video.src, it plays fine. But I want to stream it in real time. That's why I am breaking the video into chunks and sending them over a WebSocket.
On the client, I am using the MediaSource API. I have little experience with web technologies, but I found it's probably the only way to go in my case.
And it doesn't work. I am getting no errors, the stream runs OK, and the video object emits no warnings or errors (checked via the developer console).
The client-side code looks like this:
<script>
$(document).ready(function () {
var sourceBuffer;
var player = document.getElementById("video1");
var mediaSource = new MediaSource();
player.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', sourceOpen);
//array with incoming segments:
var mediaSegments = [];
var ws = new WebSocket("ws://localhost:8080/echo");
ws.binaryType = "arraybuffer";
player.addEventListener("error", function (err) {
$("#id1").append("video error "+ err.error + "\n");
}, false);
player.addEventListener("playing", function () {
$("#id1").append("playing\n");
}, false);
player.addEventListener("progress",onProgress);
ws.onopen = function () {
$("#id1").append("Socket opened\n");
};
function sourceOpen()
{
sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
}
function onUpdateEnd()
{
if (!mediaSegments.length)
{
return;
}
sourceBuffer.appendBuffer(mediaSegments.shift());
}
var initSegment = true;
ws.onmessage = function (evt) {
if (evt.data instanceof ArrayBuffer) {
var buffer = evt.data;
//the first segment is always 'initSegment'
//it must be appended to the buffer first
if(initSegment == true)
{
sourceBuffer.appendBuffer(buffer);
sourceBuffer.addEventListener('updateend', onUpdateEnd);
initSegment = false;
}
else
{
mediaSegments.push(buffer);
}
}
};
});
I also tried different profile codes in the MIME type, even though I know my codec is "high profile". I tried the following profiles:
avc1.42E01E baseline
avc1.58A01E extended profile
avc1.4D401E main profile
avc1.64001E high profile
In some examples I found from 2-3 years ago, I have seen developers using type="video/x-matroska", but a lot has probably changed since then, because now even video.src doesn't handle this sort of MIME type.
Additionally, to make sure the chunks I am sending through the stream are not corrupted, I opened a local streaming session in VLC, and it played progressively with no issues.
The only thing I can suspect is that the MediaSource implementation doesn't know how to handle this sort of hybrid container. And then I wonder why the video object plays such a video fine. Am I missing something in my client-side code? Or does the MediaSource API indeed not support this type of media?
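A quick way to narrow this down: MediaSource.isTypeSupported reports whether MSE accepts a given container/codec string at all, and MSE support is narrower than what video.src can play. A minimal probe sketch (results vary per browser):
// probe MSE support for the MIME strings in question
console.log(MediaSource.isTypeSupported('video/mp4; codecs="avc1.64001E"'));
console.log(MediaSource.isTypeSupported('video/webm; codecs="vp8"'));
console.log(MediaSource.isTypeSupported('video/x-matroska; codecs="avc1.64001E"'));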
PS: For those curious why I am using the MKV container and not, for example, MPEG-DASH: the answer is container simplicity, data writing speed, and size. EBML structures are very compact and easy to write in real time.
I want to record audio from a video element alongside recording from a canvas.
I have
var stream = canvas.captureStream(29);
Now I am adding the video's audio track to the stream.
var vStream = video.captureStream();
stream.addTrack(vStream.getAudioTracks()[0]);
But this slows down performance with every video added, as captureStream() is very heavy on video elements, and it also requires a flag to be switched on in Chrome. Is there a way of creating an audio-only MediaStream from a video element without using captureStream()?
Yes, you can use the Web Audio API's createMediaElementSource method, which grabs the audio from your media element, and then the createMediaStreamDestination method, which creates a MediaStreamAudioDestinationNode containing a MediaStream.
You then just have to connect it all, and you've got a MediaStream carrying your media element's audio.
// wait for the video to start playing
vid.play().then(_ => {
    var ctx = new AudioContext();
    // create a source node from the <video>
    var source = ctx.createMediaElementSource(vid);
    // now a MediaStream destination node
    var stream_dest = ctx.createMediaStreamDestination();
    // connect the source to the MediaStream destination
    source.connect(stream_dest);
    // grab the actual MediaStream and play it in the <audio> element
    out.srcObject = stream_dest.stream;
    out.play();
});
The video's audio will be streamed to this audio element: <br>
<audio id="out" controls></audio><br>
The original video element: <br>
<video id="vid" crossOrigin="anonymous" src="https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4?dl=0" autoplay></video>
Note that you could also connect more sources to this stream, and that you can combine it with the tracks of another stream via the new MediaStream([track1, track2]) constructor, which takes a sequence of MediaStreamTracks (it's currently the only way to combine different streams in Firefox, until this bug is fixed, which should be soon).
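Tying this back to the question, the audio stream obtained this way can be merged with the canvas video track; a sketch, assuming the question's video and canvas variables:
var actx = new AudioContext();
var source = actx.createMediaElementSource(video);
var dest = actx.createMediaStreamDestination();
source.connect(dest);
source.connect(actx.destination); // optional: keep the audio audible

// combine the canvas video track with the extracted audio track
var combined = new MediaStream([
    canvas.captureStream(29).getVideoTracks()[0],
    dest.stream.getAudioTracks()[0]
]);
// 'combined' can now go to a MediaRecorder, RTCPeerConnection, etc.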
If I have the base64 of a video file, is there any way I can grab all the frames in JavaScript without having to play through the entire video or send the video back to the server?
I am working on a webpage that takes a video, converts it into ASCII art, and plays it. At first, I thought the best way would be to upload the video to the server, decode and convert it there, and then respond with the converted video; however, since I don't compress the output "video" (actually just a huge blob of text), the response is huge and takes a long time to transfer.
I know I can do something like this if I parse the video on the front-end (not sure if this code is missing some things, but it conveys the general idea):
var frames = [];
var context = document.getElementById('canvas').getContext('2d');
var video = document.createElement('video');
video.src = base64Value;

function callback() {
    context.drawImage(video, 0, 0);
    frames.push(grabFrameFromCanvasContext(context));
    if (video.currentTime < video.duration) {
        setTimeout(callback, 50);
    }
}

callWhenVideoStartsPlaying(callback);
But parsing then takes as long as the video itself. This makes sense for most cases, since the browser would be streaming the video from somewhere, but since the source of the video is base64, is there a better way to do this?
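One commonly suggested alternative is to seek through the video instead of playing it in real time, grabbing a frame on each seeked event; a sketch, assuming a fixed frame interval (the true frame rate isn't exposed, so step is a guess):
var frames = [];
var context = document.getElementById('canvas').getContext('2d');
var video = document.createElement('video');
video.src = base64Value; // e.g. a data: URL built from the base64 string
var step = 1 / 30;       // assumed frame interval

video.addEventListener('loadeddata', grabFrame); // first frame is decodable here
video.addEventListener('seeked', grabFrame);     // fires when a seek's frame is ready

function grabFrame() {
    context.drawImage(video, 0, 0);
    frames.push(context.getImageData(0, 0, video.videoWidth, video.videoHeight));
    if (video.currentTime + step < video.duration) {
        video.currentTime += step; // runs as fast as decoding allows, not realtime
    }
}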
I am developing a system where the mobile device camera is accessed in the browser and the camera stream frames are sent to the other side synchronously, where they are processed further. I have drawn the frames to the canvas on a timed interval, as in the code below. How do I send each frame drawn on the canvas to the other side, so that the processing can happen on every image frame? The other side's code is in a native language.
$<!DOCTYPE html>
<html>
<h1>Simple web camera display demo</h1>
<body>
<video autoplay width="480" height="480" src=""></video>
<canvas width="600" height="480" style="" ></canvas>
<img src="" width="100" height="100" ></img>
<script type="text/javascript">
var video = document.getElementsByTagName('video')[0],
heading = document.getElementsByTagName('h1')[0];
if(navigator.getUserMedia) {
navigator.getUserMedia('video', successCallback, errorCallback);
function successCallback( stream ) {
video.src = stream;
}
function errorCallback( error ) {
heading.textContent =
"An error occurred: [CODE " + error.code + "]";
}
} else {
heading.textContent =
"Native web camera streaming is not supported in this browser!";
}
draw_interval = setInterval(function()
{
var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var frames = document.getElementById('frames');
ctx.drawImage(document.querySelector("video"), 0, 0);
}, 33)
</script>
</body>
</html>
I'm not quite sure what you mean by "other side" and "native language".
But, you can send canvas images to a server using AJAX.
The server receives the canvas image as base64-encoded image data.
For example, assume:
You're sending the image to a PHP server (yourOtherSide.php) – but of course this could be any server that accepts ajax posts.
You have a reference to the canvas element holding your frame: canvas
Your ajax payload contains an id number of the frame being sent (ID) and the image data (imgData).
(optionally) You are getting some response back from the server, even if it's just "OK": anyReturnedStuff
Then your ajax post would be:
$.post("yourOtherSide.php", { ID: yourFrameID, imgData: canvas.toDataURL("image/jpeg") })
    .done(function (anyReturnedStuff) { console.log(anyReturnedStuff); });
[Edited to include server-side creation of images from the ajax post]
These code samples will receive the ajax base64 imageData and create an image for you to process with your C image-processing library.
If you're using a PHP server:
// strip the "data:image/jpeg;base64," prefix before decoding
$imageData = str_replace('data:image/jpeg;base64,', '', $_POST['imgData']);
$img = imagecreatefromstring(base64_decode($imageData));
// and process $img with your image library here
Or if you're using ASP.NET:
byte[] byteArray = System.Convert.FromBase64String(imageData);
Image image;
using (MemoryStream ms = new MemoryStream(byteArray)) {
    image = Image.FromStream(ms);
    // and process image with your image library here
}