Grabbing all frames of a base64 video in JavaScript

If I have the base64 of a video file, is there any way I can grab all the frames in javascript without having to play through the entire video or sending the video back to the server?
I am working on a webpage that takes a video, converts it into ascii-art, and plays it. At first, I thought the best way would be to upload the video to the server, decode and convert it there, and then respond with the converted video; however, since I don't compress the output "video" (actually just a huge blob of text) the response is huge and takes a large amount of time to transfer.
I know I can do something like this if I parse the video on the front-end (not sure if this code is missing some things, but it conveys the general idea):
var frames = [];
var context = document.getElementById('canvas').getContext('2d');
var video = document.createElement('video');
video.src = base64Value;

function callback() {
  context.drawImage(video, 0, 0);
  frames.push(grabFrameFromCanvasContext(context));
  if (video.currentTime < video.duration) {
    setTimeout(callback, 50);
  }
}

callWhenVideoStartsPlaying(callback);
But it takes as long as the video's duration to parse. This makes sense for most cases, since the browser would be streaming the video from somewhere, but since the source of the video is base64, is there a better way to do this?

Related

How to stream PCM audio on HTML without lag?

The PCM audio data is captured in Unity3D in real time. All of this data is streamed to HTML via WebSockets. The general setup is Socket.IO with a Node.js server.
My main task is to add smooth audio playback to a live video+audio streaming solution on all platforms. This is my work in progress (video streaming): https://youtu.be/82_-a7WF3vs
The audio & video streaming part works well on non-html/non-WebGL platforms.
However, I couldn't get smooth audio playback in HTML with JavaScript. It runs in real time, but I hear lag issues and noise...
One of my concerns is that web browsers do not support multi-threading, which adds lag when receiving streaming data and playing it back at the same time.
Below is my core script for PCM playback. I hope someone can help me improve it.
var startTime = 0;
var audioCtx = new AudioContext();

function ProcessAudioData(_byte) {
  ReadyToGetFrame_aud = false;
  // read meta data
  SourceSampleRate = ByteToInt32(_byte, 0);
  SourceChannels = ByteToInt32(_byte, 4);
  // convert byte[] to float
  var BufferData = _byte.slice(8, _byte.length);
  AudioFloat = new Float32Array(BufferData.buffer);
  //=====================playback=====================
  if (AudioFloat.length > 0) StreamAudio(SourceChannels, AudioFloat.length, SourceSampleRate, AudioFloat);
  //=====================playback=====================
  ReadyToGetFrame_aud = true;
}

function StreamAudio(NUM_CHANNELS, NUM_SAMPLES, SAMPLE_RATE, AUDIO_CHUNKS) {
  var audioBuffer = audioCtx.createBuffer(NUM_CHANNELS, (NUM_SAMPLES / NUM_CHANNELS), SAMPLE_RATE);
  for (var channel = 0; channel < NUM_CHANNELS; channel++) {
    // This gives us the actual ArrayBuffer that contains the data
    var nowBuffering = audioBuffer.getChannelData(channel);
    for (var i = 0; i < NUM_SAMPLES; i++) {
      var order = i * NUM_CHANNELS + channel;
      nowBuffering[i] = AUDIO_CHUNKS[order];
    }
  }
  var source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += audioBuffer.duration;
}
How to stream PCM audio on HTML without lag?
There is always some lag with digital audio, no matter what you do. This has nothing to do with the web browser itself.
All of this data is streamed to HTML via WebSockets.
Why? The data is only going one direction so you can use a regular HTTP response and not have to worry about the overhead of Web Sockets.
One of my concerns is that web browsers do not support multi-threading
This isn't really accurate.
It runs in real time, but I hear lag issues and noise...
What your code appears to do is take a PCM frame it receives and play it immediately. This isn't good, as the sound is wrecked if you don't play your received buffers contiguously. You must take the data and schedule it to play immediately after the current data is finished, and not a sample early or too late.
Traditionally this means doing your own buffering and setting up a ScriptProcessorNode to read from those buffers. However, this also requires some DIY resampling because the encoded rate may not be the same as the playback rate.
These days, I think that MediaSource Extensions supports PCM decoding, so you can just pipe your data through that and let the underlying system do all the work for you.
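To make the scheduling idea above concrete, here is a minimal sketch (it reuses the question's audioCtx and the audioBuffer built inside StreamAudio; the helper name queueChunk is mine):
let nextStartTime = 0;

function queueChunk(audioBuffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  // never schedule in the past: the first chunk gets a small safety margin,
  // and every later chunk starts exactly where the previous one ends
  nextStartTime = Math.max(nextStartTime, audioCtx.currentTime + 0.05);
  source.start(nextStartTime);
  nextStartTime += audioBuffer.duration;
}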

efficient way of streaming a html5 canvas content?

I'm trying to stream the content of a html5 canvas on a live basis using websockets and nodejs.
The content of the html5 canvas is just a video.
What I have done so far is:
I convert the canvas to a blob, then get the blob URL and send that URL to my Node.js server using WebSockets.
I get the blob URL like this:
canvas.toBlob(function(blob) {
  url = window.URL.createObjectURL(blob);
});
The blob URLs are generated per video frame (20 frames per second to be exact) and they look something like this:
blob:null/e3e8888e-98da-41aa-a3c0-8fe3f44frt53
I then get that blob URL back from the server via WebSockets so I can use it to draw onto another canvas for other users to see.
I did search how to draw onto a canvas from a blob URL, but I couldn't find anything close to what I am trying to do.
So the questions I have are:
Is this the correct way of doing what I am trying to achieve? Any pros and cons would be appreciated.
Is there any other more efficient way of doing this, or am I on the right path?
Thanks in advance.
EDIT:
I should have mentioned that I cannot use WebRTC in this project and I have to do it all with what I have.
To make it easier for everyone to see where I am right now, this is how I tried to display the blob URLs mentioned above on my canvas using WebSockets:
websocket.onopen = function(event) {
  websocket.onmessage = function(evt) {
    var val = evt.data;
    console.log("new data " + val);
    var canvas2 = document.querySelector('.canvMotion2');
    var ctx2 = canvas2.getContext('2d');
    var img = new Image();
    img.onload = function() {
      ctx2.drawImage(img, 0, 0);
    };
    img.src = val;
  };
  // Listen for socket closes
  websocket.onclose = function(event) {
  };
  websocket.onerror = function(evt) {
  };
};
The issue is that when I run that code in Firefox, the canvas is always empty/blank, but I see the blob URLs in my console, so that makes me think that what I am doing is wrong.
And in Google Chrome, I get a "Not allowed to load local resource: blob:" error.
SECOND EDIT:
This is where I am at the moment.
First option
I tried to send the whole blob(s) via WebSockets and I managed that successfully. However, I couldn't read them back on the client side for some strange reason!
When I looked at my Node.js server's console, I could see something like this for each blob that I was sending to the server:
<buffer fd67676 hdsjuhsd8 sjhjs....
Second option:
So the option above failed, and I thought of something else, which is turning each canvas frame into a base64 (JPEG) image, sending that to the server via WebSockets, and then drawing those base64 images onto the canvas on the client side.
I'm sending 24 frames per second to the server.
This worked, BUT the client-side canvas where these base64 images are displayed is very slow; it is like it is drawing 1 frame per second. This is the issue I have at the moment.
Third option:
I also tried to use a video without a canvas. So, using WebRTC, I got the video stream as a single Blob, but I'm not entirely sure how to use that and send it to the client side so people can see it.
IMPORTANT: this system I am working on is not a peer-to-peer connection; it's just one-way streaming that I am trying to achieve.
The most natural way to stream canvas content: WebRTC
OP made it clear that they can't use it, and that may be the case for many, because:
Browser support is still not that great.
It requires running a media server (at least ICE+STUN/TURN, and maybe a gateway if you want to stream to more than one peer).
But still, if you can afford it, all you need then to get a MediaStream from your canvas element is
const canvas_stream = canvas.captureStream(minimumFrameRate);
and then you'd just have to add it to your RTCPeerConnection:
pc.addTrack(stream.getVideoTracks()[0], stream);
The example below will just display the MediaStream in a <video> element.
let x = 0;
const ctx = canvas.getContext('2d');
draw();
startStream();

function startStream() {
  // grab our MediaStream
  const stream = canvas.captureStream(30);
  // feed the <video>
  vid.srcObject = stream;
  vid.play();
}

function draw() {
  x = (x + 1) % (canvas.width + 50);
  ctx.fillStyle = 'white';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'red';
  ctx.beginPath();
  ctx.arc(x - 25, 75, 25, 0, Math.PI * 2);
  ctx.fill();
  requestAnimationFrame(draw);
}
video,canvas{border:1px solid}
<canvas id="canvas">75</canvas>
<video id="vid" controls></video>
The most efficient way to stream a live canvas drawing: stream the drawing operations.
Once again, OP said they didn't want this solution because their set-up doesn't match, but it might be helpful for many readers:
Instead of sending the result of the canvas, simply send the drawing commands to your peers, which will then execute them on their side (a minimal sketch follows the caveats below).
But this approach has its own caveats:
You will have to write your own encoder/decoder to pass the commands.
Some cases might be hard to share (e.g. external media would have to be shared and preloaded the same way on all peers, and the worst case is drawing another canvas, where you'd also have to share its own drawing process).
You may want to avoid having intensive image processing (e.g. ImageData manipulation) run on all peers.
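For illustration, a minimal sketch of the command-relay idea (the JSON message format and names are made up for this sketch, not part of the original answer; ws is a WebSocket and ctx is a 2D canvas context):
// emitter: describe the drawing operation instead of sending pixels
ws.send(JSON.stringify({ op: 'fillRect', args: [10, 20, 50, 50], fillStyle: 'red' }));

// consumer: replay the same operation on the local canvas
ws.onmessage = (evt) => {
  const cmd = JSON.parse(evt.data);
  if (cmd.fillStyle) ctx.fillStyle = cmd.fillStyle;
  ctx[cmd.op](...cmd.args); // e.g. ctx.fillRect(10, 20, 50, 50)
};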
So a third, definitely less performant way to do it, is like OP tried to do:
Upload frames at regular interval.
I won't go into detail here, but keep in mind that you are sending standalone image files, and hence a whole lot more data than if they had been encoded as a video.
Instead, I'll focus on why OP's code didn't work.
First, it may be good to have a small reminder of what a Blob is (the thing that is provided in the callback of canvas.toBlob(callback)).
A Blob is a special JavaScript object that represents binary data, generally stored either in the browser's memory, or at least on the user's disk, accessible by the browser.
This binary data is not directly available to JavaScript, though. To be able to access it, we need to either read this Blob (through a FileReader or a Response object), or to create a BlobURI, which is a fake URI allowing most APIs to point at the binary data just as if it were stored on a real server, even though the binary data is still just in the browser's allocated memory.
But this BlobURI, being just a fake, temporary, and domain-restricted path to the browser's memory, cannot be shared with any other cross-domain document or application, let alone another computer.
All this to say: what should have been sent over the WebSocket are the Blobs directly, not the BlobURIs.
You'd create the BlobURIs only on the consumers' side, so that they can load these images from the Blob's binary data that is now in their allocated memory.
Emitter side:
canvas.toBlob(blob=>ws.send(blob));
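If bandwidth is a concern, toBlob also accepts an image type and a quality argument, so each frame can be sent as a smaller lossy JPEG instead of the default lossless PNG (the values here are arbitrary):
canvas.toBlob(blob => ws.send(blob), 'image/jpeg', 0.7);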
Consumer side:
ws.onmessage = function(evt) {
  const blob = evt.data;
  const url = URL.createObjectURL(blob);
  img.src = url;
};
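One small addition to the consumer side, assuming an img element and 2D context like in OP's snippet: revoke each object URL once the image has loaded, so the browser doesn't keep one Blob per received frame alive in memory.
img.onload = () => {
  ctx2.drawImage(img, 0, 0);
  URL.revokeObjectURL(img.src); // release this frame's Blob
};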
But actually, to answer OP's problem even better, here is a final solution, which is probably the best in this scenario:
Share the video stream that is painted on the canvas.

Is it possible to merge multiple webm blobs/clips into one sequential video clientside?

I already looked at this question -
Concatenate parts of two or more webm video blobs
And I tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource - (without modifications) in hopes of transforming the blobs into ArrayBuffers and appending those to a SourceBuffer for the MediaSource Web API, but even the sample code wasn't working in my Chrome browser, with which it is said to be compatible.
The crux of my problem is that I can't combine multiple blob webm clips into one without incorrect playback after the first time it plays. To go straight to the problem please scroll to the line after the first two chunks of code, for background continue reading.
I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.
I am using the MediaRecorder WebAPI to record video on Chrome/Firefox. (Side question - is there any other way (besides Flash) that I can record video/audio via webcam & mic? Because MediaRecorder is not supported on non-Chrome/Firefox user agents.)
navigator.mediaDevices.getUserMedia(constraints)
  .then(gotMedia)
  .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
  recording = true;
  theStream = stream;
  vid.src = URL.createObjectURL(theStream);
  try {
    recorder = new MediaRecorder(stream);
  } catch (e) {
    console.error('Exception while creating MediaRecorder: ' + e);
    return;
  }
  theRecorder = recorder;
  recorder.ondataavailable = (event) => {
    tempScene.push(event.data);
  };
  theRecorder.start(100);
}

function finishRecording() {
  recording = false;
  theRecorder.stop();
  theStream.getTracks().forEach(track => { track.stop(); });
  while (tempScene[0].size != 1) {
    tempScene.splice(0, 1);
  }
  console.log(tempScene);
  scenes.push(tempScene);
  tempScene = [];
}
The function finishRecording gets called and a scene (an array of blobs of mimetype 'video/webm') gets saved to the scenes array. After it is saved, the user can then record and save more scenes via this process. They can then view a certain scene using the following chunk of code.
function showScene(sceneNum) {
  var sceneBlob = new Blob(scenes[sceneNum], {type: 'video/webm; codecs=vorbis,vp8'});
  vid.src = URL.createObjectURL(sceneBlob);
  vid.play();
}
In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and pointed to by the video's src attribute, so -
[blob, blob, blob] => sceneBlob (an object, not array)
Up until this point everything works fine and dandy. Here is where the issue starts.
I try to merge all the scenes into one by combining the blob arrays for each scene into one long blob array. The point of this functionality is so that the user can order the scenes however he/she deems fit and so he can choose not to include a scene. So they aren't necessarily in the same order as they were recorded in, so -
scene 1: [blob-1, blob-1] scene 2: [blob-2, blob-2]
final: [blob-2, blob-2, blob-1, blob-1]
and then I make a blob of the final blob array, so -
final: [blob, blob, blob, blob] => finalBlob
The code for merging the scene blob arrays is below:
function mergeScenes() {
  scenes[scenes.length] = [];
  for (var i = 0; i < scenes.length - 1; i++) {
    scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
  }
  mergedScenes = scenes[scenes.length - 1];
  console.log(scenes[scenes.length - 1]);
}
This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene.
Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.
What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.
I am using a <video> element to display the scenes.
The file's headers (metadata) should only be appended to the first chunk of data you've got.
You can't make a new video file by just pasting one after the other; they've got a structure.
So how do you work around this?
If I understood correctly your problem, what you need is to be able to merge all the recorded videos, just like if it were only paused.
Well this can be achieved, thanks to the MediaRecorder.pause() method.
You can keep the stream open, and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording up to this event.
Here is an external demo, because Stack Snippets don't work well with gUM...
And if you ever also needed shorter videos covering the span between each resume and pause event, you could simply create new MediaRecorders for these smaller parts while keeping the big one running.
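A minimal sketch of that pause/resume idea, assuming the stream obtained from getUserMedia in the question (the helper names are mine):
const chunks = [];
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = e => { if (e.data.size > 0) chunks.push(e.data); };
recorder.start(100); // flush a data chunk every 100 ms, as in the question

function pauseAndBuildVideo(callback) {
  recorder.pause();
  // wait a moment so the last chunk has been delivered, then build a Blob
  // that plays from the very start of the recording up to this pause
  setTimeout(() => callback(new Blob(chunks, { type: 'video/webm' })), 200);
}

function resumeRecording() {
  recorder.resume();
}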

How to capture a photo from the webcam in the browser and save it in the server?

I have seen this done by many websites, but I wonder how they do it. Some even allow one to crop the image. Is there a standard library or package for this?
You don't need any library; it can be done in several steps. I assume you are familiar with the webcam and able to show the signal from it in the Video object. If you aren't, in short it reads as:
var video: Video = new Video();
addChild(video);
video.smoothing = true;
video.attachCamera(camera); //Camera reference
video.width = someWidth;
video.height = someHeight;
Because the Video object implements IBitmapDrawable, you can draw it into a Bitmap and do whatever you want.
var bitmapData : BitmapData = new BitmapData(_video.width, _video.height);
//Tada! You have screenshot of the current frame from video object
bitmapData.draw(cameraView);
//For testing, add as Bitmap
addChild(new Bitmap(bitmapData));
As for sending to the server, you need some server-side implementation.
Here is a very useful blog post I came across (not mine):
http://matthewschrager.com/2013/05/25/how-to-take-webcam-pictures-from-browser-and-store-server-side/
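The snippet above uses Flash's Video/BitmapData APIs. For a plain-browser-JavaScript take on the same idea, here is a minimal sketch using getUserMedia, a canvas, and an upload request (the /upload endpoint is hypothetical):
const video = document.querySelector('video');
const canvas = document.createElement('canvas');

// show the webcam signal in the <video> element
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => { video.srcObject = stream; return video.play(); });

function takePhoto() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  // draw the current frame onto the canvas, then encode it as a JPEG Blob
  canvas.getContext('2d').drawImage(video, 0, 0);
  canvas.toBlob(blob => {
    const form = new FormData();
    form.append('photo', blob, 'photo.jpg');
    fetch('/upload', { method: 'POST', body: form }); // hypothetical endpoint
  }, 'image/jpeg');
}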

JavaScript FileReader using a lot of memory

I have a problem with my little project.
Every time the music player loads new songs into the playlist, or you press a song on the list to play it, it uses a lot of memory, and the memory stays high until you shut it down. I think it's every time I use the FileReader API that memory is used, but I'm also loading ID3 information with the jDataView.js script, which I also think takes a lot of memory.
Do you have any suggestions for loading, storing, and playing songs with the FileReader without taking up so much memory? I've tried to see if it was possible to clear the FileReader after use, but I couldn't find anything. I've only tested in Chrome.
UPDATE:
I have tested my project and found out that it's when I'm trying to load the data string that it takes up memory.
reader.onloadend = function(evt) {
  if (typeof(e) != "undefined") {
    e.pause();
  }
  e = new Audio();
  e.src = evt.target.result; // evt.target.result call takes the memory
  e.setAttribute("type", songs[index]["file"].type);
  e.play();
  e.addEventListener("ended", function() { LoadAudioFile(index + 1) }, false);
};
Is there another way to load the data into the audio element?
This is not because of FileReader, but because you are making the src attribute of the audio element a string roughly 1.33 × the mp3 file size. So instead of the src attribute being a nice short URL pointing to an mp3 resource, it's the whole mp3 file in base64 encoding. It's a wonder your browser didn't crash.
You should not read the file with FileReader at all, but create a blob URL from the file and use that as the src.
var url = window.URL || window.webkitURL;
//Src will be like "blob:http%3A//stackoverflow.com/d13eb575-4863-4f86-8727-6400119f4afc"
//A very short string that is pointing to the original resource in hard drive
var src = url.createObjectURL( mp3filereference );
audioElement.src = src;
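And when the player moves on to the next song, the previous blob URL can be revoked so the browser is free to release that memory (a small sketch; nextMp3File is a placeholder for whatever File the playlist hands over next):
if (audioElement.src) {
  url.revokeObjectURL(audioElement.src); // free the previous song's blob URL
}
audioElement.src = url.createObjectURL(nextMp3File);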
