Using the Chromecast dongle, we are trying to show a frequently changing Bitmap from an Android app on the receiver. The way we are currently doing it is converting the image to Base64 and then submitting that as the URL. This works but is very slow and seems inefficient. What would be the best way to show a local Bitmap on the receiver?
Relevant Java code:
Bitmap bitmap1 = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, options);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap1.compress(CompressFormat.JPEG, 85, baos);
String image = "data:image/jpeg;base64,"+Base64.encodeToString(baos.toByteArray(), Base64.DEFAULT);
mMessageStream.loadMedia(image, mMetaData, true);
receiver.html:
<html>
  <script src="https://www.gstatic.com/cast/js/receiver/1.0/cast_receiver.js">
  </script>
  <script type="text/javascript">
    var receiver = new cast.receiver.Receiver(
        'ACTUAL_APP_ID_HERE', [cast.receiver.RemoteMedia.NAMESPACE],
        "",
        3);
    var remoteMedia = new cast.receiver.RemoteMedia();
    remoteMedia.addChannelFactory(
        receiver.createChannelFactory(cast.receiver.RemoteMedia.NAMESPACE));
    receiver.start();

    window.addEventListener('load', function() {
      var elem = document.getElementById('vid');
      remoteMedia.setMediaElement(elem);
    });
  </script>
  <body>
    <img id="vid" style="position:absolute;top:0;left:0;height:100%;width:100%" />
  </body>
</html>
This is probably not the best method, but I have my app start a local web server, and send the correct link to the receiver. On an http request, the server then streams the file from the disk. I haven't tested to see how fast/often it can be changed.
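If you go this route, the receiver page only has to point its <img> at whatever URL the sender passes over and refresh it as the bitmap changes. Below is a minimal, hypothetical sketch of that receiver-side refresh; the address, path, and one-second interval are all assumptions rather than anything from the original setup, and the timestamp query parameter is just there to defeat caching.

// Hypothetical receiver-side refresh loop. Assumes the sender's local web
// server exposes the latest bitmap at a fixed URL (placeholder below).
var imageUrl = 'http://192.168.1.50:8080/current.jpg'; // assumed address/path
var img = document.getElementById('vid');

setInterval(function () {
  // Append a timestamp so the browser fetches a fresh copy instead of
  // reusing a cached one.
  img.src = imageUrl + '?' + Date.now();
}, 1000); // assumed refresh interval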
Use Base64OutputStream with other streams; right now you are loading the whole image into memory, which is very bad and will give you an out-of-memory error sooner or later.
Related
I have a simple need for a preview of image data that is updated regularly in realtime on the server side. The server can easily generate Base64 encoded JPEG images. I have JavaScript that fetches the current data and sets the src attribute of an image.
This appears to work fine when testing with the server running locally (i.e. URLs point to localhost); however, when client and server are on different machines, after some time the browser gets bogged down and can't handle it anymore. It seems like old image data is hanging around without being garbage collected, or something similar, causing a slowdown and ultimately a failure to reliably update the image on the browser side.
Here is the code:
<html lang="en">
  <head>
    <title>Live Stream</title>
  </head>
  <body>
    <div>
      <img id="img1" width="320" height="180" src="data:," />
    </div>
    <script type="text/javascript">
      var fetching = false;

      const fetchImage = async () => {
        fetching = true;
        try {
          const response = await fetch(document.baseURI + 'app/image/base64');
          var frame = await response.text();
          document.getElementById("img1").src = "data:image/jpeg;base64," + frame;
        } catch (e) {
          console.log(e);
        }
        fetching = false;
      }

      function updateImage() {
        fetchImage();
      }

      window.requestAnimationFrame( draw );

      function draw(timestamp) {
        if( !fetching ) {
          updateImage();
        }
        window.requestAnimationFrame( draw );
      }
    </script>
  </body>
</html>
Am I doing something wrong? Is there a better way? Low-latency is important.
I also have an alternative that uses WebSockets to push data. That might make more sense when the data rate is lower. Either way, the display of the data is handled the same way: by updating the src of an <img> with a new 'data:' URL.
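For what it's worth, one way to avoid piling up large 'data:' strings is to receive each frame as a binary Blob (for instance over the WebSocket variant mentioned above) and point the <img> at an object URL, revoking the previous one once the new frame is shown. This is only a sketch under those assumptions; the endpoint path is made up.

// Sketch: receive each JPEG frame as a binary Blob over a WebSocket and
// display it via an object URL instead of a base64 data URL.
// The endpoint path '/app/image/ws' is hypothetical.
const img = document.getElementById('img1');
const ws = new WebSocket('ws://' + location.host + '/app/image/ws');
ws.binaryType = 'blob';

let previousUrl = null;
ws.onmessage = (event) => {
  const url = URL.createObjectURL(event.data); // event.data is a Blob
  img.onload = () => {
    // Release the previous frame's memory once the new one is displayed.
    if (previousUrl) URL.revokeObjectURL(previousUrl);
    previousUrl = url;
  };
  img.src = url;
};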
I'm trying to stream the content of a html5 canvas on a live basis using websockets and nodejs.
The content of the html5 canvas is just a video.
What I have done so far is:
I convert the canvas to blob and then get the blob URL and send that URL to my nodejs server using websockets.
I get the blob URL like this:
canvas.toBlob(function(blob) {
url = window.URL.createObjectURL(blob);
});
The blob URLs are generated per video frame (20 frames per second to be exact) and they look something like this:
blob:null/e3e8888e-98da-41aa-a3c0-8fe3f44frt53
I then get that blob URL back from the server via websockets so I can use it to draw it onto another canvas for other users to see.
I did search how to draw onto a canvas from a blob URL, but I couldn't find anything close to what I am trying to do.
So the questions I have are:
1. Is this the correct way of doing what I am trying to achieve? Any pros and cons would be appreciated.
2. Is there any other more efficient way of doing this, or am I on the right path?
Thanks in advance.
EDIT:
I should have mentioned that I cannot use WebRTC in this project and I have to do it all with what I have.
To make it easier for everyone to see where I am at right now, this is how I tried to display the blob URLs mentioned above on my canvas using websockets:
websocket.onopen = function(event) {
  websocket.onmessage = function(evt) {
    var val = evt.data;
    console.log("new data " + val);
    var canvas2 = document.querySelector('.canvMotion2');
    var ctx2 = canvas2.getContext('2d');
    var img = new Image();
    img.onload = function() {
      ctx2.drawImage(img, 0, 0);
    };
    img.src = val;
  };

  // Listen for socket closes
  websocket.onclose = function(event) {
  };

  websocket.onerror = function(evt) {
  };
};
The issue is that when I run that code in Firefox, the canvas is always empty/blank, but I see the blob URLs in my console, so that makes me think that what I am doing is wrong.
And in Google Chrome, I get a "Not allowed to load local resource: blob:" error.
SECOND EDIT:
This is where I am at the moment.
First option:
I tried to send the whole blob(s) via websockets and I managed that successfully. However, I couldn't read it back on the client side for some strange reason!
When I looked at my nodejs server's console, I could see something like this for each blob that I was sending to the server:
<buffer fd67676 hdsjuhsd8 sjhjs....
Second option:
So the option above failed, and I thought of something else: turning each canvas frame into a base64 (JPEG) string, sending that to the server via websockets, and then displaying/drawing those base64 images onto the canvas on the client side.
I'm sending 24 frames per second to the server.
This worked, BUT the client-side canvas where these base64 images are displayed is again very slow; it's as if it's drawing one frame per second, and this is the issue I have at the moment.
Third option:
I also tried to use a video without a canvas. So, using WebRTC, I got the video stream as a single Blob, but I'm not entirely sure how to use that and send it to the client side so people can see it.
IMPORTANT: the system I am working on is not a peer-to-peer connection; it's just one-way streaming that I am trying to achieve.
The most natural way to stream a canvas content: WebRTC
OP made it clear that they can't use it, and it may be the case for many, because:
Browser support is still not that great.
It implies having a media server running (at least ICE+STUN/TURN, and maybe a gateway if you want to stream to more than one peer).
But still, if you can afford it, all you need then to get a MediaStream from your canvas element is
const canvas_stream = canvas.captureStream(minimumFrameRate);
and then you'd just have to add it to your RTCPeerConnection:
pc.addTrack(stream.getVideoTracks()[0], stream);
The example below will just display the MediaStream in a <video> element.
let x = 0;
const ctx = canvas.getContext('2d');
draw();
startStream();

function startStream() {
  // grab our MediaStream
  const stream = canvas.captureStream(30);
  // feed the <video>
  vid.srcObject = stream;
  vid.play();
}

function draw() {
  x = (x + 1) % (canvas.width + 50);
  ctx.fillStyle = 'white';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'red';
  ctx.beginPath();
  ctx.arc(x - 25, 75, 25, 0, Math.PI * 2);
  ctx.fill();
  requestAnimationFrame(draw);
}
video,canvas{border:1px solid}
<canvas id="canvas">75</canvas>
<video id="vid" controls></video>
The most efficient way to stream a live canvas drawing: stream the drawing operations.
Once again, OP said they didn't want this solution because their set-up doesn't match, but it might be helpful for many readers:
Instead of sending the result of the canvas, simply send the drawing commands to your peers, which will then execute them on their side (a rough sketch follows the caveats below).
But this approach has its own caveats:
You will have to write your own encoder/decoder to pass the commands.
Some cases might be hard to share (e.g. external media would have to be shared and preloaded the same way on all peers, the worst case being drawing another canvas, where you'd also have to share its drawing process).
You may want to avoid having intensive image processing (e.g. ImageData manipulation) done on all peers.
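To make that idea concrete, here is a rough, purely illustrative sketch of passing drawing commands over a WebSocket; the endpoint and the command format are invented for the example, not a standard protocol.

// ---- Emitter peer (sketch) ----
// Instead of pixels, describe each drawing operation and send that small message.
const ws = new WebSocket('ws://example.com/draw'); // placeholder endpoint
function sendCircle(x, y, radius, color) {
  ws.send(JSON.stringify({ op: 'circle', x, y, r: radius, color }));
}

// ---- Consumer peer (sketch, runs on the other side) ----
// Replays each received command on its own canvas.
const ctx = document.querySelector('canvas').getContext('2d');
ws.onmessage = (evt) => {
  const cmd = JSON.parse(evt.data);
  if (cmd.op === 'circle') {
    ctx.fillStyle = cmd.color;
    ctx.beginPath();
    ctx.arc(cmd.x, cmd.y, cmd.r, 0, Math.PI * 2);
    ctx.fill();
  }
};

The win is that each message is a few dozen bytes instead of a full image, but, as listed above, you end up maintaining your own little protocol.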
So a third, definitely less performant, way to do it is what OP tried:
Upload frames at a regular interval.
I won't go into details here, but keep in mind that you are sending standalone image files, and hence a whole lot more data than if they had been encoded as a video.
Instead, I'll focus on why OP's code didn't work.
First, it may be good to have a small reminder of what a Blob is (the thing that is provided in the callback of canvas.toBlob(callback)).
A Blob is a special JavaScript object which represents binary data, generally stored either in the browser's memory, or at least on the user's disk, accessible by the browser.
This binary data is not directly available to JavaScript though. To access it, we need to either read the Blob (through a FileReader or a Response object), or to create a BlobURI, which is a fake URI allowing most APIs to point at the binary data just as if it were stored on a real server, even though the binary data is still just in the browser's allocated memory.
But this BlobURI, being just a fake, temporary, and domain-restricted path to the browser's memory, cannot be shared with any other cross-domain document or application, let alone another computer.
All this to say that what should have been sent through the WebSocket are the Blobs directly, and not the BlobURIs.
You'd create the BlobURIs only on the consumers' side, so that they can load the images from the Blob's binary data that is now in their allocated memory.
Emitter side:
canvas.toBlob(blob=>ws.send(blob));
Consumer side:
ws.onmessage = function(evt) {
  const blob = evt.data;
  const url = URL.createObjectURL(blob);
  img.src = url;
};
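For completeness, the relay in the middle only needs to forward each binary frame to the other connected clients. The question doesn't show the Node side, so this sketch assumes the 'ws' npm package and an arbitrary port.

// Node.js relay sketch using the 'ws' package (an assumption; any WebSocket
// server library would do). Re-broadcasts each binary frame to all other clients.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8081 }); // placeholder port

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // 'data' is the raw binary frame sent by the emitter.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data, { binary: true });
      }
    }
  });
});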
But actually, to even better answer OP's problem, there is a final solution, which is probably the best in this scenario:
Share the video stream that is painted on the canvas.
I've seen many partial answers to this here and elsewhere, but I am very much a novice coder and am hoping for a thorough solution. I have been able to set up recording audio from a laptop mic in Chrome Canary (v. 29.x) and can, using recorder.js, relatively easily set up recording a .wav file and saving that locally, a la:
http://webaudiodemos.appspot.com/AudioRecorder/index.html
But I need to be able to save the file onto a Linux server I have running. It's the actual sending of the recorded blob data to the server and saving it out as a .wav file that's catching me up. I don't have the requisite PHP and/or AJAX knowledge for saving the blob to a URL, nor, as I have been given to understand, for dealing with the binary handling on Linux that makes saving that .wav file challenging indeed. I'd greatly welcome any pointers in the right direction.
Client side JavaScript function to upload the WAV blob:
function upload(blob) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function(e) {
    if (this.readyState === 4) {
      console.log("Server returned: ", e.target.responseText);
    }
  };
  var fd = new FormData();
  fd.append("that_random_filename.wav", blob);
  xhr.open("POST", "<url>", true);
  xhr.send(fd);
}
PHP file upload_wav.php:
<?php
// get the temporary name that PHP gave to the uploaded file
$tmp_filename=$_FILES["that_random_filename.wav"]["tmp_name"];
// rename the temporary file (because PHP deletes the file as soon as it's done with it)
rename($tmp_filename,"/tmp/uploaded_audio.wav");
?>
after which you can play the file /tmp/uploaded_audio.wav.
But remember! /tmp/uploaded_audio.wav was created by the user www-data, and (by PHP default) is not readable by other users. To automate adding the appropriate permissions, append the line
chmod("/tmp/uploaded_audio.wav",0755);
to the end of the PHP (before the PHP end tag ?>).
Hope this helps.
The easiest way, if you just want to hack that code, is to go into recorderWorker.js and change the exportWAV() function to something like this:
function exportWAV(type) {
  var bufferL = mergeBuffers(recBuffersL, recLength);
  var bufferR = mergeBuffers(recBuffersR, recLength);
  var interleaved = interleave(bufferL, bufferR);
  var dataview = encodeWAV(interleaved);
  var audioBlob = new Blob([dataview], { type: type });

  var xhr = new XMLHttpRequest();
  xhr.onload = function(e) {
    if (this.readyState === 4) {
      console.log("Server returned: ", e.target.responseText);
    }
  };
  var fd = new FormData();
  fd.append("that_random_filename.wav", audioBlob);
  xhr.open("POST", "<url>", true);
  xhr.send(fd);
}
Then that method will save to the server from inside the worker thread, rather than pushing the data back to the main thread. (The complex worker-based mechanism in RecorderJS exists because a large encode should be done off the main thread.)
Really, ideally, you'd just use a MediaRecorder today, and let it do the encoding, but that's a whole 'nother ball of wax.
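For readers who can take that route, a minimal MediaRecorder sketch might look like the following. Note that browsers typically emit WebM/Ogg rather than WAV, and the '/upload_audio' endpoint and duration handling are assumptions for the example.

// Sketch: record the microphone with MediaRecorder and upload the result.
async function recordAndUpload(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream); // browser picks the codec, e.g. audio/webm
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    const fd = new FormData();
    fd.append('audio', blob, 'recording.webm'); // name/extension are placeholders
    fetch('/upload_audio', { method: 'POST', body: fd }); // assumed endpoint
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}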
I am retrieving multiple pieces of encrypted data with the help of some ajax queries, and performing some manipulation to transform all these encrypted chunks into a valid video. Now that I have the binary of the video in memory, I am stuck. How can I display it?
Just to be sure, I have replicated all these steps on the server side, and the final output is indeed a playable video. So my only problem is displaying my javascript binary object as a video.
I am doing my best to use only web technologies (html5, video tag, javascript) and I would like to avoid developing my own custom player in flash, which is my very last solution.
If you have an idea, I'm interested. For my part, I am out of imagination.
Here's a quick example that just uses a file input instead of the AJAX you'd normally be using. Note that the first input is wired up to a function that will read the file and return a dataURL for it.
However, since you don't have a File object, but instead have a stream of data that represents the contents of the file, you can't use this method. So, I've included a second input, which is wired up to a function that just loads the file as a binary string. This string is then base64-encoded 'manually' with a browser function, before being turned into a dataURL. To do this, you need to know what type of file you're dealing with in order to construct the URL correctly.
It's fairly slow to load even on this laptop i7 and probably sucks memory like no-one's business - mobile phones will likely fall over in a stupor (I haven't tested with one)
You should be able to get your data-stream and continue on from the point where I have the raw data (var rawResult = evt.target.result;)
Error checking is left as an exercise for the reader.
<!DOCTYPE html>
<html>
<head>
<script>
"use strict";

function byId(id, parent) { return (parent == undefined ? document : parent).getElementById(id); }

// callback gets data via the .target.result field of the param passed to it.
function loadFileObject(fileObj, loadedCallback)
{
    var reader = new FileReader();
    reader.onload = loadedCallback;
    reader.readAsDataURL( fileObj );
}

// callback gets data via the .target.result field of the param passed to it.
function loadFileAsBinary(fileObj, loadedCallback)
{
    var reader = new FileReader();
    reader.onload = loadedCallback;
    reader.readAsBinaryString( fileObj );
}

window.addEventListener('load', onDocLoaded, false);

function onDocLoaded()
{
    byId('fileInput1').addEventListener('change', onFileInput1Changed, false);
    byId('fileInput2').addEventListener('change', onFileInput2Changed, false);
}

function onFileInput1Changed(evt)
{
    if (this.files.length != 0)
    {
        var curFile = this.files[0];
        loadFileObject(curFile, onVideoFileReadAsURL);

        function onVideoFileReadAsURL(evt)
        {
            byId('vidTgt').src = evt.target.result;
            byId('vidTgt').play();
        }
    }
}

function onFileInput2Changed(evt)
{
    if (this.files.length != 0)
    {
        var curFile = this.files[0];
        loadFileAsBinary(curFile, onVideoFileReadAsBinary);

        function onVideoFileReadAsBinary(evt)
        {
            var rawResult = evt.target.result;
            var b64Result = btoa(rawResult);
            var prefaceString = "data:" + curFile.type + ";base64,";
            // byId('vidTgt').src = "data:video/mp4;base64," + b64Result;
            byId('vidTgt').src = prefaceString + b64Result;
            byId('vidTgt').play();
        }
    }
}
</script>
<style>
</style>
</head>
<body>
    <input type='file' id='fileInput1'/>
    <input type='file' id='fileInput2'/>
    <video id='vidTgt' src='vid/The Running Man.mp4'/>
</body>
</html>
To display your video you would need to get a URL for it, so that you can pass a reference to the video element.
There is URL.createObjectURL which should provide you with such an URL to refer to your data. See https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL for further explanations and mind the compatibility table.
Mozilla hosts an example at https://developer.mozilla.org/samples/domref/file-click-demo.html which displays local files. It should be possible to adapt this to setting the video element's src property instead. Depending on how you store your data, it should be possible to play your video this way.
I tried it in Firefox for data from a File object which left me with a URL blob:https://developer.mozilla.org/ed2e4f2f-57a6-4b06-8d56-d0a1a47a9ffd that I could use to play a video.
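Since the question says the binary is already assembled in memory, a minimal sketch of that adaptation could look like the following; the variable holding the decrypted bytes and the 'video/mp4' type are placeholders for whatever your pipeline actually produces.

// Sketch: wrap the in-memory binary (e.g. an ArrayBuffer or Uint8Array) in a
// Blob and hand an object URL to the <video> element.
function playBinaryVideo(videoBytes) {
  const blob = new Blob([videoBytes], { type: 'video/mp4' }); // assumed container type
  const url = URL.createObjectURL(blob);

  const video = document.querySelector('video');
  video.src = url;
  video.play();

  // Free the memory once the URL is no longer needed.
  video.onended = () => URL.revokeObjectURL(url);
}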
If I load the nextimg URL manually in the browser, it gives a new picture every time I reload. But this bit of code shows the same image every iteration of draw().
How can I force myimg not to be cached?
<html>
  <head>
    <script type="text/javascript">
      function draw(){
        var canvas = document.getElementById('canv');
        var ctx = canvas.getContext('2d');
        var rx;
        var ry;
        var i;
        myimg = new Image();
        myimg.src = 'http://ohm:8080/cgi-bin/nextimg'
        rx = Math.floor(Math.random()*100)*10
        ry = Math.floor(Math.random()*100)*10
        ctx.drawImage(myimg,rx,ry);
        window.setTimeout('draw()',0);
      }
    </script>
  </head>
  <body onload="draw();">
    <canvas id="canv" width="1024" height="1024"></canvas>
  </body>
</html>
The easiest way is to sling an ever-changing querystring onto the end:
var url = 'http://.../?' + escape(new Date())
Some people prefer using Math.random() for that instead of escape(new Date()). But the correct way is probably to alter the headers the web server sends to disallow caching.
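As an illustration of the header approach, whatever serves the image just has to mark the response as uncacheable. The sketch below uses a small Node http server as a stand-in for the cgi-bin/nextimg script in the question; the file path and port are placeholders.

// Sketch: serve each image with caching disabled so the browser always re-fetches it.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'image/jpeg',
    'Cache-Control': 'no-store, no-cache, must-revalidate',
    'Pragma': 'no-cache',
    'Expires': '0'
  });
  fs.createReadStream('/tmp/nextimg.jpg').pipe(res); // placeholder path
}).listen(8080);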
You can't stop it from caching the image altogether within Javascript. But you can toy with the src/address of the image to force it to be fetched anew:
[Image].src = 'image.png?' + (new Date()).getTime();
You can probably take any of the Ajax cache solutions and apply it here.
That actually sounds like a bug in the browser -- you could file it at http://bugs.webkit.org if it's in Safari or https://bugzilla.mozilla.org/ for Firefox. Why do I say potential browser bug? Because the browser realises it should not be caching on reload, yet it does give you a cached copy of the image when you request it programmatically.
That said, are you sure you're actually drawing anything? The canvas drawImage API will not wait for an image to load, and is spec'd not to draw if the image has not completely loaded when you try to use it.
A better practice is something like:
var myimg = new Image();
myimg.onload = function() {
  var rx = Math.floor(Math.random()*100)*10;
  var ry = Math.floor(Math.random()*100)*10;
  ctx.drawImage(myimg, rx, ry);
  window.setTimeout(draw, 0);
}
myimg.src = 'http://ohm:8080/cgi-bin/nextimg'
(You can also just pass draw as an argument to setTimeout rather than using a string, which will save reparsing and compiling the same string over and over again.)
There are actually two caches you need to bypass here: one is the regular HTTP cache, which you can avoid by using the correct HTTP headers on the image. But you also have to stop the browser from re-using an in-memory copy of the image; if it decides it can do that, it will never even get to the point of querying its cache, so HTTP headers won't help.
To prevent this, you can use either a changing querystring or a changing fragment identifier.
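The querystring variant is shown above; the fragment-identifier variant is the same idea with '#' instead of '?', for example:

// Same cache-busting idea with a changing fragment identifier.
// The fragment never reaches the server, so it only defeats the in-memory
// reuse; the HTTP cache still needs the headers mentioned above.
myimg.src = 'http://ohm:8080/cgi-bin/nextimg#' + Date.now();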
See my post here for more details.