I have written an OpenGL game and I want to allow remote playing of the game through a canvas element. Input is easy, but video is hard.
What I am doing right now is launching the game via node.js and in my rendering loop I am sending to stdout a base64 encoded stream of bitmap data representing the current frame. The base64 frame is sent via websocket to the client page, and rendered (painstakingly slowly) pixel by pixel. Obviously this can't stand.
I've been kicking around the idea of generating a video stream, which I could then easily render onto a canvas through a video tag (à la http://mrdoob.github.com/three.js/examples/materials_video.html).
The problem with this idea is that I don't know enough about codecs/streaming to determine at a high level whether it's actually possible. I'm not sure the codec is even the part I need to worry about; the real constraint is having content that changes dynamically and is only rendered a few frames ahead.
Other ideas I've had:
Trying to create an HTMLImageElement from the base64 frame (sketched below)
Attempting to optimize compression / redraw regions so that the pixel bandwidth is much lower (it seems unrealistic to reach the kind of performance I'd need for 20+ fps this way).
Then there's always the option of going Flash... but I'd really prefer to avoid it. I'm looking for opinions on technologies to pursue here. Ideas?
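For reference, a minimal sketch of the HTMLImageElement idea (it assumes the server encodes each frame as PNG before base64-encoding it and sends one frame per WebSocket message; the endpoint and canvas id are made up):

// Draw each incoming base64 frame via an HTMLImageElement: one decode
// and one blit per frame instead of a per-pixel loop.
const ctx = document.getElementById('remote-view').getContext('2d');
const ws = new WebSocket('ws://localhost:8080'); // hypothetical endpoint
ws.onmessage = (event) => {
  const img = new Image();
  img.onload = () => ctx.drawImage(img, 0, 0);
  img.src = 'data:image/png;base64,' + event.data;
};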
Try transforming RGB into the YCbCr color space and streaming the pixel values as:
Y1 Y2 Y3 Y4 Y5 .... Cb1 Cb2 Cb3 Cb4 Cb5 .... Cr1 Cr2 Cr3 Cr4 Cr5 ...
There would be many repeating patterns, so any compression algorithm will compress it better than the interleaved RGBRGBRGB sequence.
http://en.wikipedia.org/wiki/YCbCr
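A minimal sketch of that reordering (full-range BT.601 constants; assumes the frame arrives as a flat RGBA buffer):

// Convert an RGBA frame to planar Y, Cb, Cr so a general-purpose
// compressor sees three long runs of similar bytes instead of
// interleaved channels.
function toPlanarYCbCr(rgba, width, height) {
  const n = width * height;
  const out = new Uint8ClampedArray(n * 3); // [Y plane | Cb plane | Cr plane]
  for (let i = 0; i < n; i++) {
    const r = rgba[i * 4], g = rgba[i * 4 + 1], b = rgba[i * 4 + 2];
    out[i]         = 0.299 * r + 0.587 * g + 0.114 * b;                  // Y
    out[n + i]     = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b;        // Cb
    out[2 * n + i] = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b;        // Cr
  }
  return out;
}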
Why base64 encode the data? I think you can push raw bytes over a WebSocket.
If you've got a linear array of RGBA values in the right format you can dump those straight into an ImageData object for subsequent use with a single ctx.putImageData() call.
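Combining the two suggestions, a sketch of the client (assumes the server now sends one frame of raw RGBA bytes per binary message, sized to match the canvas; endpoint and canvas id are made up):

// Receive raw RGBA frames over a binary WebSocket and blit each one
// with a single putImageData() call: no base64, no per-pixel loop.
const canvas = document.getElementById('remote-view'); // hypothetical id
const ctx = canvas.getContext('2d');
const ws = new WebSocket('ws://localhost:8080'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';
ws.onmessage = (event) => {
  const pixels = new Uint8ClampedArray(event.data);
  ctx.putImageData(new ImageData(pixels, canvas.width, canvas.height), 0, 0);
};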
I'm getting one base64 string from an API response, and another by converting an image (which is in a test data file) to base64 using the Cypress readFile method.
When I use the command below, the assertion fails because there is a tracking-number difference, which is new with every call.
So I end up with two different base64 strings.
//This base64 is from API response
var base64FromAPI =
res.body.completedShipments[0].completedPackages[0].documents[0].image;
//Image is picked from Test Data file and converts to base64
cy.readFile(`cypress/e2e/Testdata/Canada Post 02StoreId.pdf`, "base64").should(
"eq",
base64FromAPI
);
This is because the tracking number on the label (image) generated by the API response is different every time.
Is there any way to compare base64 strings in Cypress or JavaScript while ignoring some percentage of difference?
Or is there another way to do this altogether?
Thanks in advance.
Essentially you can't do this at the base64 level. Differences in a raw bitstream like base64 are meaningless on their own; they only become apparent by rendering the image. What you actually need to do is fairly complex! I'm assuming it's not possible (or a good idea) in your use case to move away from having the server add the text to the image, for example by overlaying it with the DOM instead.
If that's the case, the only thing you can do is use visual regression testing, which lets you define a threshold for what percentage of similarity counts as a pass.
Since the base64 comes from the API, this would probably also mean having test code that injects an img tag with the base64 as its source, so the visual snapshot can take place.
This works at the level of image analysis rather than on the actual bitstream. Internally it will render and compare the images.
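A sketch of that setup with the cypress-image-snapshot plugin (the plugin must be installed and registered separately; the element id and the 5% threshold are illustrative, and it assumes the API's document is an image the browser can display):

// Inject the API's base64 as an <img>, then diff it against a stored
// baseline with a tolerance instead of comparing raw base64 strings.
cy.document().then((doc) => {
  const img = doc.createElement('img');
  img.id = 'label-under-test';
  img.src = `data:image/png;base64,${base64FromAPI}`;
  doc.body.appendChild(img);
});
cy.get('#label-under-test')
  .should('be.visible')
  .matchImageSnapshot('shipping-label', {
    failureThreshold: 0.05,          // accept up to 5% differing pixels
    failureThresholdType: 'percent',
  });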
Another way I can think of, though it's quite complex and I wouldn't pursue it unless the above doesn't work, is to:
Use image manipulation libraries to load the base64 into an actual rendered image in memory.
Crop away the superimposed text with those libraries, to reliably remove the areas of difference.
Base64-encode the result.
Compare that to a known stable base64 of the "rest" of the image.
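If you go that route, a Node-side sketch using the pngjs and pixelmatch libraries (file names and the 5% tolerance are illustrative, the crop step is omitted, and it assumes both labels are PNGs of identical dimensions):

// Compare two rendered labels pixel by pixel with a tolerance,
// rather than comparing their base64 strings.
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const actual = PNG.sync.read(Buffer.from(base64FromAPI, 'base64'));
const expected = PNG.sync.read(fs.readFileSync('expected-label.png'));
const { width, height } = expected;

const diff = new PNG({ width, height });
const differing = pixelmatch(actual.data, expected.data, diff.data,
                             width, height, { threshold: 0.1 });

// Pass if fewer than 5% of pixels differ (the tracking-number region).
console.log(differing / (width * height) < 0.05 ? 'PASS' : 'FAIL');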
I want to analyze the frequencies coming from the microphone input with a resolution of <1Hz in browser.
The normal Web Audio AnalyserNode has a maximum fftSize of 32768. This results in a resolution of ~1.4 Hz at normal sample rates (48 kHz).
Now I want to use jsfft or something similar to do the frequency transform, and collect 65536 audio samples, since that FFT size should reach a resolution of ~0.7 Hz. (Time resolution is not that important.)
Unfortunately the ScriptProcessorNode also only has a maximum bufferSize of 16384, so I want to combine 4 of its buffers into one Float32Array.
I thought there would be something like
copyChannelData(array, offset, length)
but there is only
getChannelData(channel)
So if I understand correctly, I would have to copy all the data into my bigger array before I can do the FFT.
Just to be sure I'm not missing anything: is there a way to retrieve the data directly into my bigger array?
No, you will need to copy the data. The ScriptProcessorNode approach is pretty inefficient anyway, and the copy is not the worst of your worries: you fundamentally need to copy that data either way.
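A sketch of that copy (audioCtx and runFFT are placeholders; assumes mono input and the maximum 16384-sample buffer, so four callbacks fill one 65536-sample window):

// Accumulate four 16384-sample blocks into one 65536-sample window,
// then hand the full window to the FFT.
const BLOCK = 16384, WINDOW = 65536;
const samples = new Float32Array(WINDOW);
let offset = 0;

const node = audioCtx.createScriptProcessor(BLOCK, 1, 1);
node.onaudioprocess = (e) => {
  samples.set(e.inputBuffer.getChannelData(0), offset); // bulk copy
  offset += BLOCK;
  if (offset === WINDOW) {
    runFFT(samples); // e.g. jsfft; time resolution suffers, as noted
    offset = 0;
  }
};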
I'm building a Windows RDP (Remote Desktop Protocol) client that runs in the web browser.
I want to stream ImageData (Uint8ClampedArray) to the browser live (every 100 ms), and
I got this working using server-side rendering (node-canvas).
But my current code performs very poorly, because CPU rendering can't keep up (there is simply too much of it), so I want to try GPU parallel computing with WebGL on the client. (Is that possible?)
My first version works like this (I'll skip describing the authentication procedure):
Server-side
step 1: hook the RDP image data (which arrives compressed with an RLE algorithm)
step 2: decompress the RLE-compressed image data into a Uint8ClampedArray
step 3: putImageData onto a node-canvas canvas (steps 3-5 are sketched after this list)
step 4: get the data URL and strip the 'data:image/png;base64,' prefix
step 5: decode the base64 into a buffer (identical to an image file's bytes) and hand it to Express
step 6: Express serves it at an image URL (like https://localhost/10-1.png#timestamp)
step 7: send the image URL to the client using socket.io
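A condensed sketch of server-side steps 3 through 5 with node-canvas (the 64x64 tile size matches the question; the function name is made up):

// Render one tile's Uint8ClampedArray into a PNG buffer on the server.
const { createCanvas, ImageData } = require('canvas'); // node-canvas
const canvas = createCanvas(64, 64);
const ctx = canvas.getContext('2d');

function tileToPng(pixels /* Uint8ClampedArray, 64*64*4 bytes */) {
  ctx.putImageData(new ImageData(pixels, 64, 64), 0, 0);
  return canvas.toBuffer('image/png'); // the bytes served as e.g. /10-1.png
}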
Client-side
step 1: when the site loads, create 64x64 image tags in a div (like Google Maps tiles)
step 2: receive the image URL and derive the tile coordinate by parsing the image name ('10-1.png' -> x:640, y:64)
step 3: change that image tag's src to the received image URL.
The current performance is bad (actually not so bad when the resolution is small).
Question
Is there any way to use Uint8ClampedArray image data as a texture in three.js?
Is it possible to decompress the RLE-compressed data on the GPU in three.js?
I don't believe there is a way to directly draw image data using three.js.
But as I see it you have other options:
First of all, I don't get why you are not just sending JPG or PNG data via WebSockets. Then you could actually use the PNG and draw it as a sprite in three.js.
That said, I don't think the bottleneck is the actual drawing of the data to the canvas, and even if it were, WebGL won't help you with that. I just tried it, WebGL versus plain putImageData(): for an HD image (1920x1080), over 1000 draws it took 14 ms on average with putImageData() and 70 ms with WebGL. Now, when you need to loop through the pixels for image processing, like edge detection, it's a whole different story; there WebGL will come out on top for sure. Here is why: WebGL uses the GPU, so to do something with the image you first have to upload all the data to the GPU, which is rather slow, but processing the data there is quite fast. Much faster than looping through the pixels of the image in JavaScript.
So in conclusion, I would say your best bet is to send PNG images via WebSockets using ArrayBuffers, and on the client side draw them to a canvas element (sketched below).
Here is a link to the webgl stuff that explains how you could use it for image processing: WebGl Fundamentals: Image Processing
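A sketch of that recommendation (hypothetical endpoint and canvas id; assumes the server pushes one PNG per binary message):

// Receive PNG bytes over a binary WebSocket and draw them to a canvas.
const ctx = document.getElementById('screen').getContext('2d');
const ws = new WebSocket('wss://localhost/rdp');
ws.binaryType = 'arraybuffer';
ws.onmessage = async (event) => {
  const blob = new Blob([event.data], { type: 'image/png' });
  const bitmap = await createImageBitmap(blob); // browser-native PNG decode
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close();
};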
I need to alter the palette data in PNG images using JavaScript. I would like to do this without using WebGL or drawing to a canvas, since that can be... inefficient, and cause slowdown. However, I'm not experienced with working with data compression, and I know PNGs use compression.
I have three questions. One: do I need to fully decompress the image, or is it possible to decompress only up to the PLTE chunk(s) and then re-compress the image?
Two: can JavaScript even work with raw binary data, or will I need to get really creative with base64 and string manipulation?
Three: since I'm working with palette rather than truecolor images, and so don't need to handle IDAT chunks... are the earlier chunks actually compressed? Will this even require me to decompress the image?
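For orientation: PNG chunk headers are never compressed, and PLTE data is raw RGB triples; only IDAT (and a few ancillary chunks) carry zlib-compressed data, so the palette can be located and read without any decompression. A sketch of that with a DataView (assumes a valid indexed-color PNG in an ArrayBuffer):

// Walk the chunk list to find PLTE; each chunk is
// length(4) + type(4) + data + CRC(4), after an 8-byte signature.
function findPalette(buf) {
  const view = new DataView(buf);
  let pos = 8; // skip the PNG signature
  while (pos < view.byteLength) {
    const length = view.getUint32(pos);
    const type = String.fromCharCode(
      view.getUint8(pos + 4), view.getUint8(pos + 5),
      view.getUint8(pos + 6), view.getUint8(pos + 7));
    if (type === 'PLTE') {
      // Raw RGB triples; editable in place, but the chunk's CRC32
      // (computed over type + data) must then be recomputed.
      return new Uint8Array(buf, pos + 8, length);
    }
    pos += 12 + length;
  }
  return null;
}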
I need to download a BMP with JavaScript and render it to the screen, in Internet Explorer. First off, yes, I know this is insane, and I'm not going to get into why; let's just accept for a moment that img src is not working because of security constraints, but an AJAX request with the proper authentication in the POST will pull back the image. This example bypasses all the security for the sake of simplicity and just proves we can render something.
The best idea I could come up with was to fetch the stream via AJAX, decode the bitmap, and then render it with canvas. Internet Explorer obviously doesn't support canvas, but luckily Google provides a canvas emulation called excanvas (implemented on top of VML) that I can use for that.
My code (drawing code appears to work, bmp decoding not so much)
http://gist.github.com/614328
Future support for other image formats besides BMP is plausible, and because of how the canvas works it's easiest to draw pixels in RGBA. Texture2D is essentially a wrapper class for an RGBA byte array, plus the drawing code; ByteStream makes dealing with the byte array a bit easier on the eyes, and BitmapDecoder contains the method that translates the BGR format into an RGBA Texture2D for drawing.
Is it possible the bytes are getting mis-translated along the way, or is there something the matter with my decoding logic?
FYI, I got the file spec from wikipedia:
http://en.wikipedia.org/wiki/BMP_file_format#Bitmap_Information_.28DIB_header.29
Any idea what's going on in the decoding logic or drawing logic that's causing my BMP to draw incorrectly?
XMLHttpRequest (aka AJAX) was primarily designed for text content, so it's possible that binary data (especially null characters) isn't translated correctly. The first check would be to compare the size of the retrieved data with the actual file size.
At least on Firefox, there seems to be a way to specifically retrieve binary data, as described here: Handling binary data.
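That page's technique boils down to forcing a byte-preserving charset and masking each character code; a sketch for completeness (for IE itself, the VBScript responseBody route in the accepted fix below is needed instead):

// Firefox-era trick for binary XHR: request a byte-preserving charset,
// then mask off the padding to recover the raw bytes.
const xhr = new XMLHttpRequest();
xhr.open('GET', 'beta.bmp', true);
xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onload = () => {
  const text = xhr.responseText;
  const bytes = new Uint8Array(text.length);
  for (let i = 0; i < text.length; i++) {
    bytes[i] = text.charCodeAt(i) & 0xff; // keep only the low byte
  }
  // bytes now matches the file on disk byte for byte
};
xhr.send();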
Here's a much easier (and vastly more performant) approach: base64 encode the BMP data (you can do this either on the server or the client) and then embed it in the page using a data URI:
<script type="text/javascript">
function fetchBmp() {
$.get('http://localhost:3168/experimental/imgrender/beta.bmp', function (data) {
var base64Data = $.base64.encode(data); // *
$('#my-image').attr('src', 'data:image/bmp;base64,' + base64Data);
});
}
// * Lots of plugins for this, e.g. http://github.com/carlo/jquery-base64
</script>
<img id="my-image" />
All modern browsers support data URIs (including IE8 and up; workarounds exist for IE7), as well as the BMP format.
As casablanca points out, there may be issues with loading binary data via Ajax, so you may have to google around for workarounds.
The fix was a combination of two things:
a bit of VBScript to read the raw bytes of responseBody
decoding the byte data properly: each pixel is not padded as the Wikipedia article suggests; it's actually each scanline that is padded to DWORD size.
Working code:
http://gist.github.com/616240
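For reference, that scanline padding rule in code:

// BMP pads each scanline (not each pixel) to a 4-byte DWORD boundary.
function bmpStride(width, bitsPerPixel) {
  return Math.floor((bitsPerPixel * width + 31) / 32) * 4;
}
// e.g. a 3-pixel-wide 24bpp image has 9 data bytes but a 12-byte stride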