I need to multiplex a couple of streams between the browser and a Node.js server over a single WebSocket connection. One stream will be used for sending binary data from the browser to the server, and the other for a simple RPC.
I stumbled across BinaryJS, which does exactly what I want. However, it has a specific problem with binary data and doesn't appear to be regularly maintained. Is there an alternative? My requirements:
Binary-compatible (no JSON serialization of binary data... that takes a ton of bandwidth)
Supports multiple, bidirectional streams
I actually don't care so much about browser support. My application relies on other modern APIs, so I'm only targeting current versions of Chrome and Firefox. Any ideas?
Brad, I fixed the typed array issue you were experiencing with BinaryJS (in version 0.2.0). But you're right, I haven't had much time to maintain it, so you may run into other issues.
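If no maintained library fits, the core of what you need can be approximated with a tiny framing layer of your own. A minimal sketch (not BinaryJS; the channel-number convention here is my own, and it assumes `binaryType = 'arraybuffer'` on both ends):

```javascript
// Multiplex two logical streams over one WebSocket by prefixing each
// binary frame with a 1-byte channel ID. By convention here, channel 0
// carries RPC messages (UTF-8 JSON) and channel 1 carries raw binary.

function frame(channelId, payload) {
  // payload: Uint8Array → new Uint8Array with the channel ID prepended
  const out = new Uint8Array(payload.length + 1);
  out[0] = channelId;
  out.set(payload, 1);
  return out;
}

function unframe(data) {
  // data: Uint8Array → { channelId, payload }
  return { channelId: data[0], payload: data.subarray(1) };
}

// Browser side (sketch):
//   ws.binaryType = 'arraybuffer';
//   ws.send(frame(1, chunk));  // binary stream, no JSON overhead
//   ws.send(frame(0, new TextEncoder().encode(JSON.stringify(rpcMsg))));
```

One byte of overhead per message keeps the binary path binary, which satisfies the no-JSON-serialization requirement.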
I have very large (50-500 GB) files for download, and the single-threaded download offered by the browser engine (Edge, Chrome, or Firefox) is a painfully slow user experience. I had hoped to speed this up by using multithreading to download chunks of the file, but I keep running into browser sandbox issues.
So far the best approach I've found would be to download and stuff all the chunks into localStorage and then download that as a blob, but I'm concerned about the soft limits on storing that much data locally (as well as the performance of that approach when it comes to stitching all the data together).
Ideally, someone has already solved this (and my search skills weren't up to the task of finding it). The only things I have found have been server-side solutions (which have straightforward file system access). Alternatively, I'd like another approach less likely to trip browser security or limit dialogs and more likely to provide the performance my users are seeking.
Many thanks!
One cannot. Browsers intentionally limit the number of connections to a website. To get around this limitation with today’s browsers requires a plugin or other means to escape the browser sandbox.
Worse, because of a lack of direct file system access, the data from multiple downloads has to be cached and then reassembled into the final file, instead of having multiple writers to the same file (and letting the OS cache handle optimization).
TL;DR: Although it is possible to have multiple download threads, the maximum is low (4), and the data has to be handled repeatedly. Use a plugin or a dedicated download program such as curl or an FTP client.
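To illustrate the chunk-and-reassemble approach described above, here is a minimal browser-side sketch. It assumes the server honors HTTP Range requests; the URL and part count are placeholders:

```javascript
// Split a file of `totalBytes` into `parts` byte ranges suitable for
// parallel HTTP Range requests, then reassemble the responses in order.

function byteRanges(totalBytes, parts) {
  const chunk = Math.ceil(totalBytes / parts);
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunk) {
    ranges.push([start, Math.min(start + chunk, totalBytes) - 1]); // inclusive
  }
  return ranges;
}

// Browser side (sketch; assumes the server supports Range requests):
//   const buffers = await Promise.all(byteRanges(size, 4).map(([s, e]) =>
//     fetch(url, { headers: { Range: `bytes=${s}-${e}` } })
//       .then(r => r.arrayBuffer())));
//   const blob = new Blob(buffers); // stitched back together, in order
```

Note that for 50-500 GB files the buffers still all live in memory before the Blob is created, which is exactly the caching problem the answer describes; this sketch only shows the range arithmetic and ordering.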
I have a project which needs live updates on certain parts of the website; this is done with WebSockets. On other parts of the site I use POST/GET. It just occurred to me: is there any reason not to use only WebSockets? What would I gain/lose by dropping POST/GET?
Browser support for WebSockets is good in current versions, but the answer very much depends on your project.
You'll have to manage the WebSocket yourself in case it closes unexpectedly. GET/POST is easier in that regard: you can just poll on an interval, and if one request goes missing, there's always the next one.
If it's not a critical feature, if your audience is skewed towards Chrome/Firefox/Safari, and if your WebSocket implementation is already solid, I'd drop GET/POST.
Personally, I think the biggest disadvantage would be browser support. WebSockets have great support in Chrome and Firefox, but only recently came to Internet Explorer. By relying completely on WebSockets without a fallback, you'd be cutting off some older/mobile browsers.
As kidshenlong already mentioned, the biggest problem would be browser support. You should also consider, though, that each open WebSocket consumes resources (mostly memory) on your server for every currently connected client.
I would like to see if it's possible to have direct access to Opus using getUserMedia or anything similar from the latest browsers.
I've done a lot of research on this, but with no good results.
I'm aware that either Opus or Speex are actually used in webkitSpeechRecognition API. I would like to do speech recognition but using my own server rather than Google's.
There have been a lot of suggestions to use Emscripten, but nobody had actually done it, so I ported the opus-tools encoder to JavaScript using Emscripten. Depending on what you have in mind, there are now the following options:
Encoding FLAC, WAVE, AIFF, RAW files || demo || Web Worker size: 1.3 MiB
Encoding raw audio for immediate processing or sending without a container || demo || Web Worker size: 0.6 MiB
Encoding to Ogg-Opus and WAV from getUserMedia stream
When using Mozilla Firefox, it's possible to use a MediaRecorder, which, together with AudioContext.decodeAudioData(), also allows converting arbitrary sound files into the Opus format on supported platforms.
We're using Emscripten for encoding and decoding with gsm610 and getUserMedia, and it works incredibly well, even on mobile devices. These days JavaScript gives almost-native performance, so Emscripten is viable for compiling codecs. The only issue is the potentially very large .js files, so you'll want to compile only the parts you are using.
Unfortunately, it isn't currently possible to access browser codecs directly from JavaScript for encoding. The only way to do it would be to utilize WebRTC and set up recording on the server. I've tried this by compiling libjingle with some other code out of Chromium to get it to run on a Node.js server... it's almost impossible.
The only thing you can do currently is send raw PCM data to your server. This takes up quite a bit of bandwidth, but you can minimize that by converting the float32 samples down to 16 bit (or 8 bit if your speech recognition can handle it).
Hopefully the media recorder API will show up soon so we can use browser codecs.
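To make the float32-to-16-bit conversion mentioned above concrete, here is a minimal sketch (it assumes samples in the usual Web Audio [-1, 1] range, e.g. from a ScriptProcessorNode buffer):

```javascript
// Downconvert 32-bit float samples to 16-bit signed PCM, halving the
// bandwidth of raw audio sent to the server.
function floatTo16BitPCM(float32Samples) {
  const out = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i])); // clamp to [-1, 1]
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;               // scale to int16
  }
  return out;
}
```

Send `out.buffer` over the wire; the same idea scaled to 8 bits halves the bandwidth again if your recognizer can cope with it.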
This is not a complete solution; @Brad's answer is actually the correct one at this time.
One way is to compile Opus to JavaScript with Emscripten and hope that the client's PC can handle encoding in JavaScript. Another alternative is to use speex.js.
I have an embedded system running the lwIP stack (v1.2), and I need to be able to stream a data array into JavaScript on the client side. I'm looking at using Chrome and some HTML5 features, and some people have suggested using WebSockets. Does anyone know where I need to start to use these with the lwIP framework? Any help at all would be much appreciated!
WebSockets is a relatively simple protocol, so you could take the protocol spec and write your own server. Since lwIP offers a BSD sockets API, you could also search for existing open-source C servers. (A quick search turns up this candidate, for instance. BTW, note that this code is licensed under the GPL; you should only use it if you understand the requirements that using GPL'd code puts on your project.)
Note that while Chrome's support for WebSockets is good, support is patchier if you later decide to target other browsers (and particularly if you want to allow users with older browsers). See here for details. If support for a variety of browsers matters to you, you'll probably have to include code on the client and server to fall back to long polling when the WebSocket handshake fails.
I have a camera feed coming into a Linux machine through a V4L2 interface as the source for a GStreamer pipeline. I'm building an interface to control the camera, and I would like to do it in HTML/JavaScript, communicating with a local server. The problem is getting a feed from the gst pipeline into the browser. The options for doing so seem to be:
A loopback from gst to a V4L2 device, which is displayed using Flash's webcam support
Outputting an MJPEG stream which is displayed in the browser
Outputting an RTSP stream which is displayed by Flash
Writing a browser plugin
Overlaying a native X application over the browser
Has anyone had experience solving this problem before? The most important requirement is that the feed be as close to real time as possible. I would like to avoid flash if possible, though it may not be. Any help would be greatly appreciated.
You've already thought about multiple solutions. You could also stream Ogg/Vorbis/Theora or VP8 to an Icecast server; see the OLPC GStreamer wiki for examples.
Since you are looking for a python solution as well (according to your tags), have you considered using Flumotion? It's a streaming server written on top of GStreamer with Twisted, and you could integrate it with your own solution. It can stream over HTTP, so you don't need an icecast server.
Depending on the codecs, there are various tweaks to allow low latency. Typically, with Flumotion running locally, you could get a few seconds of latency, and I believe that can be lowered (x264enc can be tweaked to reach under a second of latency, IIRC). Typically, you have to reduce the keyframe distance and limit motion-vector estimation to a few nearby frames; that will probably reduce quality and raise the bitrate, though.
What browsers are you targeting? If you can ignore Internet Explorer, you should be able to stream Ogg/Theora and/or WebM video directly to the browser using the <video> tag. If you need to support IE as well, though, you're probably reduced to a Flash applet. I just set up a web stream using Flumotion and the free version of Flowplayer (http://flowplayer.org/) and it's working very well. Flowplayer has a lot of advanced functionality that I have barely begun to explore.