JavaScript real-time voice streaming and processing in a Django backend - javascript

Hi, I'm currently working on a project where I want to stream a user's voice in real time using JS - from the user's perspective, think of Google's speech recognition API demo.
So far I've tried a few jQuery libraries, but they didn't work as I expected: either they weren't compatible with the browser, they couldn't detect the microphone, or sending to the server failed.
Recently I was exploring WebRTC, and it seems it could do the job, but I'm not sure whether it's possible to stream from the web browser to a Django backend.
I don't want to use either Node.js or Java applets.
I would appreciate any help with the JS as well as with receiving the voice stream in Django. Thank you!

There are two separate parts here to consider: signaling and media.
The signaling part (as well as the application logic) can be handled by django. The media part can't.
In order to handle the media part you will need to use a media server that receives and processes the data - the low-level media processing parts are usually implemented in C/C++. See http://kurento.org for a media server framework that may fit your needs (though it isn't written in Python).
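To make the split concrete, here is a minimal sketch of the browser side, assuming Django Channels exposes a WebSocket at a hypothetical /ws/signaling/ path. Django only relays the SDP/ICE messages as JSON; the RTCPeerConnection carries the actual media to the media server.

```javascript
// Pure helper: frame a signaling message as JSON.
function makeSignal(type, payload) {
  return JSON.stringify({ type: type, payload: payload });
}

// Browser-only wiring, guarded so the helper above stays reusable.
// The /ws/signaling/ path is an assumption, not a Channels default.
if (typeof window !== "undefined" && "WebSocket" in window) {
  const ws = new WebSocket("wss://" + location.host + "/ws/signaling/");
  const pc = new RTCPeerConnection();

  // ICE candidates go to Django, which forwards them to the media server.
  pc.onicecandidate = (e) => {
    if (e.candidate) ws.send(makeSignal("ice", e.candidate));
  };

  navigator.mediaDevices.getUserMedia({ audio: true }).then(async (stream) => {
    stream.getTracks().forEach((t) => pc.addTrack(t, stream));
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    ws.send(makeSignal("offer", offer)); // Django relays this, nothing more
  });
}
```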

Related

read ICY meta data reactJS

Hi, I am wondering how in JavaScript or ReactJS I would read data from a streaming station.
I have googled but sadly had no luck, and I was wondering if anyone knows of a script that can read Icecast ICY metadata.
Please note that web browsers don't support ICY metadata, so you'd have to implement quite a few things manually and consume the whole stream just for the metadata. I do NOT recommend this.
Since you mention Icecast, the recommended way to get metadata is by querying the JSON endpoint /status-json.xsl. It's documented.
It sounds like you are custom building for a certain server, so this should be a good approach. Note that you must be running a recent Icecast version (at the very least 2.4.1, but for security reasons better latest).
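A minimal sketch of polling that endpoint, assuming the page and the Icecast server share an origin (or CORS is enabled). One quirk worth handling: with a single mountpoint, icestats.source is an object, while with several mountpoints it is an array.

```javascript
// Pure helper: normalize icestats.source and pull out the title fields.
function extractTitles(status) {
  const src = status.icestats && status.icestats.source;
  if (!src) return [];
  const sources = Array.isArray(src) ? src : [src];
  return sources.map((s) => s.title || "");
}

// Browser usage (guarded; poll this on an interval in a real page):
if (typeof fetch !== "undefined" && typeof window !== "undefined") {
  fetch("/status-json.xsl")
    .then((r) => r.json())
    .then((status) => console.log(extractTitles(status)));
}
```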
If you are wondering about accessing random Icecast servers where you have no control over, it becomes complicated: https://stackoverflow.com/a/57353140/2648865
If you want to play a stream and then display it's ICY metadata, look at miknik's answer. (It applies to legacy ICY streams, won't work with WebM or Ogg encapsulated Opus, Vorbis, etc)
I wrote a script that does exactly this.
It implements a service worker and uses the Fetch API and the Readable Streams API to intercept network requests from your page to your streaming server, adds the necessary header to the request to initiate in-stream metadata from your streaming server, and then extracts the metadata from the response while playing the MP3 via the audio element on your page.
Due to restrictions on service workers and the Fetch API my script will only work if your site is served over SSL and your streaming server and website are on the same domain.
You can find the code on GitHub, and a very basic demo of it in action here (open the console window to view the data being passed from the service worker).
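The in-stream format such a script has to parse works like this: after requesting the stream with an Icy-MetaData: 1 header, the server inserts, every icy-metaint audio bytes, one length byte (value × 16) followed by metadata text such as StreamTitle='Artist - Song'; padded with NUL bytes. A sketch of that parsing step (the function name is my own):

```javascript
// Parse one audio block plus its trailing metadata at a known metaint
// boundary. bytes must start exactly at an audio-block boundary.
function parseIcyChunk(bytes, metaint) {
  const audio = bytes.subarray(0, metaint);       // raw audio to play
  const metaLen = bytes[metaint] * 16;            // length byte is in 16-byte units
  const metaBytes = bytes.subarray(metaint + 1, metaint + 1 + metaLen);
  const text = new TextDecoder("utf-8").decode(metaBytes).replace(/\0+$/, "");
  const m = text.match(/StreamTitle='([^']*)';/);
  return { audio, title: m ? m[1] : null };       // title is null if no metadata
}
```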
I don't know much about streams, but I found some resources while googling:
https://www.npmjs.com/package/icy-metadata
https://living-sun.com/es/audio/85978-how-do-i-obtain-shoutcast-ldquonow-playingrdquo-metadata-from-the-stream-audio-stream-metadata-shoutcast-internet-radio.html
Also this:
Developing the client for the icecast server
It's for PHP, but maybe you can translate it to JS.

How does Youtube/Facebook live stream from web browser works

I'm looking at ways to implement a video encoder in the web browser. YouTube and Facebook already allow you to go live directly from the browser, and I'm wondering how they do that.
There are a couple of solutions I've researched:
Using WebSockets: use the browser to encode the video (with the MediaRecorder API) and push the encoded video to the server to be broadcast.
Using WebRTC: the browser acts as a WebRTC peer, and a server at the other end receives the stream and re-broadcasts (transcodes) it by other means (RTMP, HLS).
Is there any other tech those guys (YouTube, Facebook) are using to implement this? Or do they also use one of these?
Thanks
WebRTCHacks has a "how does YouTube use WebRTC" post here which examines some of the technical details of their implementation.
In addition, one of their engineers gave a talk at WebRTC Boston describing the system, which is available on YouTube.
Correct, you've hit on two ways to do this. (Note that for the MediaRecorder method, you can use any other method to get the data to the server. Web Sockets is one way... so is a regular HTTP PUT of segments. Or, you could even use a data channel of a WebRTC connection to the server.)
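The MediaRecorder path can be sketched in a few lines. The ingest endpoint and the one-second timeslice below are assumptions; the server must understand raw WebM chunks (for example by piping them into FFmpeg).

```javascript
// Pure helper: simple backpressure so a slow uplink drops chunks
// instead of buffering them without bound.
function shouldSend(bufferedAmount, limitBytes) {
  return bufferedAmount < limitBytes;
}

// Browser wiring (guarded). The wss://example.com/ingest URL is made up.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then((stream) => {
    const ws = new WebSocket("wss://example.com/ingest");
    ws.binaryType = "arraybuffer";
    const rec = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8,opus" });
    rec.ondataavailable = (e) => {
      if (ws.readyState === WebSocket.OPEN && shouldSend(ws.bufferedAmount, 1 << 20)) {
        ws.send(e.data); // one WebM chunk per timeslice
      }
    };
    rec.start(1000); // emit a chunk every second
  });
}
```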
Pretty much everyone uses the WebRTC method, as there are some nice built-in benefits:
Low latency (at the cost of some quality)
Dynamic bitrate
Well-optimized on the client
Able to automatically scale output if there are not enough system resources to continue encoding at a higher frame size
The downsides of the WebRTC method:
Ridiculously complicated stack to maintain server-side.
Lower quality (due to emphasis on low latency, BUT you can tweak this by fiddling with the SDP yourself)
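One concrete example of tweaking the SDP is capping (or raising) the video bitrate by inserting a b=AS line into the video media section before calling setLocalDescription. This is only a sketch of the string edit; the value is in kbps.

```javascript
// Insert (or replace) a b=AS bandwidth line in the m=video section.
function setVideoBitrate(sdp, kbps) {
  const lines = sdp.split("\r\n");
  const out = [];
  let inVideo = false;
  for (const line of lines) {
    if (line.startsWith("m=")) inVideo = line.startsWith("m=video");
    // drop any existing bandwidth line in the video section
    if (inVideo && line.startsWith("b=AS:")) continue;
    out.push(line);
    // place the bandwidth line right after the video section's c= line
    if (inVideo && line.startsWith("c=")) out.push("b=AS:" + kbps);
  }
  return out.join("\r\n");
}
```

Usage would be `pc.setLocalDescription({ type: offer.type, sdp: setVideoBitrate(offer.sdp, 2500) })`.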
If you go the WebRTC route, consider gstreamer. If you want to go the Web Socket route, I've written a proxy to receive the data and send it off to FFmpeg to be copied over to RTMP. You can find it here: https://github.com/fbsamples/Canvas-Streaming-Example

How can I send an audio stream from webpage to a C++ server?

I'm working on a project where I need to send an audio stream from a webpage (through JavaScript) to a server written in C++. Is this possible? How can I do this? I was thinking of using WebRTC and a WebRTC library for C++, but I don't really know how to achieve this.
In general I need some kind of web server in C++ that lets me send/receive audio streams and JSON and works with multiple web clients.
I have worked with Socket.io, and I once coded a web server in Java EE 7; with those I was able to send/receive JSON from the webpage, but I don't really know if I can send an audio stream via WebSocket or JSON.
The question (or implementation in answer to the question) really consists of two parts, which are:
How to send audio stream from browser in Javascript
How to receive audio stream on server in C/C++
This is because sending data over the network only loosely couples the client and the server, as long as they use the same protocol. You could write a server in C++ and then write two different clients that communicate with it: one in JavaScript, and another as a desktop app written in Java.
Javascript on Client Side
For the client side, sending audio from the browser in JavaScript should use the standard WebRTC APIs; the WebRTC site has some useful information on this, including streaming examples here (https://webrtc.github.io/samples/)
Some of the links which might be of interest on that page:
Audio-only getUserMedia() output to local audio element
Stream from a video element to a video element
There are already some StackOverflow answers about WebRTC and audio in JavaScript; here are a couple (these, and JavaScript libraries, are more plentiful than C++ resources on the topic):
Sending video and audio stream to server
Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia
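If you would rather avoid a full WebRTC stack and ship audio over a plain WebSocket to the C++ server, a common building block is converting the Float32 samples the Web Audio API produces into 16-bit PCM before sending. A sketch of that conversion (the capture wiring around it is browser-only and omitted):

```javascript
// Convert Web Audio Float32 samples ([-1, 1]) to 16-bit little-endian PCM.
function floatTo16BitPCM(float32) {
  const out = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i])); // clamp out-of-range samples
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Browser usage (sketch): an AudioWorklet or ScriptProcessorNode hands you
// Float32Array frames; send the converted buffer over the WebSocket:
//   ws.send(floatTo16BitPCM(frame).buffer);
```

The C++ server then receives fixed-size PCM frames it can feed straight into its audio pipeline.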
For the C++ Server:
The WebRTC site has a link to the Native API for the libraries here (https://webrtc.org/native-code/native-apis/) and an excellent simple example of a peer connection WebRTC server using them is here (https://webrtc.googlesource.com/src/+/master/examples/peerconnection). It also has an implementation of a C++ client there, which may help in testing the server to get it working first, or see the general principles.

peer to peer communication between mobile app and pc browser

I am working on a project where I need my mobile application to talk to my web browser on a PC, with both devices connected over WiFi. The app would send data to be received by the computer's browser, followed by some client-side code execution. The browser may then send some feedback.
My initial approach is to have the app talk to an endpoint which in turn talks to the client side of the browser (JavaScript).
What could be the best approach to do this ?
Update
I am not sure if Socket.io is a possible solution, since it requires a server to be hosted. Is it possible to solve this using sockets?
You've now edited your question to mention P2P. That's quite hard to achieve phone-to-browser (by hard I mean 6 to 12 person-months of work, and/or plain not possible). However, in most situations you can resolve the problem almost instantly (one line of code on each platform) by using a service like PubNub. Much as nobody builds backends anymore and everything is just done with parse.com or Game Center, networking like you mention is now just done with PubNub (or any competitor).
This is an extremely common use-case problem, and most people just use PubNub, as mentioned above, or one of its competitors.
These days it couldn't be easier - just use pubnub.com.
It's the world's biggest data-messaging service for a reason!
There's essentially no other realistic approach, and it's simple - a few lines of code.
So the short answer is: real peer-to-peer (P2P) communication is currently not possible with all browsers. Instead, you have the following options:
App + Server with a WebUI (maybe)
App + Chrome App (Chrome Apps can run a web server, see http://www.devworx.in/news/misc/chrome-apps-can-now-run-a-web-server-135711.html)
App + WebApp with Plugin (Flash, Silverlight or Java)
I personally would prefer solution 1.
You need a server. If you consider this problem strictly from the typical firewall point of view, a PC or a mobile device are going to ignore connections unless they initiate the connection themselves. So neither the PC nor the mobile device can start a connection with the other.
My understanding is that web browsers do not support standard sockets within JavaScript. You can use the analogous WebSocket, but sockets and WebSockets are not directly compatible.
You can setup a simple server on the PC, and have this server relay messages between the mobile device and the PC browser. Both the mobile device and the PC browser connect to the server. This is basically what an external service will do for you.
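The relay idea can be sketched as a tiny in-process broker that such a server would expose over WebSockets. The transport is omitted here; in practice each send callback would wrap a WebSocket connection, and the ids and API below are made up for illustration.

```javascript
// Minimal message broker: clients register a send callback under an id,
// and messages are forwarded by destination id.
class Relay {
  constructor() {
    this.clients = new Map();
  }
  register(id, send) {
    this.clients.set(id, send); // send: (message) => void
  }
  unregister(id) {
    this.clients.delete(id);
  }
  relay(toId, message) {
    const send = this.clients.get(toId);
    if (!send) return false; // destination not connected
    send(message);
    return true;
  }
}
```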
PeerJS is what you're looking for:
http://peerjs.com

Is Node.Js compatible with iPhone Objective C development for apps?

I am a designer interested in making a shift to iPhone App Development. I am looking to spend the weeks after my exams studying how backend and frontend iPhone development works.
If I want to ultimately build an app whose frontend is done in the Objective-C/Xcode environment, will I be able to use Node.js, for example, to manage user data and databases on the backend?
As I understand it (please correct me if I am wrong) to do an app which connects to servers for data requests you need a backend development. I have been reading about node.js and it seems very fast and its javascript which I like.
What would be the easiest combo to get into? I'm really not technical and want to limit the pain from compatibility issues.
Will Objective C and Node.js be compatible?
Do you have any outside recommendations with experience you like to share?
Thank you
At a high level, any web server that has the ability to accept http requests and respond with some content (JSON, XML, HTML, string...) will work, you just have to use the correct methods for submitting the request and parsing the response.
Personally, I've been using node.js for an API that I created and host it on AWS. It's lightning fast and I've had no issues. As with most programming languages today, objective-c has libraries that allow you to submit http requests and parse JSON responses.
Node.js supplies a web interface just like any other backend stack you might choose, so it is suitable for iPhone backend development.
You can also skip the backend development, hosting and DNS boilerplate entirely if you wish, by using backend-as-a-service solutions like Parse or StackMob
