I would like to use my computer's audio as a source for an online music streaming project I'm working on. How would I go about, via JavaScript, accessing my computer's audio output/speakers?
You could set up your computer to route its audio output to its microphone input and then use the getUserMedia API to capture the audio from your "microphone".
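A minimal sketch of that capture step, assuming the OS-level routing (e.g. a "Stereo Mix" loopback input) is already in place. The function names here are illustrative, not part of any library:

```javascript
// Build a getUserMedia constraints object for a specific input device.
// Without a deviceId the browser picks the default input.
function buildAudioConstraints(deviceId) {
  return deviceId
    ? { audio: { deviceId: { exact: deviceId } } }
    : { audio: true };
}

// Capture audio from the chosen input and feed it into the Web Audio API.
async function captureAudio(deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia(
    buildAudioConstraints(deviceId)
  );
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  source.connect(ctx.destination); // monitor locally; swap in your own pipeline
  return stream;
}
```

Note this only captures whatever the selected input device hears; it works for "all system audio" only if the sound drivers expose such a loopback device.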
Related
How can I allow multiple users to stream audio coming from their machines over the network for multiple listeners? I mean taking all the sound from their sound card to the network.
I know this can be accomplished using Icecast, Edcast, etc., but that only works when the user installs these programs on their device, makes configurations, and does a lot of work.
What I need is to do this without Icecast, with just JavaScript. If we use WebRTC it will be more like a voice call, I guess,
but I need that audio streamed from device A to device B as if it were already on device B. I am talking about playing music on the device and capturing sound from the mic at the same time. Is this possible with JavaScript? And can multiple users do this stream at the same time?
taking all the sound from their soundcard to the network [...] without icecast just javascript
There is no Web API for capturing all sound the computer plays that I know of.
You could maybe make something like this with WebRTC or other web APIs if the user's sound drivers expose a recording device (like the "Stereo Mix" recording device of olden days) that the user selects for your web app to use.
(As an aside, Icecast itself doesn't care where the audio comes from, it just accepts and redistributes OGG or MP3 streams. It's up to the casting client to figure out where the audio comes from.)
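If you want to check whether such a recording device exists, a sketch using navigator.mediaDevices.enumerateDevices() could look like this. The label substrings are guesses (labels vary by driver), and device labels are only exposed after the user grants microphone permission:

```javascript
// Substrings that commonly appear in loopback-style device labels.
const LOOPBACK_HINTS = ["stereo mix", "what u hear", "loopback"];

// Pure helper: find an audio input whose label suggests a loopback device.
function findLoopbackDevice(devices) {
  return (
    devices.find(
      (d) =>
        d.kind === "audioinput" &&
        LOOPBACK_HINTS.some((hint) =>
          (d.label || "").toLowerCase().includes(hint)
        )
    ) || null
  );
}

// Browser-side wrapper: returns the deviceId to pass to getUserMedia, or null.
async function getLoopbackDeviceId() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const dev = findLoopbackDevice(devices);
  return dev ? dev.deviceId : null;
}
```

Even when found, the user still has to select/permit that device; there is no way to force system-audio capture from a web page.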
I am learning JavaScript by developing some cool stuff, like a video conferencing app. I don't have much understanding of this WebRTC technology, so I am wondering if it is also possible to save a video call that has taken place in a WebRTC-based video conferencing app to the server?
Yes, it is possible. You can record each stream to blobs and push them to your server over a WebSocket. You can then convert the blobs to a WebM file.
Demo: https://webrtc.github.io/samples/src/content/getusermedia/record/
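A rough sketch of that record-and-push idea, using MediaRecorder to slice the stream into WebM blobs and a WebSocket to ship them. The endpoint URL and function names are placeholders:

```javascript
// Record a MediaStream and push WebM chunks to the server as they arrive.
function recordToServer(stream, wsUrl) {
  const ws = new WebSocket(wsUrl);
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
  };
  recorder.start(1000); // emit a blob roughly every second
  return recorder;
}

// The received chunks can then be concatenated in order into one WebM file;
// in the browser the same idea yields a single Blob:
function mergeChunks(chunks) {
  return new Blob(chunks, { type: "video/webm" });
}
```

On the server you would append the chunks to a file in arrival order; they only form a valid WebM file when kept in sequence from one recorder session.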
I am trying to set up a live internet radio station using an Icecast server and want my stream to work in all modern browsers. My client only produces an .ogg stream, and not all browsers play .ogg. For example, the .ogg stream I have set up works in Chrome but doesn't work in IE. How can I make it run in all browsers?
Is there a way I can convert the .ogg stream to .mp3 or any other format on the fly?
Or can I embed an audio player in the browser that can play the .ogg stream?
Any other advice would be helpful.
Regards,
Hitesh Bhutani
You have several options:
Change the encoding format from OGG to MP3 in your Virtual DJ software. Keep in mind that Firefox will not be able to play MP3 streams on some platforms using the HTML5 audio tag due to licensing limitations.
Install some kind of transcoding software on your server (where you have Icecast installed and running), for example Liquidsoap (https://www.liquidsoap.info/). Liquidsoap can (among other things) take your stream as an input and transcode it to several formats, for example MP3, AAC, and OGG. Your Icecast server will then have several mount points available, for example http://yourserver.com:8000/stream.mp3, http://yourserver.com:8000/stream.ogg, http://yourserver.com:8000/stream.aac, and you can write a small piece of JavaScript that detects the browser and chooses a suitable stream.
Use an HTML5 media player like jPlayer (http://jplayer.org/) or SoundManager 2 (http://www.schillmania.com/projects/soundmanager2/). These players can automatically detect the browser and select a suitable stream type; if they can't play the stream using the HTML5 <audio> tag, they will fall back to an internal Flash-based player.
The most advanced way is to combine methods (2) and (3); that will give you the widest browser support.
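The "small JavaScript that picks a suitable stream" from option (2) could be sketched like this, using HTMLMediaElement.canPlayType to probe format support rather than sniffing browser versions. The mount URLs are the example ones from above, and pickStream is a made-up helper name:

```javascript
// Candidate mount points, most preferred first.
const MOUNTS = [
  { type: "audio/mpeg", url: "http://yourserver.com:8000/stream.mp3" },
  { type: "audio/aac",  url: "http://yourserver.com:8000/stream.aac" },
  { type: "audio/ogg",  url: "http://yourserver.com:8000/stream.ogg" },
];

// canPlay: (mimeType) => "" | "maybe" | "probably",
// the same contract as HTMLMediaElement.canPlayType.
function pickStream(mounts, canPlay) {
  const playable = mounts.filter((m) => canPlay(m.type) !== "");
  // Prefer a "probably" answer over a "maybe".
  playable.sort(
    (a, b) =>
      (canPlay(b.type) === "probably") - (canPlay(a.type) === "probably")
  );
  return playable.length ? playable[0].url : null;
}

// In the browser:
// const audio = new Audio();
// audio.src = pickStream(MOUNTS, (t) => audio.canPlayType(t));
// audio.play();
```

Keeping the decision in a pure function (the `canPlay` callback is injected) also makes it easy to test outside a browser.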
Supported audio coding formats
I'm struggling trying to figure out how to implement live streaming of an audio file through Azure Media Services. What I'm trying to do is have a single/static audio file that live streams and repeats when it reaches the end of the file.
The thought is to have a radio station type experience that when the user starts listening to the audio it begins playing at where the file is currently at in the live stream.
I have very limited knowledge with codecs, streaming types, and encoding. That said, I was able to upload my mp3 file to Azure Media Services, encode it using "AAC Good Quality Audio" and am able to play the audio clip. However, I want to enable streaming to ensure the experience I described above.
The last piece of this will be enabled through a responsive website, so I would like to enable the stream using HTML5 so it's playable on all devices that support it (desktop, mobile, tablet, etc.). Is there an HTML5/JavaScript player that is able to do this? Flash/Silverlight is not an option since it won't render on mobile or tablets.
If I can provide any further information please let me know. Most/all of the articles I see about live streaming is about video and I'm struggling to find how to do this with audio. Any help would be greatly appreciated.
Thanks!
There is no such thing as live streaming a single audio file. Live streaming implies a live event: something that is happening and is being streamed while it happens. It doesn't matter whether it is audio, audio + video, or video only.
Using only Azure Media Services you cannot achieve this goal. You need a process that plays the media on repeat and streams it to a live streaming channel of Azure Media Services.
But that would be a rather expensive exercise! For your needs, a more cost-effective way would be to use a streaming server on a Linux VM, such as http://icecast.org/
You can also send one file, and Azure can transcode it to various formats for you. I have a full list of tutorials I wrote on Azure Media Services:
Intro to HTML5 video
Intro to Azure Media Services, AES, and PlayReady DRM
Live streaming HTML5 video using Azure Media Services
Using Azure Blob Storage to store & serve your audio and video files
Use this Azure Media Player for streaming Media Services video to all devices
Uploading video to Azure Media Services
In terms of the encoding, Azure can do that for you. Give it one file type, and it can create multiple copies of various formats for you. I have a short post and a 10 minute video on how to do that.
There is also an Azure Media Player, and the selling point behind this is that it adapts the video stream based on the device it detects the player is running on, which is nice for pairing with the format changes listed above. So it saves you the trouble of having to write the fallback conditions yourself. (Ex: Running on an iOS device, so use HLS).
You can use any of this for audio as well. Your best bet is to set the loop attribute on the player, as both the HTML5 audio and video elements support it.
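For illustration, a minimal looping HTML5 audio element might look like the snippet below. The source URLs are placeholders for your own streaming endpoints, and note that `loop` only applies to a finite file, not to a genuinely live stream:

```html
<!-- Looping HTML5 audio; the src URLs are placeholders. -->
<audio controls loop>
  <source src="https://example.streaming.mediaservices.windows.net/audio.m3u8"
          type="application/vnd.apple.mpegurl">
  <source src="https://example.com/stream.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
</audio>
```

Listing multiple `<source>` elements lets the browser pick the first format it can play, which pairs well with the multi-format encoding described above.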
Let me know if you need more info.
I want to record videos through my webcam and upload them to the server. I don't want to use any plugin. How can I at least just record the videos?
There's a project on GitHub called RecordRTC. It also provides a live demo here. This tool runs in browsers supporting WebRTC and getUserMedia, and it can record both audio and video.
However, according to data from Can I Use, WebRTC and getUserMedia are currently only supported by Firefox, Chrome, and other Blink-based browsers.
Use the navigator.getUserMedia function (since superseded by navigator.mediaDevices.getUserMedia).
Check out these URLs:
Capturing Audio & Video
Have a look here
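As a sketch of the modern form of that API, the following detects support and attaches the webcam stream to a video element. The element id "preview" is an assumption for this example:

```javascript
// Pure helper: does this navigator-like object expose the modern API?
function hasGetUserMedia(nav) {
  return !!(nav && nav.mediaDevices && nav.mediaDevices.getUserMedia);
}

// Attach the webcam (and mic) stream to <video id="preview" autoplay>.
async function showWebcam() {
  if (!hasGetUserMedia(navigator)) {
    throw new Error("getUserMedia not supported in this browser");
  }
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  document.getElementById("preview").srcObject = stream;
  return stream;
}
```

The returned stream is the same object you would hand to MediaRecorder (as RecordRTC does internally) to actually record the video.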
That being said, this question has been asked several times before:
How to record webcam and audio using webRTC and a server-based Peer connection
Access webcam without Flash
HTML 5 streaming webcam video?