I'm trying to capture audio output from the browser and save a recorded file in JavaScript (without using 3rd party apps or browser extensions).
After reviewing the examples at WebRTC samples, this task seems relatively straightforward when capturing audio from a user's microphone via the MediaStream returned by getUserMedia().
Is there a way to capture a MediaStream that is just the browser's audio output? Or is there some better way to access the browser's audio output in a way that can be recorded to a file?
For context, my audio output in the browser may originate from one of several audio libraries (Tone.js, for example) so I'd rather not rely on generating the audio file from the JS library that is generating the audio. I've looked into writing a file from the AudioContext, but I am trying to find some solution that would be audio-source-agnostic.
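For reference, the AudioContext route I've looked at would be roughly the following (a minimal sketch; the commented Tone.js line is an assumption about how a library might expose its output, and the destination node must live on the same AudioContext the library uses):

    // Minimal sketch: record whatever is routed into an AudioContext
    // by connecting it to a MediaStream destination node.
    const ctx = new AudioContext();
    const dest = ctx.createMediaStreamDestination();

    // Hypothetical routing; the exact call depends on the library, e.g.:
    // Tone.getDestination().connect(dest);

    const chunks = [];
    const recorder = new MediaRecorder(dest.stream);
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => {
      // Browsers typically emit WebM/Opus here, not WAV or MP3.
      const blob = new Blob(chunks, { type: 'audio/webm' });
      const url = URL.createObjectURL(blob); // e.g. point a download link here
    };

    recorder.start();
    // ... later, when playback is finished:
    // recorder.stop();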
Related
I have a program that plays songs from the server. To make it more efficient, I split the audio file on the server into segments and send them to the client via Ajax as base64-encoded data. The HTML5 native audio player plays each base64 audio segment, but when switching to the next segment it pauses briefly before continuing. The retrieved segments are stored in IndexedDB for quick access, but playback still pauses. How can I make the program more efficient and fix the pause that happens when switching segments?
Is there any other way of appending an audio file to a currently playing audio source without any pause, using JavaScript?
The Media Source Extensions API can do that, but note that you are just reinventing Range requests and caching, which browsers already use when fetching media, and they do it better since they don't add the overhead of base64 on top.
So the "other way" is to configure your server to accept Range requests, to serve your file as plainly as possible as a single file, and to let the browser do its job.
I"m trying to use the chrome API https://developers.chrome.com/extensions/tabCapture.
How do I get a audio file out of it. For example, if I'm watching youtube and want to export whatever song I'm doing into an audio file MP3, wav, etc. I know there are some chrome extensions out there that does this but I want to know if there is an API.
Thanks!
Use the desktopCapture API with DesktopCaptureSourceType "audio".
Saving the stream may be more challenging. You could create a Web Audio API AudioContext and call decodeAudioData() to get raw PCM data, which you could then write out as a .wav file. For MP3 encoding, you might be able to find some kind of Emscripten module.
Alternatively, it looks like there is a MediaRecorder API (demo). I'm not sure how stable it is across browsers yet, but I just tried it in Chrome and it seemed to work.
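A rough sketch of the recording part, assuming an extension with the "tabCapture" and "downloads" permissions (the desktopCapture flow also ends in a MediaStream you could feed to the same code; the filename is just an example):

    // Must be triggered by a user gesture, e.g. clicking the browser action.
    chrome.tabCapture.capture({ audio: true, video: false }, (stream) => {
      const chunks = [];
      const recorder = new MediaRecorder(stream);
      recorder.ondataavailable = (e) => chunks.push(e.data);
      recorder.onstop = () => {
        // You get WebM/Opus out of the box; WAV/MP3 would need re-encoding.
        const blob = new Blob(chunks, { type: 'audio/webm' });
        const url = URL.createObjectURL(blob);
        chrome.downloads.download({ url, filename: 'capture.webm' });
      };
      recorder.start();
      setTimeout(() => recorder.stop(), 10000); // record 10 seconds
    });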
I'm reading a book about Web Audio API.
The book states that to load and play a sound using the Web Audio API, four steps need to be taken (sketched in code after the list):
1.) Load the sound file with XHR and decode it. (Will end up with a 'buffer')
2.) Connect the buffer to audio effects nodes.
3.) To hear the sound, connect the last node in the effects chain to the destination.
4.) Start the sound.
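In code, those four steps come out roughly like this (a minimal sketch using fetch rather than the book's raw XHR; 'sound.mp3' is a placeholder URL):

    async function playSound(url) {
      const ctx = new AudioContext();

      // 1) Load the sound file and decode it (ends up with a 'buffer').
      const response = await fetch(url);
      const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

      // 2) Connect the buffer to audio effects nodes (one gain node here).
      const source = ctx.createBufferSource();
      source.buffer = buffer;
      const gain = ctx.createGain();
      source.connect(gain);

      // 3) Connect the last node in the effects chain to the destination.
      gain.connect(ctx.destination);

      // 4) Start the sound.
      source.start(0);
    }

    playSound('sound.mp3');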
My question is: given these four steps, is there a way for a user of a website that uses the Web Audio API to download the audio played on it?
If so, how does one prevent this?
Or does the audio being 'buffered' prevent it from being downloaded?
I would like to find a way to protect the audio files I use inside my game/app on the webpage, which are played with the Web Audio API.
Thank you.
EASILY save it, no. But 1) if it's being transferred as an MP3 or similar file, the user can go into their network cache and copy it; there's no inherent DRM or anything. 2) Even if the sound were generated completely from scratch (e.g. mathematically), the user could use a virtual audio device like Soundflower to save the output.
So no, it's not really possible to prevent the user from saving audio files.
Basically trying to play some live audio streams in an app I'm porting to the browser.
Stream example: http://kzzp-fm.akacast.akamaistream.net/7/877/19757/v1/auth.akacast.akamaistream.net/kzzp-fm/
I have tried the HTML5 audio tag and jPlayer with no luck. I know next to nothing about streaming audio; however, when I examine the HTTP response header, the specified content type is "audio/aacp" (not sure if that helps).
I'm hoping someone with more knowledge of audio formats could point me in the right direction here.
The problem isn't with AAC+ being playable; the issue is with decoding the streaming AAC wrapper called ADTS. The Audio Data Transport Stream [pdf], or "MP4-contained AAC streamed over HTTP using the SHOUTcast protocol", can be decoded, and therefore played, by only a couple of media players (e.g., foobar2000, Winamp, and VLC).
I had the same issue while trying to work with the SHOUTcast API to get HTML5 audio playback for all the listed stations. Unfortunately, it doesn't look like there's anything that can be done from our side; only the browser vendors can decide to add support for ADTS decoding. It is a documented issue in Chrome/WebKit. There are 60+ people (including myself) following the issue, which is marked as "WontFix".
I am going to develop a chat-based application for mobile that allows video chat. I am using HTML5, JavaScript and PhoneGap. Using PhoneGap, I am able to access the mobile camera, capture a video, save it, and upload it to a server. I have done this for Android, but I need live broadcasting of the video. Is there any solution for that?
Note: it is not an Android native app.
You didn't specify what facility you're currently using for the video capture. AFAIK, the current WebView doesn't yet support WebRTC, the W3C standard that will eventually let you access video frames in your HTML5 code. So I'm assuming you're using PhoneGap's navigator.device.capture.captureVideo facility.
On Android, captureVideo creates 3gp files. The problem with 3gp is that they cannot be streamed or played while capturing: the MOOV atom of the file is required for parsing the video frames in it, and it is written only after all frames in the file have been encoded. So you must stop the recording before you can make any use of the file.
Your best shot in HTML5 is to implement a loop that captures a short clip (3-5 seconds?) of video, then sends it to a server while the next chunk is being captured. The server will need to concatenate the clips to a single file that can be broadcast with a streaming server. This will add several seconds to the latency of the broadcast, and you are quite likely to suffer from lost frames at the point in the gap between two separate chunk captures. That might be sufficient for some use cases (security cameras, for example).
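A rough sketch of that loop, assuming the PhoneGap media-capture and file-transfer plugins are installed (the upload endpoint is hypothetical, and the duration option is not honored on every platform):

    // Rough sketch: capture short clips in a loop and upload each one
    // while the next is being recorded. Expect the native capture UI to
    // reopen for each chunk on most platforms.
    function captureChunk() {
      navigator.device.capture.captureVideo(
        (mediaFiles) => {
          uploadChunk(mediaFiles[0].fullPath);
          captureChunk(); // immediately start recording the next chunk
        },
        (err) => console.error('capture failed', err),
        { limit: 1, duration: 5 }
      );
    }

    function uploadChunk(filePath) {
      const ft = new FileTransfer();
      ft.upload(
        filePath,
        encodeURI('https://example.com/upload'), // hypothetical server
        () => console.log('chunk uploaded'),
        (err) => console.error('upload failed', err),
        { fileKey: 'video', mimeType: 'video/3gpp' }
      );
    }

    captureChunk();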
If your application is such that you cannot afford to lose frames, I see no other option but to implement the video capture and streaming in Java, as a PhoneGap Plugin.
See Spydroid http://code.google.com/p/spydroid-ipcamera/
It uses the solution with the special FileDescriptor you found. Basically, they let the video encoder write an MP4 with H.264 to the special file descriptor that calls your code on write. Then they strip off the MP4 header and turn the H.264 NALUs into RTP packets.