I have seen a lot of questions and articles about capturing a user's microphone input, but what I want is actually the opposite.
Is it possible to send a sound through the microphone as if the user had spoken it themselves? It would be something like Soundpad, but using JS.
Here's an idea:
When the user wants a MediaStream with the microphone audio, they make a call to navigator.getUserMedia({video: false, audio: true});. We can redefine navigator.getUserMedia as our own function (keeping the original in a separate global variable so we can still get the real microphone data) that returns a MediaStream playing audio from a file. We can even return a combined MediaStream that mixes audio from the microphone and a file, using the Web Audio API to do the combining.
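Here's a minimal sketch of that idea, assuming the modern promise-based navigator.mediaDevices.getUserMedia and a hypothetical same-origin file sound.mp3 (untested; real call sites may also use the legacy callback API):

// Keep the original so we can still reach the real microphone.
const realGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

navigator.mediaDevices.getUserMedia = async function (constraints) {
  if (!constraints || !constraints.audio) return realGetUserMedia(constraints);

  const ctx = new AudioContext();
  const micStream = await realGetUserMedia({ audio: true });

  // Load and decode the file ('sound.mp3' is a placeholder).
  const data = await (await fetch('sound.mp3')).arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);
  const fileSource = ctx.createBufferSource();
  fileSource.buffer = buffer;
  fileSource.loop = true;

  // Mix microphone + file into one stream that looks like a plain
  // microphone stream to the caller. (Video constraints are ignored here.)
  const dest = ctx.createMediaStreamDestination();
  ctx.createMediaStreamSource(micStream).connect(dest);
  fileSource.connect(dest);
  fileSource.start();

  return dest.stream;
};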
I've been trying to do this with video so I can replace my video in Google Meet, but Google Meet seems to automatically do things to the MediaStream (like muting and pausing) that I haven't handled, so that project doesn't work yet. Google Meet is very secure, so that might be the problem, but I think this trick might work for you!
Related
I set up a pretty simple audio call test utilizing WebRTC, based off another one of my projects, a video chat (also using WebRTC). I thought it would be easy, but once I got it set up, the audio isn't played back to the user. Both peers receive the respective offer/answer SDP WebSocket events, and the SDP is present, but I cannot hear my voice echo back at me when I talk or make any noise. There is nothing in my console (I catch all errors, too).
Is there a cause for this?
I based my code off of Amir Sanni's video chat located here. I basically just requested audio instead of video in getUserMedia, and deleted the lines where it added a video element.
You will need to play the audio back somehow. I would recommend using audio tags instead of video ones and hiding them with display: none.
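For example, a hedged sketch of wiring the remote stream into a hidden audio element (assuming peerConnection is your RTCPeerConnection):

peerConnection.ontrack = (event) => {
  const audio = document.createElement('audio');
  audio.srcObject = event.streams[0]; // the remote MediaStream
  audio.autoplay = true;
  audio.style.display = 'none'; // hidden, but still audible
  document.body.appendChild(audio);
};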
I'm reading a book about the Web Audio API.
In the book it states that to load and play a sound using the Web Audio API, there are 4 steps that need to be taken (roughly sketched in code after the list):
1.) Load the sound file with XHR and decode it. (Will end up with a 'buffer')
2.) Connect the buffer to audio effects nodes.
3.) To hear the sound, connect the last node in the effects chain to the destination.
4.) Start the sound.
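A minimal sketch of those four steps, assuming the XHR approach the book describes ('sound.mp3' is a placeholder URL):

const ctx = new AudioContext();
const request = new XMLHttpRequest();
request.open('GET', 'sound.mp3', true);
request.responseType = 'arraybuffer';
request.onload = () => {
  // Step 1: decode the file into a buffer.
  ctx.decodeAudioData(request.response, (buffer) => {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    // Step 2: connect the buffer to an effects node (a gain node here).
    const gain = ctx.createGain();
    source.connect(gain);
    // Step 3: connect the last node in the chain to the destination.
    gain.connect(ctx.destination);
    // Step 4: start the sound.
    source.start(0);
  });
};
request.send();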
My question is: given these 4 steps, is there a way for a user of a website that uses the Web Audio API to download the audio played on the site?
If so, how does one prevent this?
Or does its being 'buffered' prevent it from being illegally downloaded?
I would like to find a way to protect the audio files I use inside my game/app that I put up on the webpage, which are played with the Web Audio API.
Thank you.
EASILY save it? No. But: 1) if it's being transferred as an MP3 or similar file, the user can go into their network cache and copy it; there's no inherent DRM or anything. 2) Even if the sound were being generated completely from scratch (e.g. mathematically), the user could use a virtual audio device like Soundflower to capture the output.
So no, it's not really possible to prevent the user from saving audio files.
I have been searching the Chrome extension reference for anything that would allow me to manipulate the audio level of a tab. The only option that has come to mind is to have a script go through all the elements on the page and either remove them or mute them if possible.
But I feel there has to be a way to reroute all audio streams to nothing, i.e. disconnect them from the speaker output, at least when the HTML5 audio API is used... however, no luck with either the Chrome extension APIs or the Web Audio API.
Goal: mute all sounds on the page (Flash, audio elements, etc.).
You cannot do this now, although this will hopefully change in the near-term future.
At the moment, there is nothing in the Chrome APIs, although I did propose a tabaudio API back in February (and am working on a new draft -- as well as an implementation -- right now.)
Can you give me an idea as to what you want this functionality for? (They ask for potential uses when proposing APIs.)
Perhaps the closest you can get is something similar to what the MuteTab Chrome extension does (written by me, http://www.github.com/jaredsohn/mutetab), which basically scans the page for object, embed, audio, video, and applet tags and hides them from the page (sketched below). Unfortunately, this misses Web Audio. Also, instead of muting, it "stops" the sound by removing the element from the page, which can break the video or game associated with it. Alternatively, if you just care about HTML5 video or audio, or Flash that has an API (such as YouTube), you could use JavaScript to pause or mute things.
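A rough sketch of that scanning approach as a content script (hedged; this is the general idea, not MuteTab's actual source):

// Pause and mute HTML5 media elements.
document.querySelectorAll('audio, video').forEach((el) => {
  el.muted = true;
  el.pause();
});

// Plugins have no generic mute API, so removal is the blunt fallback.
document.querySelectorAll('object, embed, applet').forEach((el) => {
  el.remove();
});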
There's now a Chrome extension called "Mute Tabs by URL" that allows muting websites by URL using a blacklist/whitelist approach.
It does require you to allow it to read your 'browsing history', but the description swears that it doesn't store your URLs anywhere, and even points to the location of the source code, so you can verify that for yourself.
I'm currently experiencing a problem where a client who has audio but no video can't receive the remote client's video (even though the remote client is capturing both audio and video).
Video and audio constraints are set to true on both clients. The application runs correctly if both clients have audio and video.
Does anyone know a solution for this?
Simply make sure that the client who has audio/video creates the offer, and the other client creates the answer. Then it will be one-way streaming, and it will work!
userWhoHasMedia.createOffer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
userWhoDontHaveMedia.createAnswer(sdp_success_callback, sdp_failure_callback, sdp_constraints);
Also, if you want, you can set "OfferToReceiveAudio" and "OfferToReceiveVideo" to false for the client that doesn't capture media. Though it is useless in your case, because the non-media client is the receiver.
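For reference, a hedged sketch of what that legacy sdp_constraints object looks like (this is the old callback-based WebRTC API; modern browsers instead use promise-based createOffer with offerToReceiveAudio/offerToReceiveVideo options):

// These flags control whether this peer's SDP asks the remote
// side to send audio/video back to it.
var sdp_constraints = {
    mandatory: {
        OfferToReceiveAudio: true,
        OfferToReceiveVideo: true
    }
};
userWhoDontHaveMedia.createAnswer(sdp_success_callback, sdp_failure_callback, sdp_constraints);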
Basically, I'm trying to play some live audio streams in an app I'm porting to the browser.
Stream example: http://kzzp-fm.akacast.akamaistream.net/7/877/19757/v1/auth.akacast.akamaistream.net/kzzp-fm/
I have tried the HTML5 audio tag and jPlayer with no luck. I know next to nothing about streaming audio; however, when I examine the HTTP response header, the specified content type is "audio/aacp" (not sure if that helps).
I'm hoping someone with more knowledge of audio formats could point me in the right direction here.
The problem isn't with AAC+ being playable; the issue is with decoding the streaming AAC wrapper called ADTS. The Audio Data Transport Stream [pdf], or "MP4-contained AAC streamed over HTTP using the SHOUTcast protocol", can be decoded, and therefore played, by only a couple of media players (e.g., foobar2000, Winamp, and VLC).
I had the same issue while trying to work with the SHOUTcast API to get HTML5 audio playback for all the listed stations. Unfortunately, it doesn't look like there's anything that can be done from our side; only the browser vendors can decide to add support for ADTS decoding. It is a documented issue in Chrome/WebKit. There are 60+ people (including myself) following the issue, which is marked as "WontFix".