I am new to WebRTC. As I understand it, WebRTC is used for real-time communication. In the spec it seems that a stream can only be created from device output (using getUserMedia for the microphone or camera, or the Chrome tab capture API). But in my application I am receiving real-time Uint8Array video data (e.g. H264). Can I convert this Uint8Array data to a MediaStream?
I assume you don't use getUserMedia, but some arbitrary source.
Getting this video "buffer" displayed is tricky and not possible in every browser (only Chrome, and soon Firefox). You don't need WebRTC for that, but something called the Media Source API, AKA MSE (E for Extensions).
The API is rather picky about the byte streams it accepts and will not take just any "video data". For H264, it will only accept fragmented MP4; more info about that here.
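A minimal sketch of how that could look, assuming your Uint8Array chunks are already fragmented MP4 (the codec string below is an assumption and must match your actual stream):

// Feed fragmented-MP4 H.264 chunks (Uint8Array) to a <video> via MSE.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const queue = [];

  // Call this with each incoming chunk (init segment first, then media segments).
  window.appendChunk = (chunk) => {
    if (sourceBuffer.updating || queue.length) {
      queue.push(chunk);
    } else {
      sourceBuffer.appendBuffer(chunk);
    }
  };

  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length) sourceBuffer.appendBuffer(queue.shift());
  });
});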
Related
I am currently experimenting with the getDisplayMedia browser api: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
I am successfully able to create a MediaStream that captures an application's video, but it fails at capturing the audio (say, for example, I'm capturing a running VLC video: there will never be an audio track in the MediaStream).
I have tested this with both Chrome (83+) and Firefox on Linux Mint and Windows, and it seems to fail every time.
Also, when I try to record audio only in Chrome (Firefox not tested), it throws
TypeError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Audio only requests are not supported
I have read multiple dev threads, and it seems that it is not mandatory for the created MediaStream to contain audio, even when it is requested in the options.
So my questions are:
- Is the audio option just decoration? Because it seems pretty useless.
- Is there another way to capture audio, for example with the Web Audio API? I couldn't find any resources on that. Keep in mind that I want to capture system or window audio, not the microphone.
Thanks for your answers
EDIT: Alright, I found that there's one way to capture audio: it has to be in Chrome, on Windows, capturing a whole screen (not an application window). According to the docs, it should work with Edge too. I wonder if there are other means to capture the audio without some loopback device.
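For reference, this is roughly the call I'm making (a sketch); whether an audio track comes back depends on the browser/OS/source combination described above:

navigator.mediaDevices.getDisplayMedia({ video: true, audio: true }).then(stream => {
  // Browsers are allowed to return a video-only stream even when audio is requested.
  if (stream.getAudioTracks().length === 0) {
    console.warn('No audio track was granted for this capture.');
  }
  document.querySelector('video').srcObject = stream;
});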
I'm transferring a live audio stream between 2 Electron window processes using WebRTC. There are no ICE or STUN servers, or anything like that; the connection is established manually through Electron IPC communication (based on this code).
Note: from a technical point of view regarding the audio streams themselves, this is very similar (if not identical) to streaming between 2 browser tabs on the same domain, so this is primarily not a question about Electron itself, although Electron IPC would obviously be substituted with a browser equivalent.
The audio stream works, I can transmit audio from one window to another in real-time, as it is generated. That is, I can generate audio (Web Audio API) in window "A" and listen to it through an <audio> element in window "B", or do processing on it using a separate AudioContext in window "B" (although there is some latency).
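For reference, the window "A" side is wired roughly like this (simplified sketch; the signalling over Electron IPC is omitted, and buildMyAudioGraph stands in for whatever generates the audio):

// Window "A": route generated Web Audio into a MediaStream and hand it to WebRTC.
const ctx = new AudioContext();
const source = buildMyAudioGraph(ctx);                  // placeholder for the audio-generating graph
const destination = ctx.createMediaStreamDestination();
source.connect(destination);

const pc = new RTCPeerConnection();                     // offer/answer exchanged over Electron IPC
destination.stream.getAudioTracks().forEach(track => pc.addTrack(track, destination.stream));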
However, the audio data is vastly altered during streaming: it becomes mono, its quality drops, and there is significant latency. After fiddling around I've learned that WebRTC does pretty much everything I don't need, including encoding the audio stream with an audio codec, encrypting the transfer, running echo cancellation, and so on.
I don't need these. I need to simply transfer raw audio data through local WebRTC without altering the audio in any way. It needs to be float32 accurate to the sample.
How can I do this with WebRTC?
Why use WebRTC then?
I need to do custom audio processing inside the Web Audio API.
The only way to do this is with a ScriptProcessorNode, which is unusable in production code when there's essentially anything else on the page, because it is broken by design (it processes audio on the UI thread and causes audio glitching from even slight UI interactions).
So basically, because of this (and to the best of my knowledge), my only option is to transfer audio with WebRTC streams to another window process, perform ScriptProcessorNode processing there (nothing more is happening in that window, empty DOM, so the processing is always nice and smooth), then send the results back.
This works, but the audio is altered during streaming, which I want to avoid (see above).
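For completeness, the window "B" side looks roughly like this (sketch; processBlock stands in for the custom processing):

// Window "B": feed the remote WebRTC track into a ScriptProcessorNode.
pc.ontrack = (event) => {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(new MediaStream([event.track]));
  const processor = ctx.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = (e) => {
    const input = e.inputBuffer.getChannelData(0);            // Float32Array of samples
    processBlock(input, e.outputBuffer.getChannelData(0));    // placeholder for the custom processing
  };

  source.connect(processor);
  processor.connect(ctx.destination);
};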
Why not use AudioWorklet?
Because Electron is 5 versions behind Chrome unfortunately (version 59 at the moment), and simply does not ship AudioWorklet yet.
I need to send a live stream from PC to PC, both of them using just the web browser (IE, Firefox or Chrome). Does a JavaScript library exist that could help me push the stream from the sender to a media server (ffmpeg/ffserver, Wowza, etc.)?
I guess you want to stream a video signal from the webcam. Then the way to go is to use WebRTC, but it is still very new (Wowza just started to support it) and it is only supported in some modern browsers, so you will encounter many issues.
Most of the existing solutions still use Flash to capture from the webcam and encode to RTMP.
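If you do try the WebRTC route, the capture side in the browser is just getUserMedia; how the resulting stream then reaches the media server depends on the server's WebRTC support. A minimal capture sketch:

// Capture the webcam; the resulting MediaStream is what you would hand to the
// server-specific WebRTC publishing code.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => {
    document.querySelector('video').srcObject = stream;   // local preview
  })
  .catch(err => console.error('getUserMedia failed:', err));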
I need to obtain frequency/pitch data from the microphone of an android device on the fly using JavaScript.
I have done this for desktop/laptop browsers with getUserMedia and Web Audio API, but these are not supported on the vast majority of Android devices.
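For reference, the desktop approach I use looks roughly like this (sketch):

// Desktop: microphone -> Web Audio AnalyserNode -> frequency-domain data.
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  (function poll() {
    analyser.getByteFrequencyData(bins);   // per-bin magnitudes; bin width = sampleRate / fftSize
    requestAnimationFrame(poll);
  })();
});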
I have tried using cordova-plugin-media-capture, however this opens an audio recorder which the user can then save or discard, and after saving you can use cordova-plugin-file to obtain the data, as shown here: https://stackoverflow.com/a/32097634/5674976. But I need it not to open the audio recorder, and instead perhaps just show a record button, and once it is recording, provide the audio data immediately (so that it can detect the frequency data in real time).
I have seen recording functionality in place in e.g. WhatsApp, Facebook Messenger, etc., and so, as a last resort (since I do not know Java), would it be possible to create a Cordova plugin using Java?
Edit: I have also looked at cordova-plugin-media https://github.com/apache/cordova-plugin-media, which seems to provide amplitude data and current position data. I'm thinking I could figure out the frequency by looking at the amplitude over time, or am I being naive?
I managed to record audio and also analyze the frequency on Android without either getUserMedia or the Web Audio API.
Firstly I installed the cordova-plugin-audioinput plugin, which outputs a stream of audio samples (from the microphone), with custom configurations such as buffer size and sample rate. You can then use this data to detect specific frequencies.
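A rough sketch of how I use it (option and event names follow the plugin's README; detectPitch stands in for your own pitch-detection routine):

// cordova-plugin-audioinput: raw microphone samples arrive as 'audioinput' events.
window.addEventListener('audioinput', function (evt) {
  const samples = evt.data;             // array of PCM samples for this buffer
  const pitch = detectPitch(samples);   // placeholder for an autocorrelation/FFT routine
  console.log('Detected pitch (Hz):', pitch);
}, false);

audioinput.start({
  sampleRate: 44100,
  bufferSize: 16384
});
// ...later: audioinput.stop();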
I'm using the SoundCloud public API for playing audio in a browser from the SC servers with the JavaScript SDK 3.0.0. After initialization, I managed to get a JSON with a specific track's stream URLs with the SC.Stream method.
{
  "http_mp3_128_url": "https://cf-media.sndcdn.com/a6QC6Zg3YpKz.128.mp3...",
  "hls_mp3_128_url": "htt...//ec-hls-media.soundcloud.com/playlist/a6QC6Zg3YpKz.128.mp3/...",
  "rtmp_mp3_128_url": "rtmp://ec-rtmp-media.soundcloud.com/mp3:a6QC6Zg3YpKz.128?...",
  "preview_mp3_128_url": "htt....../ec-preview-media.sndcdn.com/preview/0/90/a6QC6Zg3YpKz.128.mp3?..."
}
In it, there is an HTTP, an HLS and an RTMP URL. I can handle the HTTP one, but I can't get the RTMP one working. Does anyone know how it is decided which stream will be played? And how can I manipulate this? Or how can I access the RTMP stream?
A few weeks ago I checked with Wireshark that SoundCloud delivered via RTMP, but now I can't seem to capture any RTMP streams, and I don't know how to search for one.
Usually an RTMP stream is served from Flash Media Server, Wowza Media Server or Red5.
You can play that type of stream using a Flash object in your web page.
Or, for an application, you can play it with ffplay and convert it to another type of stream with ffmpeg.
I've been working on the same thing. It plays using the HTTP protocol in dev mode and then reverts to attempting the RTMP protocol in normal browsing mode (at least in Chrome, anyway). Here's how I solved the issue.
When you use the SC.stream request, it will return the object to play. You can edit this object before it gets sent to the player.
For example:
SC.stream('/tracks/' + playr.currentTrack.id).then(function (x) {
  x.options.protocols = ["http"];   // force the HTTP protocol instead of RTMP
  x.play();
});
Setting the protocols parameter as above forces it to use the correct protocol. If you console.log the object first, by trying to play the track in non-dev mode, you'll see it also contains the ["rtmp"] protocol and then fails to play in Chrome.