I have so far managed to run the following sample:
WebRTC native C++ to browser video streaming example
The sample shows how to stream video from a native C++ application (peerconnection_client.exe) to the browser (I am using Chrome). This works fine and I can see myself in the browser.
What I would like to do is to stream audio from the browser to the native application but I am not sure how. Can anyone give me some pointers please?
I'm trying to find a way to stream both video and audio from the browser to my native program, and here is my approach so far.
To stream video from the browser to your native program without a GUI, just follow the example here: https://chromium.googlesource.com/external/webrtc/+/refs/heads/master/examples/peerconnection/client/
Use AddOrUpdateSink to add your own VideoSinkInterface and you will receive the frame data in the callback void OnFrame(const cricket::VideoFrame& frame). Instead of rendering the frame to the GUI as the example does, you can do whatever you want with it.
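For reference, a receive-side sink might look roughly like this. It is only a sketch: FrameConsumer and ProcessYuv are made-up names, include paths and the exact frame type vary between WebRTC revisions, and it assumes the newer rtc::VideoSinkInterface<webrtc::VideoFrame> flavour of the interface rather than the older cricket::VideoFrame one.

// Sketch of a sink that consumes decoded frames instead of rendering them.
#include "api/video/video_frame.h"
#include "api/video/video_sink_interface.h"

class FrameConsumer : public rtc::VideoSinkInterface<webrtc::VideoFrame> {
 public:
  void OnFrame(const webrtc::VideoFrame& frame) override {
    // Get the pixels as I420 and hand them to your own pipeline.
    rtc::scoped_refptr<webrtc::I420BufferInterface> buffer =
        frame.video_frame_buffer()->ToI420();
    ProcessYuv(buffer->DataY(), buffer->DataU(), buffer->DataV(),
               buffer->width(), buffer->height());
  }

 private:
  void ProcessYuv(const uint8_t* y, const uint8_t* u, const uint8_t* v,
                  int width, int height) {
    // Encode, write to disk, feed a vision pipeline, ...
  }
};

// Attach it to the remote video track, e.g. from your PeerConnection observer:
// remote_video_track->AddOrUpdateSink(&frame_consumer, rtc::VideoSinkWants());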
To stream audio from the browser to your native program without a real audio device, you can use a fake audio device:
Set the variable rtc_use_dummy_audio_file_devices to true in the file https://chromium.googlesource.com/external/webrtc/+/master/webrtc/build/webrtc.gni
Invoke the global static function webrtc::FileAudioDeviceFactory::SetFilenamesToUse("", "file_to_save_audio"); to specify the output filename.
Patch file_audio_device.cc with the code below (as I write this answer, FileAudioDevice has some issues, which may already be fixed).
Recompile your program, touch file_to_save_audio, and you will see PCM data in file_to_save_audio after the WebRTC connection is established.
patch:
diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.cc b/webrtc/modules/audio_device/dummy/file_audio_device.cc
index 8b3fa5e..2717cda 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.cc
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.cc
@@ -35,6 +35,7 @@ FileAudioDevice::FileAudioDevice(const int32_t id,
_recordingBufferSizeIn10MS(0),
_recordingFramesIn10MS(0),
_playoutFramesIn10MS(0),
+ _initialized(false),
_playing(false),
_recording(false),
_lastCallPlayoutMillis(0),
@@ -135,12 +136,13 @@ int32_t FileAudioDevice::InitPlayout() {
// Update webrtc audio buffer with the selected parameters
_ptrAudioBuffer->SetPlayoutSampleRate(kPlayoutFixedSampleRate);
_ptrAudioBuffer->SetPlayoutChannels(kPlayoutNumChannels);
+ _initialized = true;
}
return 0;
}
bool FileAudioDevice::PlayoutIsInitialized() const {
- return true;
+ return _initialized;
}
int32_t FileAudioDevice::RecordingIsAvailable(bool& available) {
@@ -236,7 +238,7 @@ int32_t FileAudioDevice::StopPlayout() {
}
bool FileAudioDevice::Playing() const {
- return true;
+ return _playing;
}
int32_t FileAudioDevice::StartRecording() {
diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.h b/webrtc/modules/audio_device/dummy/file_audio_device.h
index a69b47e..3f3c841 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.h
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.h
@@ -185,6 +185,7 @@ class FileAudioDevice : public AudioDeviceGeneric {
std::unique_ptr<rtc::PlatformThread> _ptrThreadRec;
std::unique_ptr<rtc::PlatformThread> _ptrThreadPlay;
+ bool _initialized;
bool _playing;
bool _recording;
uint64_t _lastCallPlayoutMillis;
I know this is an old question, but I struggled to find a solution myself recently, so I thought sharing would be appreciated.
There is a more or less simple way to get an example running which streams from the browser to native code. You need the WebRTC source: http://www.webrtc.org/native-code/development
The two tools you need are the peerconnection server and client. Both can be found in the folder talk/example/peerconnection.
To get it working you need to patch the peerconnection client to enable DTLS. So patch it with the patch provided here https://code.google.com/p/webrtc/issues/detail?id=3872 and rebuild the client. Now you are set up on the native side!
For the browser I recommend the peer2peer example from https://github.com/GoogleChrome/webrtc. After starting the peerconnection_server and connecting the peerconnection_client, try to connect with the peer2peer example.
Maybe a connection constraint is necessary:
{
"DtlsSrtpKeyAgreement": true
}
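In the browser of that era the constraint was passed as an optional constraint when creating the peer connection, roughly like this (a sketch using the legacy prefixed API):

var pc = new webkitRTCPeerConnection(
    { iceServers: [{ url: 'stun:stun.l.google.com:19302' }] },
    { optional: [{ DtlsSrtpKeyAgreement: true }] });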
You could use the following example, which implements a desktop client for appRTC:
https://github.com/TemasysCommunications/appRTCDesk
This completes and interops with the web client, Android client, and iOS client provided by the open source implementation at webrtc.org, giving you a full suite of clients to work with their free server. peerconnection_{client|server} is an old example from the libjingle time (pre-WebRTC) and does not interop with anything else.
Related
I've recently been playing around with the Web Audio API a little bit. I managed to "read" a microphone and play it to my speakers, which worked quite seamlessly.
Using the Web Audio API, I would now like to resample an incoming audio stream (i.e. the microphone) from 44.1 kHz to 16 kHz; 16 kHz because I am using some tools which require it. Since 44.1 kHz divided by 16 kHz is not an integer, I believe I cannot simply use a low-pass filter and "skip samples", right?
I also saw that some people suggested using .createScriptProcessor(), but since it is deprecated I feel bad using it, so I'm searching for a different approach now. Also, I don't necessarily need audioContext.destination to hear the result! It is still fine if I just get the "raw" data of the resampled output.
My approaches so far
Creating an AudioContext({sampleRate: 16000}) --> throws an error: "Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
Using an OfflineAudioContext --> but it seems to have no option for streams (only for buffers)
Using an AudioWorkletProcessor to resample. In this case, I think I could use the processor to resample the input and output the "resampled" source, but I couldn't really figure out how to resample it.
main.js
...
microphoneGranted: async function(stream){
    audioContext = new AudioContext();
    var microphone = audioContext.createMediaStreamSource(stream);
    await audioContext.audioWorklet.addModule('resample_proc.js');
    const resampleNode = new AudioWorkletNode(audioContext, 'resample_proc');
    microphone.connect(resampleNode).connect(audioContext.destination);
}
...
resample_proc.js (assuming only one input and output channel)
class ResampleProcessor extends AudioWorkletProcessor {
...
    process(inputs, outputs, parameters) {
        const input = inputs[0];
        const output = outputs[0];
        if (input.length > 0) {
            const inputChannel0 = input[0];
            const outputChannel0 = output[0];
            for (let i = 0; i < inputChannel0.length; ++i) {
                //do something with resample here?
            }
            return true;
        }
    }
}

registerProcessor('resample_proc', ResampleProcessor);
Thank you!
Your general idea looks good. While I can't provide the code to do the resampling, I can point out that you might want to start with the Wikipedia article on Sample-rate conversion. Method 1 would work here with L/M = 160/441. Designing the filters takes a bit of work but only needs to be done once. You can also search for polyphase filtering for hints on how to do this effectively.
What Chrome does in various places is use a windowed-sinc function to resample between arbitrary rates. This is Method 2 in the Wikipedia link.
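To illustrate Method 2, below is a rough, unoptimized sketch of a windowed-sinc resampler as a plain function (resampleWindowedSinc is a made-up name; a real implementation would precompute a polyphase filter table instead of evaluating sinc for every sample):

// Windowed-sinc resampling of a Float32Array from srcRate to dstRate.
// halfWidth is the half-length of the truncated sinc kernel, in source samples.
function resampleWindowedSinc(input, srcRate, dstRate, halfWidth = 16) {
    const ratio = srcRate / dstRate;                 // about 2.756 for 44100 -> 16000
    const cutoff = Math.min(1, dstRate / srcRate);   // low-pass at the target Nyquist
    const outLength = Math.floor(input.length / ratio);
    const output = new Float32Array(outLength);
    for (let n = 0; n < outLength; n++) {
        const center = n * ratio;                    // position of output sample n in the input
        const lo = Math.max(0, Math.ceil(center - halfWidth));
        const hi = Math.min(input.length - 1, Math.floor(center + halfWidth));
        let sum = 0;
        for (let k = lo; k <= hi; k++) {
            const x = (k - center) * cutoff;
            const sinc = x === 0 ? 1 : Math.sin(Math.PI * x) / (Math.PI * x);
            const hann = 0.5 + 0.5 * Math.cos(Math.PI * (k - center) / halfWidth); // taper the kernel
            sum += input[k] * cutoff * sinc * hann;
        }
        output[n] = sum;
    }
    return output;
}

Inside the asker's process() callback you would feed inputChannel0 through something like this and post the resampled chunks back to the main thread via this.port.postMessage(), rather than writing them to outputChannel0, since the node's output still runs at the context's sample rate.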
The Web Audio API now allows resampling by passing the sample rate to the AudioContext constructor. This code works in Chrome and Safari:
const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false })
const audioContext = new AudioContext({ sampleRate: 16000 })
const audioStreamSource = audioContext.createMediaStreamSource(audioStream);
audioStreamSource.connect(audioContext.destination)
But it fails in Firefox, which throws a NotSupportedError exception: "AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
In the example below, I've downsampled the audio coming from the microphone to 8 kHz and added a one-second delay so we can clearly hear the effect of downsampling:
https://codesandbox.io/s/magical-rain-xr4g80
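The wiring presumably looks roughly like this (a sketch; the actual sandbox code may differ):

const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
// An 8 kHz context: the microphone stream is resampled when it is connected.
const audioContext = new AudioContext({ sampleRate: 8000 });
const source = audioContext.createMediaStreamSource(audioStream);
// One second of delay so the downsampled audio is clearly distinguishable from the live voice.
const delay = new DelayNode(audioContext, { delayTime: 1.0, maxDelayTime: 1.0 });
source.connect(delay).connect(audioContext.destination);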
I am getting this error
System.IO.FileNotFoundException: 'Could not load file or assembly 'CefSharp.Core, Version=63.0.3.0, Culture=neutral, PublicKeyToken=40c4b6fc221f4138'. The system cannot find the file specified.'
I am trying to run the CefSharp.MinimalExample.OffScreen program on .NET Core 2.0 in Visual Studio 2017.
What I have done so far:
1. Created a .NET Core console application.
2. Installed the NuGet package CefSharp.OffScreen (which installs the dependencies CefSharp.Common and the redist).
3. Installed the Microsoft.Windows.Compatibility NuGet package to get System.Drawing in .NET Core (it was not working with System.Drawing.Common, as the CefSharp ScreenshotAsync function uses System.Drawing).
These steps will clear all the errors and the project will build successfully.
At runtime I am getting the above-mentioned error.
I have checked that all the required files mentioned in the CefSharp documentation are in the current running folder (debug). All the files are available, but the error still does not go away.
It works fine on the old .NET Framework 4.6.
I could not find any helpful documentation anywhere for implementing CefSharp.OffScreen with .NET Core.
This is the code from the example provided in CefSharp.OffScreen.
Please let me know if you can shed some light on this issue. Thanks in advance.
public class Program
{
    private static ChromiumWebBrowser browser;

    public static void Main(string[] args)
    {
        const string testUrl = "https://www.google.com/";

        Console.WriteLine("This example application will load {0}, take a screenshot, and save it to your desktop.", testUrl);
        Console.WriteLine("You may see Chromium debugging output, please wait...");
        Console.WriteLine();

        var settings = new CefSettings()
        {
            //By default CefSharp will use an in-memory cache, you need to specify a Cache Folder to persist data
            CachePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "CefSharp\\Cache")
        };

        //Perform dependency check to make sure all relevant resources are in our output directory.
        Cef.Initialize(settings, performDependencyCheck: true, browserProcessHandler: null);

        // Create the offscreen Chromium browser.
        browser = new ChromiumWebBrowser(testUrl);

        // An event that is fired when the first page is finished loading.
        // This returns to us from another thread.
        browser.LoadingStateChanged += BrowserLoadingStateChanged;

        // We have to wait for something, otherwise the process will exit too soon.
        Console.ReadKey();

        // Clean up Chromium objects. You need to call this in your application otherwise
        // you will get a crash when closing.
        Cef.Shutdown();
    }

    private static void BrowserLoadingStateChanged(object sender, LoadingStateChangedEventArgs e)
    {
        // Check to see if loading is complete - this event is called twice, one when loading starts
        // second time when it's finished
        // (rather than an iframe within the main frame).
        if (!e.IsLoading)
        {
            // Remove the load event handler, because we only want one snapshot of the initial page.
            browser.LoadingStateChanged -= BrowserLoadingStateChanged;

            var scriptTask = browser.EvaluateScriptAsync("document.getElementById('lst-ib').value = 'CefSharp Was Here!'");

            scriptTask.ContinueWith(t =>
            {
                //Give the browser a little time to render
                Thread.Sleep(500);
                // Wait for the screenshot to be taken.
                var task = browser.ScreenshotAsync();
                task.ContinueWith(x =>
                {
                    // Make a file to save it to (e.g. C:\Users\jan\Desktop\CefSharp screenshot.png)
                    var screenshotPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "CefSharp screenshot.png");

                    Console.WriteLine();
                    Console.WriteLine("Screenshot ready. Saving to {0}", screenshotPath);

                    // Save the Bitmap to the path.
                    // The image type is auto-detected via the ".png" extension.
                    task.Result.Save(screenshotPath);

                    // We no longer need the Bitmap.
                    // Dispose it to avoid keeping the memory alive. Especially important in 32-bit applications.
                    task.Result.Dispose();

                    Console.WriteLine("Screenshot saved. Launching your default image viewer...");

                    // Tell Windows to launch the saved image.
                    Process.Start(screenshotPath);

                    Console.WriteLine("Image viewer launched. Press any key to exit.");
                }, TaskScheduler.Default);
            });
        }
    }
}
I've seen the following:
chrome://webrtc-internals
However I'm looking for a way to let users click a button from within the web app to either download or - preferably - POST WebRTC logs to an endpoint baked into the app. The idea is that I can enable non-technical users to share technical logs with me through the click of a UI button.
How can this be achieved?
Note: This should not be dependent on Chrome; Chromium will also be used as the app will be wrapped up in Electron.
You need to write a JavaScript equivalent that captures all RTCPeerConnection API calls. rtcstats.js does that but sends all data to a server. If you replace that behaviour with storing the data in memory you should be good.
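A minimal sketch of that idea (not rtcstats.js itself): wrap the global RTCPeerConnection so every call is appended to an in-memory log that the UI button can later POST. The traced method list and the /webrtc-logs endpoint are only illustrative.

const webrtcLog = [];

function trace(name, value) {
    webrtcLog.push({ time: Date.now(), name: name, value: value });
}

const NativeRTCPeerConnection = window.RTCPeerConnection;

window.RTCPeerConnection = function (config) {
    trace('create', config);
    const pc = new NativeRTCPeerConnection(config);
    // Wrap the calls you care about; extend the list as needed.
    ['createOffer', 'createAnswer', 'setLocalDescription',
     'setRemoteDescription', 'addIceCandidate'].forEach(function (method) {
        const native = pc[method].bind(pc);
        pc[method] = function () {
            trace(method, Array.prototype.slice.call(arguments));
            return native.apply(null, arguments);
        };
    });
    pc.addEventListener('icecandidate', function (e) { trace('icecandidate', e.candidate); });
    pc.addEventListener('connectionstatechange', function () { trace('connectionstatechange', pc.connectionState); });
    return pc;
};

// Wired to the "share logs" button:
function uploadLogs() {
    return fetch('/webrtc-logs', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(webrtcLog)
    });
}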
This is what I ended up using (replace knockout with underscore or whatever):
// "connection" is an existing RTCPeerConnection; connectionReport/connectionStats
// are collected elsewhere. ko is Knockout (swap in underscore or whatever).
connectionReport.signalingState = connection.signalingState;
connectionReport.stats = [];
connection.getStats(function (stats) {
    const reportCollection = stats.result();
    ko.utils.arrayForEach(reportCollection, function (innerReport) {
        const statReport = {};
        statReport.id = innerReport.id;
        statReport.type = innerReport.type;
        const keys = innerReport.names();
        ko.utils.arrayForEach(keys, function (reportKey) {
            statReport[reportKey] = innerReport.stat(reportKey);
        });
        connectionReport.stats.push(statReport);
    });
    connectionStats.push(connectionReport);
});
UPDATE:
It appears that this getStats mechanism is soon-to-be-deprecated.
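For reference, the standardized promise-based getStats() returns a Map-like RTCStatsReport; a minimal sketch of the equivalent collection looks like this:

async function collectStats(connection) {
    // connection is an RTCPeerConnection; every entry in the report is a plain
    // stats object with id, type and timestamp fields.
    const report = await connection.getStats();
    const stats = [];
    report.forEach(entry => stats.push(entry));
    return stats;
}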
Reading through the JS source of chrome://webrtc-internals, I noticed that the page uses a method called chrome.send() to send messages such as chrome.send('enableEventLogRecordings'); to execute logging commands.
According to here:
chrome.send() is a private function only available to internal chrome pages.
so the function is sandboxed, which makes accessing it from a normal page impossible.
I am working on a project which provides video calling from the web to phones (iOS or Android). I am using QuickBlox + WebRTC to implement the video calling. From the web I want to pass some additional info along with the call request, like the caller name, etc. The JavaScript documentation of QuickBlox + WebRTC suggests using the following code (JavaScript):
var array = {
    me: "Hari Gangadharan",
};
QB.webrtc.call(callee.id, 'video', array);
I have implemented the same code but I am unable to get the info attached to the session request on the receiver side (I am getting a nil reference in the iOS method).
- (void)didReceiveNewSession:(QBRTCSession *)session userInfo:(NSDictionary *)userInfo {
//Here userInfo is always nil
}
Please use the following structure:
var array = {
    "userInfo": {
        "me": "Hari Gangadharan",
    }
};
because our iOS SDK uses "userInfo" as the key for parsing custom user info.
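With that structure, the call from the web side (adapting the question's code) becomes:

var params = {
    userInfo: {
        me: "Hari Gangadharan"
    }
};
QB.webrtc.call(callee.id, 'video', params);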
Check Signaling v1.0
I need to call a JavaScript function inside a Chrome window from a Windows service written in C#.
The browser is entirely at my disposal, so configuration is no problem.
For example: the Windows service is a file checker, and when a certain file is changed a JS alert has to pop up.
-EDIT-
The following works fine for client-to-client communication (server-side code).
So when a specific event happens on the server I want to display it on the client (I hoped the commented-out code would do that).
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;
using SignalR.Hosting.Self;
using SignalR.Hubs;

namespace Net.SignalR.SelfHost
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://localhost:8081/";
            var server = new Server(url);
            server.MapHubs();
            server.Start();

            Console.WriteLine("SignalR server started at " + url);
            Console.ReadLine();
            // Clients[collection].flush("bericht: " + message + collection);
        }

        //public void PushMessage(string bericht)
        //{
        //    var hubConnection = new HubConnection("http://localhost:8081/");
        //    var serverHub = hubConnection.CreateProxy("CollectionHub");
        //    serverHub.On("flush", message => System.Console.WriteLine(message));
        //    hubConnection.Start().Wait();
        //    serverHub.Invoke("Subscribe", "Product");
        //    string line = null;
        //    while ((line = bericht) != null)
        //    {
        //        serverHub.Invoke("Publish", line, "Product").Wait();
        //    }
        //    System.Console.Read();
        //}

        public class CollectionHub : Hub
        {
            public void Subscribe(string collectionName)
            {
                Groups.Add(Context.ConnectionId, collectionName);
                Console.WriteLine("Subscribed to: " + collectionName);
                //serverHub.Invoke("Publish", "dit is een eerste test", "Product").Wait();
            }

            public Task Unsubscribe(string collectionName)
            {
                return Clients[collectionName].leave(Context.ConnectionId);
            }

            public void Publish(string message, string collection)
            {
                Clients[collection].flush("bericht: " + message + collection);
            }
        }
    }
}
Sounds like you are describing SignalR.
What is ASP.NET SignalR?
ASP.NET SignalR is a new library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications. What is "real-time web" functionality? It's the ability to have your server-side code push content to the connected clients as it happens, in real-time.
You may have heard of WebSockets, a new HTML5 API that enables bi-directional communication between the browser and server. SignalR will use WebSockets under the covers when it's available, and gracefully fallback to other techniques and technologies when it isn't, while your application code stays the same.
SignalR also provides a very simple, high-level API for doing server to client RPC (call JavaScript functions in your clients' browsers from server-side .NET code) in your ASP.NET application, as well as adding useful hooks for connection management, e.g. connect/disconnect events, grouping connections, authorization.
What can you do with ASP.NET SignalR?
SignalR can be used to add any sort of "real-time" web functionality to your ASP.NET application. While chat is often used as an example, you can do a whole lot more. Any time a user refreshes a web page to see new data, or the page implements Ajax long polling to retrieve new data, is candidate for using SignalR.
What it basically does is give you access to client AND server side functions in both directions. A simple example of its usage can be found on the asp.net website, which will give you a good idea of how to use it and what it's capable of doing.
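For the file-checker scenario in the question, the server-to-client RPC described above would look roughly like this. It is only a sketch using the newer Microsoft.AspNet.SignalR (SignalR 2) API; the hub name FileWatcherHub and the client method showAlert are made up, and the exact calls differ slightly in the older self-host packages used in the question.

using Microsoft.AspNet.SignalR;

// A hub the browser connects to. It needs no server-side methods for a pure push scenario.
public class FileWatcherHub : Hub
{
}

public static class FileChangeNotifier
{
    // Call this from the file-watcher code in the Windows service.
    public static void NotifyFileChanged(string fileName)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<FileWatcherHub>();
        // Invokes the JavaScript function "showAlert" on every connected browser.
        context.Clients.All.showAlert("File changed: " + fileName);
    }
}

On the browser side the page registers hub.client.showAlert = function (msg) { alert(msg); }; with the SignalR JavaScript client before starting the connection.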
You want to be using something like SignalR to do that; it's what it's designed for. Essentially you are describing real-time functionality.
SignalR can be found here and has a great section on JavaScript in its wiki.
In particular you probably want to take a look at Hubs.