WebRTC - remove/reduce latency between devices that are sharing their video streams? - javascript

I'm sorry for not posting any code, but I'm trying to learn more about latency and WebRTC. What is the best way to remove latency between two or more devices that are sharing a video stream?
Or, at least, to reduce it as much as possible?
Thinking about it, I imagined simply setting the devices' clocks to the same time and delaying the requests from the server accordingly. Is that the real trick?

Latency is a function of the number of steps on the path between the source (microphone, camera) and the output (speakers, screen).
Changing clocks will have zero impact on latency.
The delays you have include:
device internal delays - waiting for screen vsync, etc...; nothing much to be done here
device interface delays - a shorter cable will save you some time, but nothing measurable
software delays - your operating system and browser; you might be able to do something here, but you probably don't want to, trust that your browser maker is doing what they can
encoding delays - a faster processor helps a little here, but the biggest concern is for things like audio, the encoder has to wait for a certain amount of audio to accumulate before it can even start encoding; by default, this is 20ms, so not much; eventually, you will be able to request shorter ptimes (what the control is called) for Opus, but it's early days yet
decoding delays - again, a faster processor helps a little, but there's not much to be done
jitter buffer delay - audio in particular requires a little bit of extra delay at a receiver so that any network jitter (fluctuations in the rate at which packets are delivered) doesn't cause gaps in audio; this is usually outside of your control, but that doesn't mean that influencing it is completely impossible
resynchronization delays - when you are playing synchronized audio and video, if one gets delayed for any reason, playback of the other might be delayed so that the two can be played out together; this should be fairly small, and is likely outside of your control
network delays - this is where you can help, probably a lot, depending on your network configuration
You can't change the physics of the situation when it comes to the distance between two peers, but there are a lot of network characteristics that can change the actual latency:
direct path - if, for any reason, you are using a relay server, then your latency will suffer, potentially a lot, because every packet, rather than travelling directly, takes a detour via the relay server (see the sketch at the end of this answer for a quick way to check this)
size of pipe - trying to cram high resolution video down a small pipe can work, but getting big intra-frames down a tiny pipe can add some extra delays
TCP - if UDP is disabled, falling back to TCP can have a pretty dire impact on latency, mainly due to a special exposure to packet loss, which requires retransmissions and causes subsequent packets to be delayed until the retransmission completes (this is called head of line blocking); this can also happen in certain perverse NAT scenarios, even if UDP is enabled in theory, though most likely this will result in a relay server being used
larger network issues - maybe your peers are on different ISPs and those ISPs have poor peering arrangements, so that packets take a suboptimal route between peers; traceroute might help you identify where your packets are going
congestion - maybe you have some congestion on the network; congestion tends to cause additional delay, particularly when it is caused by TCP, because TCP tends to fill up tail-drop queues in routers, forcing your packets to wait extra time; you might also be causing self-congestion if you are using data channels, since the congestion control algorithm there is the same one that TCP uses by default, which is bad for real-time latency
I'm sure that's not a complete taxonomy, but this should give you a start.
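As a concrete check on the "direct path" point above, you can inspect the peer connection's statistics to see whether a relay is in use and what the current round-trip time is. A minimal sketch, assuming pc is your existing RTCPeerConnection:
// Log whether the selected path is relayed and its round-trip time.
async function logPathInfo(pc) {
  const stats = await pc.getStats();
  stats.forEach(function (report) {
    if (report.type === 'candidate-pair' && report.nominated && report.state === 'succeeded') {
      console.log('RTT (s):', report.currentRoundTripTime); // seconds
    }
    if (report.type === 'local-candidate' || report.type === 'remote-candidate') {
      // candidateType 'relay' means packets take a detour via a TURN server
      console.log(report.type, report.candidateType);
    }
  });
}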

I don't think you can do much to improve the latency besides being on a better network with higher bandwidth and lower latency. If you're on the same network or Wi-Fi, there should be quite little latency.
The latency is also higher when your devices have little processing power, so they don't decode the video as fast, but there is not much you can do about that since it all happens in the browser.
What you could do is try different codecs. To do that, you have to manipulate the SDP before it is sent out and reorder or remove the codecs in the m=audio or the m=video line. (But there's not much to choose from in video codecs, just VP8.)
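A rough sketch of that kind of manipulation, assuming pc is your RTCPeerConnection and you want Opus first on the m=audio line (the SDP parsing here is deliberately simplified):
// Move a preferred codec's payload type to the front of the m=audio line.
function preferAudioCodec(sdp, codecName) {
  const lines = sdp.split('\r\n');
  const mIndex = lines.findIndex(function (l) { return l.indexOf('m=audio') === 0; });
  if (mIndex === -1) return sdp;
  // Find the rtpmap line for the codec, e.g. "a=rtpmap:111 opus/48000/2"
  const rtpmap = lines.find(function (l) {
    return new RegExp('^a=rtpmap:\\d+ ' + codecName + '/', 'i').test(l);
  });
  if (!rtpmap) return sdp;
  const payload = rtpmap.match(/^a=rtpmap:(\d+)/)[1];
  const parts = lines[mIndex].split(' ');
  // m=audio <port> <proto> <payload types...> - put our payload type first
  lines[mIndex] = parts.slice(0, 3)
    .concat([payload], parts.slice(3).filter(function (p) { return p !== payload; }))
    .join(' ');
  return lines.join('\r\n');
}

async function sendOfferWithPreferredCodec(pc) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription({ type: offer.type, sdp: preferAudioCodec(offer.sdp, 'opus') });
  // ...then send the munged SDP to the other peer over your signalling channel
}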
You can view the performance of the codecs and the network with the tool that comes with Chrome:
chrome://webrtc-internals/
Just type that into the URL bar.

Related

AudioElement/WebAudio/WebSpeech and output latency

I tried three methods of playing back/generating audio on a Mac and on Android devices. The three methods are
loading a file into an AudioElement (via <audio>),
creating a sound with the WebAudio API,
using the WebSpeech API to generate speech.
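For reference, the three methods look roughly like this (a sketch; 'sound.mp3' is a placeholder file):
// 1. AudioElement
const el = new Audio('sound.mp3'); // equivalent to an <audio> element
el.play();

// 2. WebAudio API
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();

// 3. WebSpeech API
speechSynthesis.speak(new SpeechSynthesisUtterance('hello'));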
Methods 1 and 2 have a considerable latency (i.e. time between call to play and perceivable audio) before they can be heard on my Android devices (though one of the devices appears to have less latency than the other). No latency can be perceived on my Mac.
Method 3 doesn't seem to have any latency at all.
The latency of Method 2, WebAudio API, can be mitigated by subtracting a calculated output latency from the desired starting time. The formula is:
outputLatency = audioContext.currentTime - audioContext.getOutputTimestamp().contextTime.
It does more or less remove the latency from one of my Android devices, but not from the other.
The improvement I saw after using the above formula is the main reason I suspected the problem to be output latency.
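In code, the compensation looks roughly like this (a sketch; ctx is the AudioContext and buffer a decoded AudioBuffer):
// Start a buffer earlier by the estimated output latency.
function playCompensated(ctx, buffer, when) {
  const outputLatency = ctx.currentTime - ctx.getOutputTimestamp().contextTime;
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  // Start earlier by the estimated latency, but never in the past
  source.start(Math.max(ctx.currentTime, when - outputLatency));
}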
According to my research, output latency has at least in part to do with the hardware, so WebSpeech being completely unaffected doesn't make a lot of sense in my opinion.
Is what I am observing here output latency?
If yes, why is WebSpeech not affected by this?
If no, where does the latency come from?

Node server randomly spikes to 100% then crashes. How to diagnose?

I'm making an online browser game with websockets and a node server and if I have around 20-30 players, the CPU is usually around 2% and RAM at 10-15%. I'm just using a cheap Digital Ocean droplet to host it.
However, every 20-30 minutes it seems, the server CPU usage will spike to 100% for 10 seconds, and then finally crash. Up until that moment, the CPU is usually hovering around 2% and the game is running very smoothly.
I can't tell for the life of me what is triggering this as there are no errors in the logs and nothing in the game that I can see causes it. Just seems to be a random event that brings the server down.
There are also some smaller spikes that don't bring the server down, but soon resolve themselves.
I don't think I'm blocking the event loop anywhere and I don't have any execution paths that seem to be long running. The packets to and from the server are usually two per second per user, so not much bandwidth used at all. And the server is mostly just a relay with little processing of packets other than validation so I'm not sure what code path could be so intensive.
What can I do to profile this and find out where to begin investigating what is causing these spikes? I'd like to imagine there's some code path I forgot about that is surprisingly slow under load, or maybe I'm missing a node flag that would resolve it, but I don't know.
I think I might have figured it out.
I'm using mostly websockets for my game and I was running htop and noticed that if someone sends large packets (performing a ton of actions in a short amount of time) then the CPU spikes to 100%. I was wondering why that was when I remembered I was using a binary-packer to reduce bandwidth usage.
I tried changing the parser to JSON instead, so the packets are not compressed and packed, and regardless of how large the packets were, the CPU usage stayed at 2% the entire time.
So I think what was causing the crash was when one player would send a lot of data in a short amount of time and the server would be overwhelmed with having to pack all of it and send it out in time.
This may not be the actual answer but it's at least something that needs to be fixed. Thankfully the game uses very little bandwidth as it is and bandwidth is not the bottleneck so I may just leave it as JSON.
The only problem with JSON encoding is that users can read the packets in the Chrome developer console network tab, which I don't like. It makes it a lot easier to find out how the game works and potentially find cheats/exploits.
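A minimal way to confirm that the packing step really is the bottleneck is to time it against plain JSON for the same payload (packMessage is a stand-in for whatever call the binary packer exposes):
// Compare how long the binary packer takes versus JSON.stringify for one payload.
function timeEncoding(payload) {
  console.time('binary-pack');
  const packed = packMessage(payload); // placeholder for the real binary-packer call
  console.timeEnd('binary-pack');
  console.time('json');
  const json = JSON.stringify(payload);
  console.timeEnd('json');
  console.log('binary bytes:', packed.length, 'json bytes:', json.length);
}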

Report browser crash to server

I want to add a feature to my web app which may cause the browser to run out of memory and crash for computers low on RAM. Is there any way that I can report when this happens to the server to determine how many users this happens to?
EDIT:
The specific problem is that the amount of memory used scales with the amount the user is zoomed in. To cope with this we will limit how much users can zoom in. We have an idea of how much to limit it based on in-house tests, but we want to know if the browser tab crashes for any users after we release this.
We could reduce the memory consumption when zooming, but doing so would entail a tremendous amount of refactoring, and would probably have a negative, and possibly unacceptable, impact on performance.
Note that this is not "we know this happens, but we want to know how much", and more of "we don't think this will happen, but we want to know if it does."
There is no real way to get this information. If a server stops receiving data (sent over a websocket or similar), it has no way of telling what caused the stop: it could be a browser tab being purposefully closed, or a crash.
You could potentially check the speed of a loop and report it to the server - such as the frame rate in a requestAnimationFrame loop - and look for trends, e.g. a drop-off just before a close - but this is laden with pitfalls and potentially adds work to the client that may only add to your already intensive code.
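One variation on that idea is a periodic heartbeat plus a "clean exit" beacon, so the server can at least tell a silent stop apart from a deliberate close. A sketch with made-up endpoints (note that a dropped network connection can still look like a crash):
// Sessions that stop heartbeating without sending /goodbye are crash candidates.
const sessionId = Math.random().toString(36).slice(2);

setInterval(function () {
  navigator.sendBeacon('/heartbeat', JSON.stringify({ sessionId: sessionId, t: Date.now() }));
}, 10000);

window.addEventListener('pagehide', function () {
  // Fires on normal navigation/close, but not when the tab crashes
  navigator.sendBeacon('/goodbye', JSON.stringify({ sessionId: sessionId }));
});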

How to measure connection speed when streaming HTML5 video?

I have a website where the users are going to watch videos. I want to serve them the best quality possible without it pausing for buffering, and so I need a way to measure the connection speed.
Now I've set up some code that, as soon as possible, figures out how much has been buffered, waits for three seconds and then measures it again. And since we know the bit rate, it can calculate the connection speed.
The problem is that the browser is throttling the buffer....
So if the user has a fast connection, it buffers at full speed for only one second before it slows down the buffering a lot, since there is no need to buffer at max speed anyway. So since I'm measuring for three seconds, it reports a bit rate that is way lower than what the connection can actually handle. However, if the connection is about the same speed as the video bit rate, it works perfectly.
How can this be solved? Google has managed it over at YouTube, so it should be possible.
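For reference, the measurement described above looks roughly like this (a sketch; video is the HTMLVideoElement and videoBitrate the known encoding rate in bits per second):
// Estimate throughput from how much the buffer grew over a fixed window.
function estimateConnectionSpeed(video, videoBitrate, windowMs) {
  const start = video.buffered.length ? video.buffered.end(video.buffered.length - 1) : 0;
  setTimeout(function () {
    const end = video.buffered.length ? video.buffered.end(video.buffered.length - 1) : 0;
    const bufferedSeconds = end - start; // seconds of media downloaded in the window
    const speedBps = (bufferedSeconds * videoBitrate) / (windowMs / 1000);
    console.log('estimated connection speed (bit/s):', speedBps);
    // As described above, this underestimates fast connections once the browser throttles buffering.
  }, windowMs);
}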

Is setInterval CPU intensive?

I read somewhere that setInterval is CPU intensive. I created a script that uses setInterval and monitored the CPU usage but didn't notice a change. I want to know if there is something I missed.
What the code does is check for changes to the hash in the URL (the content after #) every 100 milliseconds and, if it has changed, load a page using AJAX. If it has not changed, nothing happens. Would there be any CPU issues with that?
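Roughly, the code in question does this (loadPage stands in for the AJAX call):
// Poll the URL hash every 100 ms and load new content when it changes.
var lastHash = location.hash;
setInterval(function () {
  if (location.hash !== lastHash) {
    lastHash = location.hash;
    loadPage(lastHash); // placeholder for the AJAX request
  }
}, 100);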
I don't think setInterval is inherently going to cause you significant performance problems. I suspect the reputation may come from an earlier era, when CPUs were less powerful.
There are ways that you can improve the performance, however, and it's probably wise to do them:
Pass a function to setInterval, rather than a string.
Have as few intervals set as possible.
Make the interval durations as long as possible.
Have the code running each time as short and simple as possible.
Don't optimise prematurely -- don't make life difficult for yourself when there isn't a problem.
One thing, however, that you can do in your particular case is to use the onhashchange event, rather than timeouts, in browsers that support it.
I would rather say it's quite the opposite. Using setTimeout and setInterval correctly can drastically reduce the browser's CPU usage. For instance, using setTimeout instead of a for or while loop will not only reduce the intensity of CPU usage, but will also guarantee that the browser has a chance to update the UI queue more often. So long-running processes will not freeze and lock up the user experience.
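As a sketch of that idea, a long-running job can be split into setTimeout chunks so the browser can breathe in between (items and processItem stand in for your own data and work):
// Process a large array in chunks so the browser can update the UI in between.
function processInChunks(items, processItem, chunkSize) {
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(nextChunk, 0); // yield to the event loop before the next chunk
    }
  }
  nextChunk();
}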
But in general, using setInterval a lot on your site may slow things down. 20 simultaneously running intervals with more or less heavy work will affect the show. And then again, you can really mess up any part of your code, so that is not a problem of setInterval itself.
And by the way, you don't need to check the hash like that. There is an event for that:
onhashchange
will fire when the hash has changed.
window.addEventListener('hashchange', function(e) {
  console.log('hash changed, yay!');
}, false);
No, setInterval is not CPU intensive in and of itself. If you have a lot of intervals running on very short cycles (or a very complex operation running on a moderately long interval), then that can easily become CPU intensive, depending upon exactly what your intervals are doing and how frequently they are doing it.
I wouldn't expect to see any issues with checking the URL every 100 milliseconds on an interval, though personally I would increase the interval to 250 milliseconds, just because I don't expect that the difference between the two would be noticeable to a typical user and because I generally try to use the longest timeout intervals that I think I can get away with, particularly for things that are expected to result in a no-op most of the time.
There's a bit of marketing going on there with the "CPU intensive" term. What it really means is "more CPU intensive than some alternatives". It's not "CPU intensive" as in "uses a whole lot of CPU power, like a game or a compression algorithm would do".
Explanation:
Once the browser has yielded control it relies on an interrupt from the underlying operating system and hardware to receive control and issue the JavaScript callback. Having longer durations between these interrupts allows hardware to enter low power states which significantly decreases power consumption.
By default the Microsoft Windows operating system and Intel based processors use 15.6ms resolutions for these interrupts (64 interrupts per second). This allows Intel based processors to enter their lowest power state. For this reason web developers have traditionally only been able to achieve 64 callbacks per second when using setTimeout(0) when using HTML4 browsers including earlier editions of Internet Explorer and Mozilla Firefox.
Over the last two years browsers have attempted to increase the number of callbacks per second that JavaScript developers can receive through the setTimeout and setInterval API’s by changing the power conscious Windows system settings and preventing hardware from entering low power states. The HTML5 specification has gone to the extreme of recommending 250 callbacks per second. This high frequency can result in a 40% increase in power consumption, impacting battery life, operating expenses, and the environment. In addition, this approach does not address the core performance problem of improving CPU efficiency and scheduling.
From http://ie.microsoft.com/testdrive/Performance/setImmediateSorting/Default.html
In your case there will not be any issue. But if you're doing some huge animations in canvas or working with WebGL, then there will be some CPU issues, so for that you can use requestAnimationFrame.
Refer to this link about requestAnimationFrame.
If the function takes longer than the interval, that's bad: you can't know when the CPU hiccups or is slow, and calls stack on top of the ongoing ones until the PC freezes. Use setTimeout or, even better, process.nextTick with a callback inside a setTimeout.
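A sketch of that pattern, where the next run is only scheduled once the current one has finished (checkHash stands in for the work the interval would do):
function checkHash() {
  // ... look at location.hash and load new content if it changed ...
}

(function tick() {
  checkHash();
  setTimeout(tick, 100); // schedule the next run only after this one has finished
})();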
