How to measure connection speed when streaming HTML5 video? - javascript

I have a website where the users are going to watch videos. I want to serve them the best quality possible without it pausing for buffering, and so I need a way to measure the connection speed.
I've set up some code that, as soon as possible, records how far the video has buffered, waits three seconds, and then measures it again. Since the bit rate is known, it can calculate the connection speed.
The problem is that the browser throttles the buffering.
If the user has a fast connection, the browser buffers at full speed for only about a second before slowing the download way down, since there is no need to buffer at max speed anyway. Because I'm measuring over three seconds, this yields a bit rate that is far lower than the connection's real speed. If the connection speed is close to the video bit rate, however, it works perfectly.
How can this be solved? Google has managed it on YouTube, so it should be possible.
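For reference, the measurement described above looks roughly like this (a minimal sketch; the selector and the bitrate constant are assumptions you'd replace with your own values):

    // Minimal sketch of the buffer-delta measurement described above.
    // BITRATE_BITS_PER_SEC is an assumed, known encoding bitrate.
    const video = document.querySelector('video');
    const BITRATE_BITS_PER_SEC = 2e6;

    function measureBandwidth(sampleSeconds = 3) {
      return new Promise((resolve) => {
        const start = video.buffered.length ? video.buffered.end(0) : 0;
        setTimeout(() => {
          const end = video.buffered.length ? video.buffered.end(0) : 0;
          const bufferedSeconds = end - start; // media fetched during the sample
          resolve(bufferedSeconds * BITRATE_BITS_PER_SEC / sampleSeconds); // approx. bits/s
        }, sampleSeconds * 1000);
      });
    }

    measureBandwidth().then((bps) => console.log('~' + (bps / 1e6).toFixed(1) + ' Mbit/s'));

As the question notes, this underestimates fast connections: once the browser throttles its read-ahead, buffered seconds no longer reflect raw throughput.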

Related

How to get the max number of videos a web page can play, in a short time

I want to open 16 live-stream 1080p videos at the same time, but on some customers' computers it causes the browser to crash.
To avoid this, I have to gauge browser performance before playing that many. Right now I play one video and record the page's refresh rate with requestAnimationFrame; if the refresh rate is over 24, I destroy the previously created video, play two videos, and continue until the refresh rate is less than 24. The last count is the number of videos I can play, but this method costs too much time. Is there a method that can keep the detection time within 3 seconds?
By the way, I use WebRTC for the live streams.
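For reference, sampling the refresh rate over a short fixed window keeps each step of the ramp-up quick; a rough sketch, where addVideo() and removeVideos() are hypothetical hooks into your own WebRTC setup:

    // Sample the refresh rate over a fixed ~1 s window instead of waiting
    // for it to degrade; the whole ramp-up then stays within a few seconds.
    function measureFps(windowMs = 1000) {
      return new Promise((resolve) => {
        let frames = 0;
        const start = performance.now();
        function tick(now) {
          frames++;
          if (now - start < windowMs) {
            requestAnimationFrame(tick);
          } else {
            resolve(frames * 1000 / (now - start)); // frames per second
          }
        }
        requestAnimationFrame(tick);
      });
    }

    // addVideo()/removeVideos() are hypothetical hooks into your own code.
    async function findMaxVideos(limit = 16) {
      for (let n = 1; n <= limit; n++) {
        addVideo();
        if (await measureFps() < 24) {
          removeVideos(1);
          return n - 1;
        }
      }
      return limit;
    }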
There will not be a simple answer to this unfortunately - different streams and codecs will have different loads on different browsers, or even on the same browser on different systems.
Video playback support is typically tied closely to the capabilities of the platform the browser is running on - if the codec is supported in hardware, for example, playback will usually require less CPU, battery, etc. Even with hardware support, most platforms will struggle with that many videos.
One way to work around this problem is to combine all your video feeds into a single stream on the server side.
You can then display this combined video so that if someone clicks on a particular part of it, corresponding to a particular stream, you can switch to just that stream, or perhaps show the selected stream in a separate window at a larger size or higher quality.
It puts more work on the server side, but this may be a reasonable tradeoff depending on your needs.

What is the AudioContext currentTime for the first sample recorded?

Using the Web Audio API, I create a bufferSource, and use a new MediaRecorder to record at the same time. I'm recording the sound coming out of my speakers with the built in microphone.
If I play back the original and the new recording, there is a significant lag between the two. (It sounds like about 200ms to me.) If I console.log the value of globalAudioCtx.currentTime at the point of calling the two "start" methods, the two numbers are exactly the same. The values of Date.now() are also exactly the same.
Where is this latency introduced? The latency due to speed of sound is about 1000x as small as what I am hearing.
In short, how can I get these two samples to play back at exactly the same time?
I'm working in Chrome on Linux.
"Where is this latency introduced?"
Both on the playback and recording.
Your sound card has a buffer, and software has to write audio to that buffer in small chunks at a time. If the software can't keep up, choppy audio is heard. So, buffer sizes are set to be large enough to prevent that from happening.
The same is true on the recording end. If a buffer isn't large enough, recorded audio data would be lost if the software wasn't able to read from that buffer fast enough, causing choppy and lost audio.
Browsers aren't using the lowest-latency mode of operation with your sound card. There are some tweaks you can apply (such as using WASAPI in exclusive mode on Windows with Chrome), but you're at the mercy of the browser developers, who didn't design this with folks like you and me in mind.
No matter how low you go though in buffer size, there is still going to be lag. That's the nature of computer-based digital audio.
"how can I get these two samples to play back at exactly the same time?"
You'll have to delay one of the samples to bring them back in sync.
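A minimal sketch of that compensation, assuming you've measured the offset (the ~200 ms you're hearing) and decoded both recordings into AudioBuffers; all names here are placeholders:

    // ctx, originalBuffer, recordedBuffer are placeholders; measuredLagSeconds
    // is the offset you determined (e.g. ~0.2 s).
    const ctx = new AudioContext();

    function playInSync(originalBuffer, recordedBuffer, measuredLagSeconds) {
      const startAt = ctx.currentTime + 0.1; // schedule slightly in the future

      const original = ctx.createBufferSource();
      original.buffer = originalBuffer;
      original.connect(ctx.destination);
      original.start(startAt + measuredLagSeconds); // hold the early one back...

      const recorded = ctx.createBufferSource();
      recorded.buffer = recordedBuffer;
      recorded.connect(ctx.destination);
      recorded.start(startAt); // ...so the two line up
    }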

Node server randomly spikes to 100% then crashes. How to diagnose?

I'm making an online browser game with websockets and a node server and if I have around 20-30 players, the CPU is usually around 2% and RAM at 10-15%. I'm just using a cheap Digital Ocean droplet to host it.
However, every 20-30 minutes it seems, the server CPU usage will spike to 100% for 10 seconds and then finally crash. Up until that moment, the CPU is usually hovering around 2% and the game is running very smoothly.
I can't tell for the life of me what is triggering this as there are no errors in the logs and nothing in the game that I can see causes it. Just seems to be a random event that brings the server down.
There are also some smaller spikes that don't bring the server down but soon resolve themselves.
I don't think I'm blocking the event loop anywhere and I don't have any execution paths that seem to be long running. The packets to and from the server are usually two per second per user, so not much bandwidth used at all. And the server is mostly just a relay with little processing of packets other than validation so I'm not sure what code path could be so intensive.
How can I profile this and figure out where to begin investigating what causes these spikes? I'd like to imagine there's some code path I forgot about that is surprisingly slow under load, or maybe I'm missing a Node flag that would resolve it, but I don't know.
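(For a first check, a simple event-loop lag monitor can confirm whether the loop is being blocked; the sketch below is a starting point with an arbitrary threshold. Node's built-in profiler, node --prof followed by node --prof-process on the generated isolate log, can then attribute the CPU time.)

    // If the event loop is blocked, this timer fires late and the drift
    // shows up in the log. The 200 ms threshold is an arbitrary assumption.
    const INTERVAL_MS = 500;
    let last = Date.now();

    setInterval(() => {
      const now = Date.now();
      const lag = now - last - INTERVAL_MS;
      if (lag > 200) {
        console.warn('event loop blocked ~' + lag + ' ms at ' + new Date(now).toISOString());
      }
      last = now;
    }, INTERVAL_MS);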
I think I might have figured it out.
I'm using mostly websockets for my game, and while running htop I noticed that if someone sends large packets (performing a ton of actions in a short amount of time), the CPU spikes to 100%. I was wondering why that was when I remembered I was using a binary packer to reduce bandwidth usage.
I tried changing the parser to JSON instead, so the packets are not compressed and packed, and regardless of how large the packets were, the CPU usage stayed at 2% the entire time.
So I think what was causing the crash was when one player would send a lot of data in a short amount of time and the server would be overwhelmed with having to pack all of it and send it out in time.
This may not be the actual answer but it's at least something that needs to be fixed. Thankfully the game uses very little bandwidth as it is and bandwidth is not the bottleneck so I may just leave it as JSON.
The only problem is that with JSON encoding, users can read the packets in the Chrome developer console's network tab, which I don't like. It makes it a lot easier to find out how the game works and potentially find cheats/exploits.
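One hedged mitigation, independent of the serialization format: cap how many messages a single client may send per second, so one player can't flood the packer. This sketch assumes the ws library (wss is an existing WebSocket.Server) and a hypothetical handleGameMessage(); the limit is arbitrary:

    // Assumes the "ws" library and an existing WebSocket.Server `wss`.
    // MAX_MSGS_PER_SEC is an arbitrary choice; tune it for your game.
    const MAX_MSGS_PER_SEC = 20;

    wss.on('connection', (socket) => {
      let count = 0;
      const reset = setInterval(() => { count = 0; }, 1000);

      socket.on('message', (data) => {
        if (++count > MAX_MSGS_PER_SEC) {
          socket.terminate(); // or just drop the message
          return;
        }
        handleGameMessage(socket, data); // hypothetical existing handler
      });

      socket.on('close', () => clearInterval(reset));
    });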

WebRTC - remove/reduce latency between devices that are sharing their videos stream?

I'm sorry for not posting any code, but I'm trying to learn more about latency and WebRTC. What is the best way to remove latency between two or more devices that are sharing a video stream?
Or, at least, to reduce latency as much as possible?
Thinking about it, I imagined just setting the devices' clocks to the same time and delaying the requests from the server; is this the real trick?
Latency is a function of the number of steps on the path between the source (microphone, camera) and the output (speakers, screen).
Changing clocks will have zero impact on latency.
The delays you have include:
device internal delays - waiting for screen vsync, etc...; nothing much to be done here
device interface delays - a shorter cable will save you some time, but nothing measurable
software delays - your operating system and browser; you might be able to do something here, but you probably don't want to, trust that your browser maker is doing what they can
encoding delays - a faster processor helps a little here, but the biggest concern is for things like audio, the encoder has to wait for a certain amount of audio to accumulate before it can even start encoding; by default, this is 20ms, so not much; eventually, you will be able to request shorter ptimes (what the control is called) for Opus, but it's early days yet
decoding delays - again, a faster processor helps a little, but there's not much to be done
jitter buffer delay - audio in particular requires a little bit of extra delay at a receiver so that any network jitter (fluctuations in the rate at which packets are delivered) doesn't cause gaps in audio; this is usually outside of your control, but that doesn't mean that it's completely impossible
resynchronization delays - when you are playing synchronized audio and video, if one gets delayed for any reason, playback of the other might be delayed so that the two can be played out together; this should be fairly small, and is likely outside of your control
network delays - this is where you can help, probably a lot, depending on your network configuration
You can't change the physics of the situation when it comes to the distance between two peers, but there are a lot of network characteristics that can change the actual latency:
direct path - if, for any reason, you are using a relay server, then your latency will suffer, potentially a lot, because every packet doesn't travel directly; it takes a detour via the relay server (see the sketch after this list for one way to check whether a relay is in use)
size of pipe - trying to cram high resolution video down a small pipe can work, but getting big intra-frames down a tiny pipe can add some extra delays
TCP - if UDP is disabled, falling back to TCP can have a pretty dire impact on latency, mainly due to a special exposure to packet loss, which requires retransmissions and causes subsequent packets to be delayed until the retransmission completes (this is called head of line blocking); this can also happen in certain perverse NAT scenarios, even if UDP is enabled in theory, though most likely this will result in a relay server being used
larger network issues - maybe your peers are on different ISPs and those ISPs have poor peering arrangements, so that packets take a suboptimal route between peers; traceroute might help you identify where your packets are going
congestion - maybe you have some congestion on the network; congestion tends to cause additional delay, particularly when it is caused by TCP, because TCP tends to fill up tail-drop queues in routers, forcing your packets to wait extra time; you might also be causing self-congestion if you are using data channels, the congestion control algorithm there is the same one that TCP uses by default, which is bad for real-time latency
I'm sure that's not a complete taxonomy, but this should give you a start.
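Two of those items (relay usage and round-trip time) can be checked from inside the page with the standard getStats() API; a rough sketch, where pc is your RTCPeerConnection:

    // Rough relay/RTT check using the standard getStats() API.
    async function inspectPath(pc) {
      const stats = await pc.getStats();
      stats.forEach((report) => {
        if (report.type === 'candidate-pair' && report.state === 'succeeded') {
          console.log('current RTT:', report.currentRoundTripTime, 's');
          const local = stats.get(report.localCandidateId);
          if (local && local.candidateType === 'relay') {
            console.log('packets are detouring through a relay (TURN) server');
          }
        }
      });
    }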
I don't think you can do much to improve the latency besides being on a better network with higher bandwidth and lower latency. If you're on the same network or Wi-Fi, there should be very little latency.
I think the latency is also higher when your devices have little processing power, since they don't decode the video as fast, but there is not much you can do about that; it all happens in the browser.
What you could do is try different codecs. To do that you have to manipulate the SDP before it is sent out and reorder or remove the codecs in the m=audio or m=video line. (But there isn't much to choose from in video codecs; just VP8.)
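That manipulation looks roughly like the sketch below: find the payload type the SDP advertises for a codec and move it to the front of the m=video line. It is deliberately simplified and assumes a well-formed SDP:

    // Simplified: move the payload type advertised for codecName to the
    // front of the m=video line before calling setLocalDescription.
    function preferCodec(sdp, codecName) {
      const lines = sdp.split('\r\n');
      const mIndex = lines.findIndex((l) => l.startsWith('m=video'));
      const rtpmap = lines.find((l) => l.includes('a=rtpmap:') && l.includes(codecName));
      if (mIndex === -1 || !rtpmap) return sdp;

      const payload = rtpmap.match(/a=rtpmap:(\d+)/)[1];
      const parts = lines[mIndex].split(' '); // m=video <port> <proto> <pt> <pt> ...
      const reordered = parts.slice(0, 3)
        .concat([payload], parts.slice(3).filter((pt) => pt !== payload));
      lines[mIndex] = reordered.join(' ');
      return lines.join('\r\n');
    }

    // Hypothetical usage: offer.sdp = preferCodec(offer.sdp, 'VP8');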
You can view the performance of the codecs and network on the tool that comes with chrome:
chrome://webrtc-internals/
just type that into the URL-bar.

Perfect Synchronization with Web Audio API

I'm working on a simple audio visualization application that uses a Web Audio API analyzer to pull frequency data, as in this example. Expectedly, the more visual elements I add to my canvases, the more latency there is between the audio and the yielded visual results.
Is there a standard approach to accounting for this latency? I can imagine a lookahead technique that buffers the upcoming audio data. I could work with synchronizing the JavaScript and Web Audio clocks, but I'm convinced that there's a much more intuitive answer. Perhaps it is as straightforward as playing the audio aloud with a slight delay (although this is not nearly as comprehensive).
The dancer.js library seems to have the same problem (there is always a very subtle delay), whereas other applications seem to have solved the lag issue entirely. I have so far been unable to pinpoint the technical difference. SoundJS seems to handle this a bit better, but it would be nice to build from scratch.
Any methodologies to point me in the right direction are much appreciated.
I think you will find some answers to precise audio timing in this article:
http://www.html5rocks.com/en/tutorials/audio/scheduling/
SoundJS uses this approach to enable smooth looping, but still uses javascript timers for delayed playback. This may not help you sync the audio timing with the animation timing. When I built the music visualizer example for SoundJS I found I had to play around with the different values for fft size and tick frequency to get good performance. I also needed to cache a single shape and reuse it with scaling to have performant graphics.
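The core of the lookahead technique from that article, sketched: a coarse JavaScript timer wakes up frequently, and anything due within the lookahead window is scheduled on the sample-accurate Web Audio clock. The interval and lookahead values here are typical choices, not prescribed ones:

    const ctx = new AudioContext();
    const LOOKAHEAD_SEC = 0.1; // how far ahead to schedule audio
    const TIMER_MS = 25;       // how often the JS timer checks
    let nextNoteTime = ctx.currentTime;

    function playClick(time) {
      // short oscillator blip, started on the audio clock, not the JS clock
      const osc = ctx.createOscillator();
      osc.connect(ctx.destination);
      osc.start(time);
      osc.stop(time + 0.05);
    }

    function scheduler() {
      // schedule everything that falls inside the lookahead window
      while (nextNoteTime < ctx.currentTime + LOOKAHEAD_SEC) {
        playClick(nextNoteTime);
        nextNoteTime += 0.5; // e.g. a click every half second
      }
    }

    setInterval(scheduler, TIMER_MS);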
Hope that helps.
I'm concerned when you say the more visual elements you add to your canvases, the more latency you get in audio. That shouldn't really happen quite like that. Are your canvases being animated using requestAnimationFrame? What's your frame rate like?
You can't, technically speaking, synchronize the JS and Web Audio clocks - the Web Audio clock is the audio hardware clock, which is literally running off a different clock crystal than the system clock (on many systems, at least). The vast majority of web audio (ScriptProcessorNodes being the major exception) shouldn't have additional latency introduced when your main UI thread becomes a bit more congested.
If the problem is that the analysis just seems to lag (i.e. the visuals are consistently "behind" the audio), it could just be the inherent lag in the FFT processing. You can reduce the FFT size in the Analyser, although you'll get less definition then; as a workaround, you can also run all the audio through a delay node to get it to re-sync with the visuals.
Also, you may find that the "smoothing" parameter on the Analyser makes it less time-precise - try turning that down.
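Putting those two suggestions together, a minimal sketch: tap the Analyser before a DelayNode, so the audible output is held back by a measured amount while the analysis sees the signal early. VISUAL_LAG_SEC is an assumption you would tune for your own app:

    // Tap the analyser before the delay so the audible output is held back.
    const ctx = new AudioContext();
    const VISUAL_LAG_SEC = 0.05; // assumed, measured value

    const analyser = ctx.createAnalyser();
    analyser.fftSize = 256;               // smaller FFT = less analysis lag
    analyser.smoothingTimeConstant = 0.3; // less smoothing = more time-precise

    const delay = ctx.createDelay(1.0);
    delay.delayTime.value = VISUAL_LAG_SEC;

    // source -> analyser -> delay -> speakers; `source` is whatever node
    // you already have (e.g. a MediaElementSourceNode).
    function wire(source) {
      source.connect(analyser);
      analyser.connect(delay);
      delay.connect(ctx.destination);
    }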
