I tried three methods of playing back/generating audio on a Mac and on Android devices. The three methods are:
1. loading a file into an AudioElement (via <audio>),
2. creating a sound with the WebAudio API,
3. using the WebSpeech API to generate speech.
Methods 1 and 2 have considerable latency (i.e. the time between the call to play and perceivable audio) on my Android devices, though one of the devices appears to have less latency than the other. No latency can be perceived on my Mac.
Method 3 doesn't seem to have any latency at all.
The latency of Method 2, WebAudio API, can be mitigated by subtracting a calculated output latency from the desired starting time. The formula is:
outputLatency = audioContext.currentTime - audioContext.getOutputTimestamp().contextTime.
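Applied to scheduling, the compensation might look roughly like this (a minimal sketch; audioContext is an existing AudioContext, buffer a decoded AudioBuffer, and desiredTime the context time at which the sound should be heard):
function playCompensated(audioContext, buffer, desiredTime) {
  // Estimate output latency from the difference between the context clock
  // and the context time of the audio currently reaching the output.
  const outputLatency =
    audioContext.currentTime - audioContext.getOutputTimestamp().contextTime;

  const source = audioContext.createBufferSource();
  source.buffer = buffer;
  source.connect(audioContext.destination);

  // Start earlier by the estimated latency, but never in the past.
  source.start(Math.max(audioContext.currentTime, desiredTime - outputLatency));
}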
This compensation does more or less remove the latency on one of my Android devices, but not on the other.
The improvement I saw after applying the above formula is the main reason I suspect the problem is output latency.
According to my research, output latency is at least partly a hardware issue, so WebSpeech being completely unaffected doesn't make a lot of sense to me.
Is what I am observing here output latency?
If yes, why is WebSpeech not affected by this?
If no, where does the latency come from?
I'm making an online browser game with WebSockets and a Node server, and with around 20-30 players the CPU usage is usually around 2% and RAM at 10-15%. I'm just using a cheap Digital Ocean droplet to host it.
However, every 20-30 minutes or so, the server's CPU usage spikes to 100% for about 10 seconds and then the server crashes. Up until that moment, the CPU is hovering around 2% and the game is running very smoothly.
I can't tell for the life of me what is triggering this: there are no errors in the logs, and nothing in the game that I can see causes it. It just seems to be a random event that brings the server down.
There are also some smaller spikes that don't bring the server down and soon resolve themselves. Here's an image:
I don't think I'm blocking the event loop anywhere, and I don't have any execution paths that seem to be long-running. The packets to and from the server are usually two per second per user, so not much bandwidth is used at all. The server is mostly just a relay with little processing of packets other than validation, so I'm not sure what code path could be so intensive.
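One way to check that assumption directly is Node's built-in event-loop delay monitor from perf_hooks; a minimal sketch (the 5-second reporting interval is arbitrary):
const { monitorEventLoopDelay } = require('perf_hooks');

// Samples how late the event loop is; a large p99/max means something
// is blocking it. Values are in nanoseconds, logged here as milliseconds.
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  console.log(
    'event loop delay (ms): mean=%s p99=%s max=%s',
    (histogram.mean / 1e6).toFixed(1),
    (histogram.percentile(99) / 1e6).toFixed(1),
    (histogram.max / 1e6).toFixed(1)
  );
  histogram.reset();
}, 5000);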
What can I do to profile this and figure out where to begin investigating what is causing these spikes? I'd like to imagine there's some code path I forgot about that is surprisingly slow under load, or maybe I'm missing a Node flag that would resolve it, but I don't know.
I think I might have figured it out.
I'm mostly using WebSockets for my game, and while running htop I noticed that if someone sends large packets (by performing a ton of actions in a short amount of time), the CPU spikes to 100%. I was wondering why that was when I remembered I was using a binary packer to reduce bandwidth usage.
I tried changing the parser to JSON instead, so the packets are no longer compressed and packed, and regardless of how large the packets were, the CPU usage stayed at 2% the entire time.
So I think the crash happened when one player sent a lot of data in a short amount of time and the server was overwhelmed by having to pack all of it and send it out in time.
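One way to confirm that the packing step really is the expensive part is to time both serializers over a burst of representative packets; a minimal sketch (pack and messages are placeholders for the binary packer's encode function and a sample of real game packets):
// Compare serialization cost; `pack` and `messages` are placeholders.
function timeSerializer(label, serialize, messages) {
  const start = process.hrtime.bigint();
  for (const msg of messages) {
    serialize(msg);
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms for ${messages.length} packets`);
}

timeSerializer('JSON', JSON.stringify, messages);
timeSerializer('binary packer', pack, messages); // hypothetical pack()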
This may not be the actual answer, but it's at least something that needs to be fixed. Thankfully the game uses very little bandwidth as it is, and bandwidth is not the bottleneck, so I may just leave it as JSON.
The only problem is that with JSON encoding, users can read the packets in the Chrome developer console's network tab, which I don't like. It makes it a lot easier to figure out how the game works and potentially find cheats/exploits.
I want to add a feature to my web app which may cause the browser to run out of memory and crash on computers low on RAM. Is there any way I can report to the server when this happens, so I can determine how many users it affects?
EDIT:
The specific problem is that the amount of memory used scales with the amount the user is zoomed in. To cope with this we will limit how much users can zoom in. We have an idea of how much to limit it based on in-house tests, but we want to know if the browser tab crashes for any users after we release this.
We could reduce the memory consumption while zooming, but doing so would entail a tremendous amount of refactoring and would probably cause significant, possibly unacceptable, performance losses.
Note that this is not a case of "we know this happens, but we want to know how much", but more of "we don't think this will happen, but we want to know if it does."
There is no real way to get this information. If the server stops receiving data (sent over a WebSocket or similar), it has no way of telling what caused the stop: it could be a browser tab being deliberately closed, or a crash.
You could potentially measure the speed of a loop and report it to the server - such as the frame rate in a requestAnimationFrame loop - and look for trends such as a drop just before the connection closes - but this is laden with pitfalls and adds work on the client that may only add to your already intensive code.
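A rough sketch of that idea, assuming a hypothetical /heartbeat endpoint on the server: count frames with requestAnimationFrame, report the rate once a second, and look server-side for clients whose rate collapsed just before they went silent.
// Count frames between heartbeats; a collapsing frame rate right before a
// client goes silent may hint at memory pressure, but it is far from proof.
let frames = 0;

function onFrame() {
  frames++;
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);

setInterval(() => {
  const fps = frames; // frames counted over the last second
  frames = 0;
  // '/heartbeat' is a hypothetical endpoint; sendBeacon survives page unloads
  // better than a normal XHR/fetch.
  navigator.sendBeacon('/heartbeat', JSON.stringify({ fps, t: Date.now() }));
}, 1000);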
I have a website where users are going to watch videos. I want to serve them the best quality possible without pausing for buffering, so I need a way to measure the connection speed.
I've set up some code that, as soon as possible, figures out how much of the video is buffered, waits three seconds, and then measures it again. Since the bit rate is known, it can calculate the connection speed.
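That measurement might look roughly like this (a minimal sketch; video is the <video> element and VIDEO_BITRATE the known bit rate in bits per second):
// Estimate throughput from how much extra video gets buffered over a fixed
// window; `video` and `VIDEO_BITRATE` are placeholders for your own values.
function estimateConnectionSpeed(video, VIDEO_BITRATE, windowMs = 3000) {
  const bufferedEnd = () =>
    video.buffered.length ? video.buffered.end(video.buffered.length - 1) : 0;

  return new Promise((resolve) => {
    const startSeconds = bufferedEnd();
    setTimeout(() => {
      const bufferedSeconds = bufferedEnd() - startSeconds;
      // seconds of video buffered per second of wall time, times the bit rate
      resolve((bufferedSeconds / (windowMs / 1000)) * VIDEO_BITRATE);
    }, windowMs);
  });
}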
The problem is that the browser throttles the buffering.
So if the user has a fast connection, the browser buffers at full speed for only about one second before slowing the buffering down a lot, since there is no need to buffer at max speed anyway. Because I'm measuring over three seconds, the result is a bit rate far lower than what the connection can actually sustain. However, if the connection speed is about the same as the video bit rate, it works perfectly.
How can this be solved? Google has managed it on YouTube, so it should be possible.
I read somewhere that setInterval is CPU intensive. I created a script that uses setInterval and monitored the CPU usage but didn't notice a change. I want to know if there is something I missed.
What the code does is check for changes to the hash in the URL (the content after #) every 100 milliseconds, and if it has changed, load a page using AJAX. If it has not changed, nothing happens. Would there be any CPU issues with that?
I don't think setInterval is inherently going to cause you significant performance problems. I suspect the reputation may come from an earlier era, when CPUs were less powerful.
There are ways that you can improve the performance, however, and it's probably wise to do them:
Pass a function to setInterval, rather than a string (see the example after this list).
Have as few intervals set as possible.
Make the interval durations as long as possible.
Have the code running each time as short and simple as possible.
Don't optimise prematurely -- don't make life difficult for yourself when there isn't a problem.
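For example, on the first point (checkHash stands in for whatever function runs on each tick):
// A string argument is re-parsed and evaluated on every tick:
setInterval("checkHash()", 100);

// A function reference avoids that cost:
setInterval(checkHash, 100);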
One thing, however, that you can do in your particular case is to use the onhashchange event, rather than timeouts, in browsers that support it.
I would rather say it's quite the opposite. Using setTimeout and setInterval correctly can drastically reduce the browser's CPU usage. For instance, using setTimeout instead of a for or while loop will not only reduce the intensity of CPU usage, but will also guarantee that the browser has a chance to update the UI queue more often. So long-running processes will not freeze and lock up the user experience.
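For instance, a long loop can be broken into chunks so the browser gets a chance to repaint between them (a minimal sketch; items and processItem are placeholders):
// Process a large array in small chunks, yielding to the browser between
// chunks instead of blocking it inside one long for/while loop.
function processInChunks(items, processItem, chunkSize = 100) {
  let i = 0;
  function nextChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      processItem(items[i]);
    }
    if (i < items.length) {
      setTimeout(nextChunk, 0); // let the browser update the UI, then continue
    }
  }
  nextChunk();
}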
But in general, using setInterval really heavily on your site may slow things down. Twenty simultaneously running intervals with more or less heavy work will affect the show. And then again, you can mess up any part of your code; that is not a problem specific to setInterval.
And by the way, you don't need to poll the hash like that. There is an event for that: onhashchange will fire whenever the hash changes.
window.addEventListener('hashchange', function(e) {
console.log('hash changed, yay!');
}, false);
No, setInterval is not CPU intensive in and of itself. If you have a lot of intervals running on very short cycles (or a very complex operation running on a moderately long interval), then that can easily become CPU intensive, depending upon exactly what your intervals are doing and how frequently they are doing it.
I wouldn't expect to see any issues with checking the URL every 100 milliseconds on an interval. Personally, though, I would increase the interval to 250 milliseconds: I don't expect the difference to be noticeable to a typical user, and I generally try to use the longest timeout intervals I think I can get away with, particularly for things that are expected to be a no-op most of the time.
There's a bit of marketing going on there with the "CPU intensive" term. What it really means is "more CPU intensive than some alternatives". It's not "CPU intensive" in the sense of "uses a whole lot of CPU power, like a game or a compression algorithm would".
Explanation:
Once the browser has yielded control it relies on an interrupt from the underlying operating system and hardware to receive control and issue the JavaScript callback. Having longer durations between these interrupts allows hardware to enter low power states which significantly decreases power consumption.
By default the Microsoft Windows operating system and Intel based processors use 15.6ms resolutions for these interrupts (64 interrupts per second). This allows Intel based processors to enter their lowest power state. For this reason web developers have traditionally only been able to achieve 64 callbacks per second when using setTimeout(0) when using HTML4 browsers including earlier editions of Internet Explorer and Mozilla Firefox.
Over the last two years browsers have attempted to increase the number of callbacks per second that JavaScript developers can receive through the setTimeout and setInterval APIs by changing the power conscious Windows system settings and preventing hardware from entering low power states. The HTML5 specification has gone to the extreme of recommending 250 callbacks per second. This high frequency can result in a 40% increase in power consumption, impacting battery life, operating expenses, and the environment. In addition, this approach does not address the core performance problem of improving CPU efficiency and scheduling.
From http://ie.microsoft.com/testdrive/Performance/setImmediateSorting/Default.html
In your case there will not be any issue. But if you're doing some huge animations in canvas or working with WebGL, then there can be CPU issues, and for that you can use requestAnimationFrame.
Refer to this link about requestAnimationFrame.
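A minimal requestAnimationFrame loop for canvas or WebGL drawing (drawFrame is a placeholder for your own rendering code):
// The browser invokes the callback when it is about to repaint, so the work
// is synced to the display refresh instead of running on a fixed timer.
function loop(timestamp) {
  drawFrame(timestamp); // placeholder for your canvas/WebGL drawing
  requestAnimationFrame(loop);
}
requestAnimationFrame(loop);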
Having a function that takes longer than the interval is bad: you can't know when the CPU hiccups or is slow, and new calls stack on top of still-running ones until the PC freezes. Use setTimeout instead, or even better, process.nextTick with a callback inside a setTimeout.
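A sketch of the setTimeout variant, which only schedules the next run after the current one finishes, so slow ticks cannot stack (doWork is a placeholder):
function tick() {
  doWork(); // placeholder for the actual work
  setTimeout(tick, 100); // next run is scheduled only after this one is done
}
setTimeout(tick, 100);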