WebGL: detecting when out of memory (JavaScript)

I am making an app that sometimes needs a lot of GPU memory (many huge textures). I could build a system that keeps frequently used textures in GPU memory and uploads and deletes the rest only when they are needed.
In general there are always more textures than GPU memory, and the more memory I can use, the faster my program runs. I don't want to restrict myself to, say, 100 MB or 1 GB of memory when there could be four times as much free to use. But if I try to allocate too much, the browser will kill my program.
As far as I can see, WebGL has no direct way to tell how much memory is available. What would your strategy be for solving this?
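One workable strategy, since WebGL exposes no memory query, is to treat allocation itself as the probe: allocate a texture, check gl.getError() for gl.OUT_OF_MEMORY, and evict least-recently-used textures on failure. A minimal sketch, assuming you manage your own texture cache (evictLeastRecentlyUsed is a hypothetical hook; drivers don't reliably report OUT_OF_MEMORY and may lose the context instead):

```javascript
// WebGL exposes no "free memory" query, so treat allocation as a probe:
// allocate, check for OUT_OF_MEMORY, and evict cached textures on failure.
// evictLeastRecentlyUsed is a hypothetical hook into your own texture cache;
// it should delete one texture and return false when nothing is left to evict.
function tryAllocateTexture(gl, width, height, evictLeastRecentlyUsed) {
  gl.getError(); // drain any pending error so the check below is meaningful
  for (let attempt = 0; attempt < 3; attempt++) {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    if (gl.getError() !== gl.OUT_OF_MEMORY) {
      return tex; // allocation (probably) succeeded
    }
    gl.deleteTexture(tex); // failed: release the handle, then make room
    if (!evictLeastRecentlyUsed()) break;
  }
  return null; // caller must cope with a genuine failure
}

// Many drivers lose the whole context instead of reporting OUT_OF_MEMORY,
// so a context-loss handler is still essential.
function watchContextLoss(gl) {
  gl.canvas.addEventListener('webglcontextlost', (e) => e.preventDefault());
}
```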

Related

Do processes get slower as the amount of free memory diminishes?

I've always been under the impression that as long as you have free memory, whether it's 100% or 10%, the speed of your processes should not be affected.
However, I recently ran into a situation where my processes seem to get a lot slower as they use up a greater percentage of the available memory.
It could be a problem with the code itself, but I'm hoping for a quick sanity check that I haven't been living a lie before delving deeper into the code itself.
It all really depends on how the app is coded and what it is doing. For some apps it won't make any difference whether free memory is 10% or 100%, as long as there's enough for them to do their job.
Other apps may encounter memory fragmentation, they may cause disk swapping, and they may even adjust their own behavior because of the lower available memory (using smaller buffers, forcing data to disk, etc.). In a garbage-collected system (like Node.js), a low-memory condition may cause more frequent garbage collection too.
The single biggest performance impact from running low on memory comes when the app causes the OS to page memory to disk. This happens when the virtual memory in use exceeds the actual physical memory and the OS has to substitute disk space for allocated memory. The OS tries to swap out memory that hasn't been accessed recently in the hope that it won't be needed again soon, but sometimes that just doesn't work very efficiently and you get a lot of disk thrashing, constantly reading/writing memory from/to disk. Since disks are thousands of times slower than physical memory, this can massively slow things down.
There are also cases where some operations in an app like Photoshop will simply run faster with more memory available, because the algorithms adapt to use the larger amount of memory when working on large objects. A Node.js app or library could be doing the same thing. For example, an image-processing algorithm may be designed to work on images larger than will fit in memory, so it has to decide how much memory is "safe" to allocate and then work on the image in chunks. With less memory available, the work gets done less efficiently, in smaller chunks.
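As a rough sketch of what that adaptation can look like in Node.js (the 25% budget, the clamping limits, and the processChunk callback are all illustrative assumptions, not recommendations):

```javascript
const os = require('os');

// Size each chunk from the physical memory that is currently free:
// use at most a quarter of it, clamped between 1 MiB and 256 MiB.
// Less free memory means smaller chunks and more passes over the data.
function pickChunkSize() {
  const budget = os.freemem() / 4;
  return Math.max(1 << 20, Math.min(budget, 256 * 1024 * 1024));
}

// processChunk(offset, size) is a hypothetical per-chunk worker.
function processInChunks(totalBytes, processChunk) {
  for (let offset = 0; offset < totalBytes; ) {
    const size = Math.min(pickChunkSize(), totalBytes - offset);
    processChunk(offset, size);
    offset += size;
  }
}
```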
A more common reason why things get slower over time is some sort of internal fragmentation or leak that makes regular housekeeping chores (like allocating memory) less efficient. This may occur either at the heap level or at the app level. This is why some admins schedule long-running processes (like servers) to be automatically restarted every once in a while: to clear up any of this fragmentation or small leaks and regularly start afresh.
If it's a major problem, extensive debugging may be able to explain where the major impacts are coming from, but this is not trivial debugging, as it involves lots of measuring, gathering data, adjusting what you're looking at based on what you find, etc., all while trying not to influence the very thing you're trying to find/measure.

Should I worry about a high RSS in Node if heapTotal/heapUsed are low?

So I'm a beginning programmer and I honestly can't find any information about what Node's rss means. Everything just says it's the total amount of memory allocated to the process. Great! So is that a problem?
I'm writing a small Discord bot in Node and I noticed my rss going all over the place.
45.5 MB used for shard 1 (10.1 MB HeapUsed, 16.9 MB HeapTotal)
37.2 MB used for shard 1 (7.1 MB HeapUsed, 9.6 MB HeapTotal)
75.3 MB used for shard 1 (7.2 MB HeapUsed, 9.6 MB HeapTotal)
These measurements were taken about 5 seconds apart after starting the process. It stays at that ~75 MB mark.
I'm wondering whether I should actually worry about this memory usage or whether it's totally fine. Say I run this program on a host with only 2 GB of RAM. Would having an RSS of 1800 MB be bad, or would it just cap itself there and only the heapUsed/heapTotal would improve?
Also, is there any way to check what the process is assigning memory to in Node?
My question really just is: should I worry about the RSS, or should I just ignore it and only look at the heap?
RSS (resident set size) is the total amount of physical memory the operating system has assigned to the running Node process.
This means that even though the heap portions of memory are low, the total RSS can grow arbitrarily high unless the system the Node process runs on decides to reclaim it (possibly because it's running out of memory for other processes).
Most of the time this is "normal" behaviour for Node, although some discussions have flagged it as "a possible hidden memory leak" within Node itself.
When the time comes, the system should signal the process to free the unused memory that has been "reserved" for Node.
Other times, though, RSS can grow because of binary (native) dependencies in your project. If those have memory leaks, the system won't be able to free that memory when it needs to.
TL;DR
Unless you're having a specific problem with RSS memory allocation (in which case, you'll definitely know), don't panic about this behaviour; most of the time the underlying system will take care of it.
Do pay attention to your binary dependencies, though. They will not show up in the heap size reported by the V8 engine, and if one of them leaks, that portion of memory cannot be freed when the time comes.
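To see this split for yourself, Node's built-in process.memoryUsage() reports rss, heapTotal, heapUsed, and external separately. A quick sketch (the 100 MB Buffer is just a stand-in for native-backed memory):

```javascript
function report(label) {
  const m = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';
  console.log(label,
              'rss:', mb(m.rss),
              'heapUsed:', mb(m.heapUsed),
              'heapTotal:', mb(m.heapTotal),
              'external:', mb(m.external));
}

report('baseline:');
// Buffers live outside the V8 heap: rss and external grow,
// while heapUsed barely moves.
const big = Buffer.alloc(100 * 1024 * 1024);
report('after 100 MB Buffer:');
```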

Is it possible to accelerate WebGL matrix multiplication through PNaCl?

The poor performance of matrix multiplication in JavaScript is an obstacle for high-performance WebGL, so I am thinking about using PNaCl to accelerate it.
Ideally, I'd like to pass the ArrayBuffer (Float32Array) and the matrix to PNaCl, use native code to perform the multiplication and update the values in the buffer, and finally notify the page (JavaScript).
But I'm not sure whether the buffer memory can be shared between PNaCl and the page's JavaScript.
If not, I'll have to pass the buffer back to the page, and I'm not sure how that operation would affect performance.
Any suggestions would be appreciated!
PPAPI passes the ArrayBuffer using shared memory, so copying will be minimal.
https://code.google.com/p/chromium/codesearch#chromium/src/ppapi/proxy/plugin_array_buffer_var.h
However, PNaCl plugins run in a different (plugin) process in Chrome, so latency (time to send the message to the plugin and receive an answer) may negate any performance improvement from native code.
As with all optimization questions, you should profile your code to see whether the matrix multiplication is even an issue. If it is, bbudge is right: you'll likely lose any performance gains by having to pass the array to PNaCl and back to JavaScript.
asm.js code runs in the same process and stack as JavaScript, so you may see benefits by using it. Take a look at http://jsperf.com/matrix-multiplication-with-asm-js/6. Unfortunately, there are no guarantees that asm.js will be performant on all browsers. If a browser doesn't support asm.js directly, it will be executed as plain JavaScript, which may end up being slower.
When WebAssembly is available, that will likely be your best bet.
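For a sense of scale, the hot loop in question is tiny: a plain typed-array 4x4 multiply like the sketch below (column-major, as WebGL code conventionally uses) is often fast enough in modern JS engines that crossing a process boundary costs more than it saves:

```javascript
// Multiply two 4x4 column-major matrices stored in flat Float32Arrays:
// out = a * b. No allocation in the hot path; `out` is reused.
function mat4Multiply(out, a, b) {
  for (let col = 0; col < 4; col++) {
    const b0 = b[col * 4], b1 = b[col * 4 + 1],
          b2 = b[col * 4 + 2], b3 = b[col * 4 + 3];
    for (let row = 0; row < 4; row++) {
      out[col * 4 + row] = a[row] * b0 + a[4 + row] * b1 +
                           a[8 + row] * b2 + a[12 + row] * b3;
    }
  }
  return out;
}

const out = new Float32Array(16);
const identity = new Float32Array([1, 0, 0, 0, 0, 1, 0, 0,
                                   0, 0, 1, 0, 0, 0, 0, 1]);
mat4Multiply(out, identity, identity); // out is the identity again
```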

Should I worry about memory consumption of a 100+ module html5 app?

Say I have an MVC-ish html5 app that consists of 100+ small modules. I'd like it to run as smooth as possible even on a tablet or a smartphone.
Since only a handful of the 100+ modules are in use simultaneously, and I'd say half of them aren't even used during an ordinary session with the app, loading them as a single concatenated JS file and keeping it all in memory feels kind of icky.
I currently use CujoJS curl, which is an AMD loader. It's great for development and I think it fits nicely in some production environments too. The downside of course is that individual files take longer to download, but I don't really consider it an issue in this case. What I'm worried about is the memory usage over time, like if the user never closes the window and more modules keep accumulating in the memory as they explore the app. As far as I know, AMD loaders don't provide any means to unload modules.
The question is, should I really be worried about memory consumption at all in this situation? As an exaggerated example, would the difference in memory usage between 200 KiB (on-demand essential modules) and 4000 KiB (everything from essentials to practically never used features) of JS code be negligible even on a mobile device?
If I should be concerned about memory consumption, what should I do to minimize wasted memory? I can only think of minimizing the amount of code in memory by planning ahead, writing efficient code and unloading unneeded modules. Or as a last resort, by reloading the page at some points.
Bonus question: (How) can I unload modules from curl cache? I've read that in RequireJS it is possible with a little tweaking but I've found nothing for curl.
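I haven't found a documented equivalent in curl, but for reference, the RequireJS tweak usually referred to is require.undef(), which removes a module from the loader's registry. A minimal sketch (the module id is made up; note this only drops the loader's reference, so anything else still holding the module keeps it alive):

```javascript
// RequireJS only: require.undef(id) drops a module from the loader's
// registry, so the next require() fetches and re-executes it, and the
// old instance can be garbage-collected once nothing else references it.
// 'modules/rarely-used-feature' is a made-up id for illustration.
require(['modules/rarely-used-feature'], function (feature) {
  feature.run();
  // ...later, when the feature is no longer needed:
  require.undef('modules/rarely-used-feature');
});
```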

JavaScript Memory: Heap, Native, WebGL/GPU

Is it possible to see all sources of memory for JavaScript in Chrome? As far as I know, the three above are what's available.
The heap holds your basic GC-able JS objects. "Native memory" is everything that is not part of the heap, like the DOM, TypedArrays, 2D-context ImageData, and so on. WebGL, too, is a source of memory.
I'd like to know how much my code is using. Chrome recently dropped its native memory profiler, and heap profiling alone is simply not sufficient for large-memory web apps.
Is there a way to get useful information on what percent of these memory sources my code is using?
I estimate native memory by using the Chrome Task Manager (More Tools | Task Manager). It shows private memory, GPU memory, and JavaScript memory; private memory minus JavaScript memory gives an approximation of native memory. But it can't tell you how much memory is allocated by each kind of resource.
It also shows GPU memory if the page has a canvas.
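For the JS heap specifically, Chrome also exposes the non-standard performance.memory; it only covers the heap and says nothing about native or GPU memory, but it can be polled programmatically:

```javascript
// Chrome-only and non-standard: coarse JS heap figures; says nothing
// about native or GPU memory.
if (performance.memory) {
  const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';
  console.log('usedJSHeapSize: ', mb(performance.memory.usedJSHeapSize));
  console.log('totalJSHeapSize:', mb(performance.memory.totalJSHeapSize));
  console.log('jsHeapSizeLimit:', mb(performance.memory.jsHeapSizeLimit));
}
```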
