Why does node.js suddenly use less memory? - javascript

I have a 25MB JSON file that I "require" when my app starts up. Initially, it seems that the node.js process takes up almost 200MB of memory.
But if I leave it running and come back to it, Activity Monitor reports that it is using only 9MB, which makes no sense at all! At the very least, it should be a few MB more, since even a simple node.js app that does almost nothing (acting like a server) uses 9MB.
The app seems to work fine - it is a server that provides search suggestions from a word list of 220,000 words.
Is Activity Monitor wrong?
Why is it using only 9MB, but initially used ~200MB when the application started up?

Since it's JavaScript, things that are no longer being used are removed by the garbage collector (GC), freeing memory. Everything (or many things) may have been loaded into memory at the start, and items that were no longer needed were then removed by the GC. A process often uses more memory while work is in progress and releases some afterwards; for example, temporary data structures may be needed while parsing but are no longer needed once the work is done.
It's also possible that memory was swapped out and temporarily written to disk (to be retrieved later if needed). This swapping is done by your OS and tends to be used more on programs that reserve a lot of memory.
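If you want to watch this happen yourself, here's a minimal sketch (assuming Node is started with the --expose-gc flag so global.gc() is available; the exact numbers will vary by machine and Node version):

```js
// Run with: node --expose-gc gc-demo.js
function mb(bytes) { return (bytes / 1024 / 1024).toFixed(1) + ' MB'; }

// Simulate the parsed contents of a large JSON file.
let data = Array.from({ length: 1_000_000 }, (_, i) => ({ word: 'word' + i }));
console.log('after load:', mb(process.memoryUsage().heapUsed));

data = null;   // drop the only reference to the structure
global.gc();   // force a collection (only available with --expose-gc)
console.log('after GC:  ', mb(process.memoryUsage().heapUsed));
```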

How much memory it takes to load the file depends on a number of factors.
What text encoding is being used to store the file? JavaScript uses UTF-16 internally, so if that's not what's being used on disk, the size may be different. If the file is in UTF-32, for example, then the in-memory UTF-16 version will be smaller unless it's full of astrals. If the file is in UTF-8, then things are reversed: the in-memory version will be larger unless it's full of astrals. But for now, let's just assume that they're about the same size, either because they use the same encoding or the pattern of astrals just happens to make the file sizes more or less the same.
You're right that it takes at least 25MB to load the file (assuming that encodings don't interfere). The semantics of the JSON API being what they are, you need to have the whole file in memory as a string, so the app will take up at least that much memory at that time. That doesn't count whatever the parser needs to run, so you need at least 34MB: 25 for the file, 9 for Node, and then whatever your particular app uses for itself.
But your app doesn't need all of that memory all the time. Depending on how you've written the app, you're probably destroying your references to the file at some point.
Because of the semantics of JSON, there's no way to avoid loading the whole file into memory, which takes 25MB because that's the size of the file. There's also no way to avoid taking up whatever memory the JSON parser needs to do its work and build the object.
But depending on how you've written the app, there probably comes a point when you no longer need that data. Either you exit the function that you used to load the file, or you assign that variable to something else, or any of a number of other possibilities. However it happens, JavaScript reclaims memory that's not being used anymore. This is called garbage collection, and it's popular among so-called "scripting languages" (though other programming languages can use it too).
There's also the question of text representation versus in-memory representation. Strings require about the same amount of space in memory as on disk, unless you change the encoding, but Numbers and Booleans are another matter entirely. In JavaScript, all Numbers are 64-bit floating-point numbers, so if most of your numbers on disk are more than four characters long, the in-memory representation will be smaller, possibly by quite a bit. Note that I said characters, not digits: it's true that digits are characters, but +, -, e, and . are characters too, so -1e0 takes up twice as much space as -1 when written as text, even though they represent the same value in memory. As another example, 3.14 takes up as much space as 1000 as text (and they happen to take up the same amount of space in memory: 64 bits each). But -0.00000001 and 100000000 take up much less space in memory than on disk, because the in-memory representation is smaller. Booleans can be even smaller: different engines store them in different ways, but you could theoretically do it in as little as one bit. That's a far cry from the 8 bytes it takes to store "true", or 10 to store "false".
So if your data is mostly about Numbers and Booleans, then the in-memory representation stands to get a lot smaller. If it's mostly Strings, then not so much.
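If you want to see where the memory actually goes in your own app, here's a rough sketch (words.json is a stand-in for your file; heapUsed deltas are approximate because the GC may run in between measurements):

```js
const fs = require('fs');

const before = process.memoryUsage().heapUsed;
const text = fs.readFileSync('words.json', 'utf8');   // whole file as one string
const afterRead = process.memoryUsage().heapUsed;
const data = JSON.parse(text);                        // parsed object graph
const afterParse = process.memoryUsage().heapUsed;

console.log('file on disk   :', fs.statSync('words.json').size, 'bytes');
console.log('string in heap :', afterRead - before, 'bytes (approx.)');
console.log('parsed object  :', afterParse - afterRead, 'bytes (approx.)');
```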

Related

Why does WebAssembly.Memory take `initial` and `maximum` in units of number of pages, rather than number of bytes

I will almost always be computing my memory needs in bytes. I didn't know how large a "page" is and had to look it up (MDN says it's 64KB). It's not obvious that the size of a page should even be a fixed, documentable, platform-independent constant. I'm unable to find a reason for, or even an attestation of, that page size in the spec link MDN gives.
I can only think of really bad reasons for it to be this way and I want to figure out whether it really is that bad.
The page size for wasm is documented in the spec: https://webassembly.github.io/spec/core/exec/runtime.html#memory-instances
The value is 64k rather than 640k.
WebAssembly itself is designed to be a compiler target rather than hand-written, so normally the toolchain will set this value rather than a human. As it happens, if you use clang or emscripten, then the memory size, as specified on the command line, is actually in bytes rather than pages: https://lld.llvm.org/WebAssembly.html#cmdoption-initial-memory
Wasm memory size can only be chosen in steps of pages (64 KiB), because that generally makes memory bounds checks using hardware virtual memory techniques more feasible.
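To make the pages/bytes relationship concrete, a small sketch (memoryForBytes is a made-up helper, not part of any API):

```js
// WebAssembly.Memory is sized in 64 KiB pages; round byte counts up to whole pages.
const PAGE_SIZE = 64 * 1024;

function memoryForBytes(initialBytes, maximumBytes) {
  return new WebAssembly.Memory({
    initial: Math.ceil(initialBytes / PAGE_SIZE),
    maximum: Math.ceil(maximumBytes / PAGE_SIZE),
  });
}

const mem = memoryForBytes(1_000_000, 16_000_000); // ~1 MB now, up to ~16 MB
console.log(mem.buffer.byteLength);                // 1048576 (16 pages)
```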

How do images impact the browser memory relative to JS?

I am curious about how media such as images relate to memory in the browser and the JavaScript heap. Explicitly, how many JS objects would be equivalent to a 2MB image?
Disclaimer - I acknowledge that this question is too vague for a precise answer. I will provide some arbitrary constraints, but ultimately I'm looking for a general way to look at this problem. Suppose I am building a chat app and considering keeping all the messages ever sent in memory; how might I calculate a tipping point for a reasonable number of messages to keep in memory? And how do I factor in that some messages are images?
You may assume
The browser is chrome.
The image is in jpg format and loaded using new Image() in JS and then inserted into DOM.
Each object has about three to five key-value pairs. Key-value pairs are strings or numbers ranging from small ints to strings of about 10 ascii chars.
Suppose the 2MB image is visible on screen.
My intuition tells me that the memory cost of a 2MB image is way less than the cost of an equivalent mass of JS objects, i.e. (2MB / size_of_obj_in_bytes) of them, because:
The bytes of the image are not in JS land and therefore somehow compressed and optimised away.
The objects will not exist in a silo; references to them will be created throughout user code, consuming more memory as they're passed through functions.
There will be lots of garbage collection overhead.
I don't know for certain that I'm right and I certainly don't know by how much or even how to begin measuring this. And since you can't optimize what you can't measure...
Disclaimer 2 - Premature optimization is the root of all evil etc etc. Just curious about digging deeper into the internals.
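As a starting point for measuring the JS-object side, here's a rough sketch using the nonstandard, Chrome-only performance.memory API (the numbers are coarse and GC makes them noisy; image decode buffers live outside the JS heap, so compare those via Chrome's Task Manager instead):

```js
// Chrome-only: performance.memory reports approximate JS heap usage.
function heapUsed() {
  return performance.memory.usedJSHeapSize; // bytes
}

const N = 100_000;
const before = heapUsed();
const messages = [];
for (let i = 0; i < N; i++) {
  // Shape roughly matching the question: a few small string/number fields.
  messages.push({ id: i, user: 'user' + (i % 50), text: 'hello #' + i });
}
const after = heapUsed();
console.log('approx bytes per message object:', (after - before) / N);
```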

Is there a way to get the address of a variable or object in javascript? [duplicate]

Is it possible to find the memory address of a JavaScript variable? The JavaScript code is part of (embedded into) a normal application where JavaScript is used as a front end to C++ and does not run on the browser. The JavaScript implementation used is SpiderMonkey.
If it were possible at all, it would be very dependent on the JavaScript engine. Modern JavaScript engines compile their code with a just-in-time compiler, and messing with their internal variables would be bad for performance, stability, or both.
If the engine allows it, why not expose a native function to the script and exchange the variable's value through a function call interface?
It's more or less impossible: JavaScript's evaluation strategy is always call by value, but in the case of Objects (including arrays) the value passed is a reference to the Object, which is not copied or cloned. If you reassign the Object itself inside the function, the original won't be changed, but if you reassign one of the Object's properties, that will affect the original Object.
That said, what are you trying to accomplish? If it's just passing complex data between C++ and Javascript, you could use a JSON library to communicate. Send a JSON object to C++ for processing, and get a JSON object to replace the old one.
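For illustration, here's what the JS side of such an exchange might look like (native.process is a hypothetical function your embedding would expose; the name and signature are assumptions, not part of any real API):

```js
// Hand C++ a JSON string, get a JSON string back, parse it into a fresh object.
const request = JSON.stringify({ op: 'search', prefix: 'mem' });
const response = JSON.parse(native.process(request)); // hypothetical native hook
console.log(response.suggestions);
```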
You can, using a side-channel, but you can't do anything useful with it other than attacking browser security!
The closest thing to virtual addresses are ArrayBuffers. If one virtual address within an ArrayBuffer is identified, the remaining addresses are also known, as both the memory addresses and the array indices are linear.
Although virtual addresses are not themselves physical memory addresses, there are ways to translate a virtual address into a physical memory address.
Browser engines always allocate ArrayBuffers page-aligned. The first byte of an ArrayBuffer is therefore at the beginning of a new physical page and has the least significant 12 bits set to ‘0’.
If a large chunk of memory is allocated, browser engines typically use mmap to allocate this memory, which is optimized to allocate 2 MB transparent huge pages (THP) instead of 4 KB pages. As these physical pages are mapped on demand, i.e., as soon as the first access to the page occurs, iterating over the array indices results in page faults at the beginning of each new page. The time to resolve a page fault is significantly higher than that of a normal memory access. Thus, you can learn the index at which a new 2 MB page starts; at this array index, the underlying physical page has the 21 least significant bits set to ‘0’.
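For illustration only, a sketch of the timing idea described above (real attacks need far more care, and browsers deliberately coarsen timers to frustrate exactly this):

```js
// Time the first access to each 4 KiB page of a large ArrayBuffer.
// First touches that fault in a new page should be measurably slower.
const SIZE = 64 * 1024 * 1024; // 64 MiB
const PAGE = 4096;
const buf = new Uint8Array(new ArrayBuffer(SIZE));

const timings = [];
for (let offset = 0; offset < SIZE; offset += PAGE) {
  const t0 = performance.now();
  buf[offset] = 1;             // first access maps the page
  timings.push(performance.now() - t0);
}

// Spikes recurring at 2 MiB intervals would hint at transparent huge pages.
const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
const spikes = timings
  .map((t, i) => [i * PAGE, t])
  .filter(([, t]) => t > avg * 10);
console.log(spikes.slice(0, 10));
```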
This answer doesn't attempt a proof of concept because I don't have time for that right now, though I may be able to provide one in the future. It is an attempt to point the person asking the question in the right direction.
Sources:
http://www.misc0110.net/files/jszero.pdf
https://download.vusec.net/papers/anc_ndss17.pdf
I think it's possible, but you'd have to:
download the Node.js source code,
add in your function manually (e.g. returning the memory address of a pointer),
compile it, and use the result as your node executable.

Do spaces/comments slow Javascript down?

I was wondering: do whitespace and comments slow down JavaScript? I'm running a brute-force attack, which takes some time (30 seconds). Removing whitespace does not show a significant increase in speed, but I figure the browser just has to parse more.
So, is it of any use to remove unnecessary whitespace and comments to speed the whole thing up?
People usually use minifiers to reduce the SIZE of the script, to improve download speed, rather than to make any difference in the speed of parsing the script.
Whitespace and comments will have little effect on how long it takes a browser to execute the script. The parser does need to skip over whitespace and comments, but in reality the cost is so minute with current computing power that it would be impossible to notice any impact.
SIZE however is still important even with the large bandwidth available in our broadband world.
Whitespaces and comments increase the size of the JavaScript file, which slows down the actual downloading of the file from the server - minification is the process of stripping unnecessary characters from a JavaScript file to make it smaller and easier to download.
However, since you mention a brute force attack, the bottleneck is probably not the download. Try using a profiler to find what slows you down.
There is always a point to minifying, combining, and gzipping your assets, to ease server load.
Minifying is the act you refer to: stripping away unnecessary whitespace and comments to make the download smaller.
Combining will most likely show an even greater increase in page rendering speed; it is the act of merging all your JavaScript files into one, and all your CSS files into one (it can also be done for most images, but that task requires some more work). This is done to reduce the number of requests the browser has to make towards your server in order to display the page.
GZipping further compresses the data, in a zipped format, for browsers that indicate they'll accept such data. This further reduces size, but adds some extra workload at both ends. You're likely to see a net gain from it.
Depending on what environment you're working in, there are different components that'll help you with this, and they usually cover all of the above in one go.
The time your code takes to download from the server has a direct effect on how long the page takes to render. JavaScript is blocking, meaning that a JS block will prevent any further rendering until the block has executed entirely. As such, where you put your JavaScript files (i.e., at which point in the rendering process they'll be requested), how many requests it takes for them to be completely downloaded, and how much data there is to download will all have an impact on your page load as it appears to the user.
Once the browser has parsed your code, be it JavaScript, CSS, or HTML, it will have created internal representations of the parts it needs to remember, and the original formatting will no longer affect it.
I don't think whitespace in JS code slows down its execution. As far as I understand, a JavaScript interpreter strips all comments and redundant whitespace before processing. It can influence download time, and thus the loading time of a web page, however.
Take a look here for a bit of extra information.
It has little to no impact on actual processing speed, however...
Smaller size => less bandwidth => less costs => ??? => profit!
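If you want to check the parse-cost claim for yourself, here's a crude sketch (micro-benchmarks like this are noisy and engines cache compiled code, so treat the result as indicative only):

```js
// Compare parse+compile time of the same function body with and without
// a large padding comment, using new Function().
const body = 'let s = 0; for (let i = 0; i < 10; i++) s += i; return s;';
const padded = '/* ' + 'x'.repeat(10_000) + ' */\n' + body;

function timeParse(src, runs = 1000) {
  const t0 = performance.now();
  for (let i = 0; i < runs; i++) new Function(src + '\n//' + i); // vary source to defeat caching
  return (performance.now() - t0) / runs;
}

console.log('plain :', timeParse(body).toFixed(4), 'ms');
console.log('padded:', timeParse(padded).toFixed(4), 'ms');
```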
