What information can I obtain from the performance.memory object in Chrome?
What do these numbers mean? (Are they in kilobytes or characters?)
What can I learn from these numbers?
Example values of performance.memory
MemoryInfo {
jsHeapSizeLimit: 793000000,
usedJSHeapSize: 10000000,
totalJSHeapSize: 31200000
}
What information can I obtain from the performance.memory object in Chrome?
The property names should be pretty descriptive.
What do these numbers mean? (Are they in kilobytes or characters?)
The docs state:
The values are quantized as to not expose private information to
attackers.
See the WebKit Patch for how the quantized values are exposed. The
tests in particular help explain how it works.
What can I learn from these numbers?
You can identify problems with memory management. See http://www.html5rocks.com/en/tutorials/memory/effectivemanagement/ for how the performance.memory API was used in gmail.
The related API documentation does not say, but judging by the numbers you shared and what I see on my machine, my reading is that the values are in bytes.
A quick review of the code to which Bergi linked - regarding the values being quantized - seems to support this - e.g. float sizeOfNextBucket = 10000000.0; // First bucket size is roughly 10M..
The quantized MemoryInfo properties are mostly useful for monitoring vs. determining the precise impact of operations on memory. A comment in the aforementioned linked code explains this well I think:
// We quantize the sizes to make it more difficult for an attacker to see precise
// impact of operations on memory. The values are used for performance tuning,
// and hence don't need to be as refined when the value is large, so we threshold
// at a list of exponentially separated buckets.
Basically the values get less precise as they get bigger but are still sufficiently precise for monitoring memory usage.
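For what it's worth, a minimal monitoring sketch along those lines, assuming a Chromium-based browser where performance.memory exists (the helper names toMB and logHeap are just illustrative):
// Log the quantized heap figures in megabytes at a coarse interval.
function toMB(bytes) {
  return (bytes / (1024 * 1024)).toFixed(1) + ' MB';
}
function logHeap() {
  if (!performance.memory) return; // not exposed outside Chrome
  var m = performance.memory;
  console.log(
    'used: ' + toMB(m.usedJSHeapSize) +
    ', total: ' + toMB(m.totalJSHeapSize) +
    ', limit: ' + toMB(m.jsHeapSizeLimit)
  );
}
setInterval(logHeap, 5000); // good enough for trend-spotting, not for precise deltas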
Related
I am curious about how media such as images relate to memory in the browser and the JavaScript heap. Specifically, how many JS objects would be equivalent to a 2 MB image?
Disclaimer - I acknowledge the question is too vague for a precise answer. I will provide some arbitrary constraints, but ultimately I'm looking for a general way to think about this problem. Suppose I am building a chat app and am considering keeping all the messages ever sent in memory; how might I calculate a tipping point for a reasonable number of messages to keep in memory? How do I factor in that some messages are images?
You may assume
The browser is chrome.
The image is in JPG format, loaded using new Image() in JS, and then inserted into the DOM.
Each object has about three to five key-value pairs. Key-value pairs are strings or numbers ranging from small ints to strings of about 10 ascii chars.
Suppose the 2MB image is visible on screen.
My intuition tells me that the memory cost of a 2 MB image is way less than the cost of JSObjects * (2MB / size_of_obj_in_bytes), because:
The bytes of the image are not in JS land and are therefore somehow compressed and optimised away.
The objects will not exist in a silo; references will be created throughout user code, consuming more memory as they are passed through functions.
There will be lots of garbage collection overhead.
I don't know for certain that I'm right and I certainly don't know by how much or even how to begin measuring this. And since you can't optimize what you can't measure...
Disclaimer 2 - Premature optimization is the root of all evil etc etc. Just curious about digging deeper into the internals.
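One rough way to start measuring, tying back to the performance.memory discussion above: snapshot usedJSHeapSize before and after allocating a large batch of representative message objects. The values are quantized, so this only gives an order-of-magnitude figure, and makeMessage below is a hypothetical stand-in for the objects described in the constraints:
// Sketch: estimate average heap cost per message object by diffing the
// (quantized) usedJSHeapSize around a large allocation.
function makeMessage(i) {
  return { id: i, user: 'user' + (i % 50), text: 'hello world', ts: Date.now() };
}
function estimateHeapCost(count) {
  var before = performance.memory.usedJSHeapSize;
  var messages = [];
  for (var i = 0; i < count; i++) {
    messages.push(makeMessage(i));
  }
  var after = performance.memory.usedJSHeapSize;
  console.log('approx bytes per message:', (after - before) / count);
  return messages; // keep a reference so the batch isn't collected before the snapshot
}
estimateHeapCost(100000);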
I found many blockchain implementations on the web, but are they true blockchains that can scale?
Here we can see that the blockchain is started as an array
var blockchain = [getGenesisBlock()];
Here we can see the same approach in another implementation:
constructor() {
  this.chain = [this.createGenesis()];
}
This article also recommends it:
constructor(genesisNode) {
  this.chain = [this.createGenesisBlock()];
}
However, are any of these implementations ready to scale?
Technically, according to maerics,
the maximum length of an array according to the ECMA-262 5th Edition
specification is bound by an unsigned 32-bit integer due to the
ToUint32 abstract operation, so the longest possible array could have
2^32 - 1 = 4,294,967,295 ≈ 4.29 billion elements.
The size is not a problem. Ethereum has used 'only' 7 million blocks, Bitcoin 'only' 500k, so there is enough space for the future. The real problem I'm thinking of is: how long would it take to read the last element of the array, and would that be scalable?
In a blockchain, the 'Block' structure always needs to read the hash of the last block, so I assume that as the chain scales this takes longer and longer.
What would Bitcoin and/or Ethereum do if their Blockchain array of Blocks doesn't have any more space to store blocks? Would the Blockchain just end there?
The scalability problem comes from the cost of validating transactions and reaching consensus between the nodes. So it's not the cost of accessing a certain block that's problematic here.
Blockchains are not arrays. Conceptually, think of a blockchain more like a linked list.
There is no limit to the number of blocks (there is one for the number of coins though). The space to store those blocks is also not limited.
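To make both points concrete: reading the latest block is a constant-time index lookup no matter how long the chain is, and what actually makes it a chain is each block storing the previous block's hash. A minimal sketch, assuming Node.js and its built-in crypto module, with consensus, validation and persistence all left out:
const crypto = require('crypto');
function hashBlock(index, prevHash, timestamp, data) {
  return crypto.createHash('sha256')
    .update(index + prevHash + timestamp + JSON.stringify(data))
    .digest('hex');
}
class Chain {
  constructor() {
    const genesis = { index: 0, prevHash: '0', timestamp: Date.now(), data: 'genesis' };
    genesis.hash = hashBlock(genesis.index, genesis.prevHash, genesis.timestamp, genesis.data);
    this.blocks = [genesis];
  }
  latest() {
    // O(1): array index access does not get slower as the chain grows.
    return this.blocks[this.blocks.length - 1];
  }
  add(data) {
    const prev = this.latest();
    const block = { index: prev.index + 1, prevHash: prev.hash, timestamp: Date.now(), data };
    block.hash = hashBlock(block.index, block.prevHash, block.timestamp, block.data);
    this.blocks.push(block);
    return block;
  }
}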
To answer the question
Yes, all the implementations given in the question are incorrect or insufficient for a blockchain to work. For complete implementations, you can refer to Bitcoin's repository or Ethereum's.
I am writing a dictionary app. If a user types a Unicode character, I want to check which language(s) the character belongs to.
e.g.
字 - returns ['zh', 'ja', 'ko']
العربية - returns ['ar']
a - returns ['en', 'fr', 'de'] //and many more
й - returns ['ru', 'be', 'bg', 'uk']
I searched and found that it could be done with CLDR https://stackoverflow.com/a/6445024/41948
Or Google API Python - can I detect unicode string language code?
But in my case
Looking up a large charmap DB seems to cost a lot of storage and memory
Calling an API is too slow; besides, it requires a network connection
It doesn't need to be very accurate; about an 80% correct ratio is acceptable
Simple and fast is the main requirement
It's OK to just cover UCS-2 BMP characters.
Any tips?
I need to use this in Python and Javascript. Thanks!
Would it be sufficient to narrow the glyph down to language families? If so, you could create a set of ranges (language -> code range) based on the mapping of BMP like the one shown at http://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane or the Scripts section of the Unicode charts page - http://www.unicode.org/charts/
Reliably determining parent language for glyphs is definitely more complicated because of the number of shared symbols. If you only need 80% accuracy, you could potentially adjust your ranges for certain languages to intentionally include/leave out certain characters if it simplifies your ranges.
Edit: I re-read through the question you referenced CLDR from and the first answer regarding code -> language mapping. I think that's definitely out of the question but the reverse seems feasible if a bit computationally expensive. With clever data structuring, you could identify language families and then drill down to the actual language ranges from there, reducing traversals through irrelevant language -> range pairs.
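A sketch of that range-based lookup in JavaScript; the ranges and language lists below are illustrative and deliberately incomplete, and a real table would be generated from the Unicode Scripts data:
// [start code point, end code point, candidate languages] - illustrative only.
var RANGES = [
  [0x0041, 0x007A, ['en', 'fr', 'de']],        // Basic Latin letters (roughly)
  [0x0400, 0x04FF, ['ru', 'be', 'bg', 'uk']],  // Cyrillic
  [0x0600, 0x06FF, ['ar', 'fa', 'ur']],        // Arabic
  [0x4E00, 0x9FFF, ['zh', 'ja', 'ko']]         // CJK Unified Ideographs
];
function candidateLanguages(ch) {
  var cp = ch.codePointAt(0);
  for (var i = 0; i < RANGES.length; i++) {
    if (cp >= RANGES[i][0] && cp <= RANGES[i][1]) return RANGES[i][2];
  }
  return [];
}
candidateLanguages('字'); // ['zh', 'ja', 'ko']
candidateLanguages('й'); // ['ru', 'be', 'bg', 'uk']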
If the number of languages is relatively small (or the number you care about is fairly small), you could use a Bloom filter for each language. Bloom filters let you do very cheap membership tests (which can have false positives) without having to store all of the members (in this case the code points) in memory. Then you build your result set by checking the code point against each language's preconstructed filter. It's tuneable - if you get too many false positives, you can use a larger size filter, at the cost of memory.
There are Bloom filter implementations for Python and Javascript. (Hey - I've met the guy who did this one! http://www.jasondavies.com/bloomfilter/)
Bloom filters: http://en.m.wikipedia.org/wiki/Bloom_filter
Doing a bit more reading: if you only need the BMP (65,536 code points), you could just store a straight bit set for each language, or a 2D bit array of language × code point.
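For the bit-set idea, a sketch: one Uint8Array of 65,536 bits (8 KB) per language, with the range used to populate it again being illustrative:
// One 8 KB bit set per language covering the BMP.
function makeBitSet() {
  return new Uint8Array(65536 / 8);
}
function setCodePoint(bits, cp) {
  bits[cp >> 3] |= (1 << (cp & 7));
}
function hasCodePoint(bits, cp) {
  return (bits[cp >> 3] & (1 << (cp & 7))) !== 0;
}
// Populate e.g. a Russian set from known ranges (illustrative only).
var ru = makeBitSet();
for (var cp = 0x0400; cp <= 0x04FF; cp++) setCodePoint(ru, cp);
hasCodePoint(ru, 'й'.codePointAt(0)); // true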
How many languages do you want to consider?
I'm starting to write a chess program in JavaScript, and possibly some Node.js if I find the need to involve the server in the chess AI logic, which still seems plausible to me (in my possibly ignorant opinion). My question is simple enough: is the client-side FileSystem API for JavaScript a reasonable way to cache minimax results for future reference, or is the resulting data just way too much to store in any one place? My idea was that it could allow the AI to adapt to the user and "learn" by accessing previous decisions rather than re-determining them from scratch every time. Is this a reasonable plan, or am I underestimating the memory usage this would need? If your answer is that this is plausible, some tips on the most efficient method for storing the data in this manner would be nice too.
I have written chess engines before in C++, but not in JavaScript.
What you describe is usually solved by a transposition table. You calculate a hash key that identifies the position and store additional data with it.
See:
https://www.chessprogramming.org/Transposition_Table
https://www.chessprogramming.org/Zobrist_Hashing
Web storage provides per origin:
2.5 MB for Google Chrome
5 MB for Mozilla Firefox
10 MB for Internet Explorer
Each entry usually holds:
Zobrist hash key: 8 bytes
Best move: 2 bytes
Depth: 1 byte
Score: 2 bytes
Type of score (exact, upper bound, lower bound): 1 byte
= 14 bytes of data, usually padded to 16 bytes per entry
So, for example, Google Chrome can hold roughly 160k entries. For serious position analysis a chess engine usually uses over 1 GB of memory for the transposition table, but for a JavaScript engine I think 2.5 MB is a good compromise.
To make sure the JavaScript engine uses the storage optimally, I advise you to convert the data to some sort of binary representation. Then I would index localStorage by the Zobrist hash key and store all the other information associated with it.
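A sketch of one way to do that, mirroring the field list above: pack the per-position data into a few bytes with a DataView, store it as a string keyed by the Zobrist hash (so the 8-byte key lives in the storage key itself), and unpack it on lookup. The helper names and the move/score encodings are illustrative:
// Pack move, depth, score and score type into 6 bytes.
function packEntry(bestMove, depth, score, scoreType) {
  var view = new DataView(new ArrayBuffer(6));
  view.setUint16(0, bestMove);  // 2 bytes: encoded move
  view.setUint8(2, depth);      // 1 byte
  view.setInt16(3, score);      // 2 bytes
  view.setUint8(5, scoreType);  // 1 byte: 0 exact, 1 lower bound, 2 upper bound
  return String.fromCharCode.apply(null, new Uint8Array(view.buffer));
}
function ttStore(zobristHex, bestMove, depth, score, scoreType) {
  localStorage.setItem('tt:' + zobristHex, packEntry(bestMove, depth, score, scoreType));
}
function ttLoad(zobristHex) {
  var s = localStorage.getItem('tt:' + zobristHex);
  if (!s) return null;
  var bytes = new Uint8Array(s.length);
  for (var i = 0; i < s.length; i++) bytes[i] = s.charCodeAt(i);
  var view = new DataView(bytes.buffer);
  return {
    bestMove: view.getUint16(0),
    depth: view.getUint8(2),
    score: view.getInt16(3),
    scoreType: view.getUint8(5)
  };
}
Note that localStorage stores UTF-16 strings, so each packed byte occupies a full code unit; it is still far more compact than storing JSON objects.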
I found some interesting posts about memory usage and CPU usage here on Stack Overflow; however, none of them had a direct approach to the apparently simple question:
As a generic strategy in a JavaScript app, is it better in terms of performance to use memory (storing data) or CPU (recalculating data each time)?
I refer to JavaScript usage in common browser environments (FF, Chrome, IE > 8).
Does anybody have a more direct and documented answer to this?
--- EDIT ---
OK, I understand the question is very generic. I'll try to reduce the scope.
Reading your answer I realized that the real question is: "how do I understand the memory limit under which my JavaScript code still performs well?"
Environment: common browser environments (FF, Chrome, IE > 8)
The functions I use are not very complex math functions, but they can produce quite a large amount of data (300-400 KB), and I wanted to understand whether it was better to recalculate them every time or just store the results in variables.
Vaguely related - JS in browsers is extremely memory hungry when you start using large objects / arrays. If you think about binary data produced by canvas elements, or other rich media APIs, then clearly you do not want to be storing this data in traditional ways - disregarding performance issues, which are also important.
From the MDN article talking about JS Typed Arrays:
As web applications become more and more powerful, adding features such as audio and video manipulation, access to raw data using WebSockets, and so forth, it has become clear that there are times when it would be helpful for JavaScript code to be able to quickly and easily manipulate raw binary data.
Here's a JS Perf comparison of arrays, and another looking at canvas in particular, so you can get some direct examples of how they work. Hope this is useful.
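To give a concrete flavour of the typed-array point, compare a plain array of numbers with a Float64Array of the same length; the typed array is a single contiguous 8-bytes-per-element buffer, while the generic array carries extra bookkeeping and, depending on the engine, may store its elements less compactly:
var n = 1000000;
// Generic array: flexible, but with per-array bookkeeping and
// engine-dependent element representation.
var plain = [];
for (var i = 0; i < n; i++) {
  plain.push(Math.sin(i));
}
// Typed array: exactly n * 8 bytes of backing store, no boxing.
var typed = new Float64Array(n);
for (var j = 0; j < n; j++) {
  typed[j] = Math.sin(j);
}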
It's just another variation on the size/performance tradeoff. Storing values increases size, recalculating decreases performance.
In some cases, the calculation is complex, and the memory usage is small. This is particularly true for maths functions.
In other cases, the memory needed would be huge, and calculations are simple. This is particularly true when the output would be a large data structure, and you can calculate an element in the structure easily.
Other factors you will need to take into account include what resources are available. If you have very limited memory then you may have no choice, and if it is a background process then using lots of memory may not be desirable. If the calculation needs to be done very often then you are more likely to store the value than if it's done once a month...
There are a lot of factors in the tradeoff, and so there is no "generic" answer, only a set of guidelines you can follow as each case arises.
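One common middle ground between the two extremes is memoization with a bounded cache: frequently requested results stay in memory, rarely used ones get recomputed. A sketch (the cache size and eviction policy are arbitrary, and Map would need a fallback in older browsers such as IE < 11):
function memoize(fn, maxEntries) {
  var cache = new Map();
  return function (arg) {
    if (cache.has(arg)) return cache.get(arg);
    var result = fn(arg);
    if (cache.size >= maxEntries) {
      // Evict the oldest entry (Map preserves insertion order).
      cache.delete(cache.keys().next().value);
    }
    cache.set(arg, result);
    return result;
  };
}
var expensive = memoize(function (x) {
  // Stand-in for a calculation that produces a few hundred KB of data.
  var out = [];
  for (var i = 0; i < 50000; i++) out.push(x * i);
  return out;
}, 100);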