Is there any resource for looking up the time complexity of the natively defined array and string methods in JavaScript?
I have to guess while using them to solve algorithm problems, but I want to be sure about the time complexity of those functions.
This question has been answered previously:
Time Complexity for Javascript Methods in V8
In short, it's not specified and the time complexity for common JS methods can differ between browsers.
Worse yet, some methods might not even exist, or may behave differently across browsers and browser versions!
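If you need a number for a particular engine, one pragmatic option is to time a method at several input sizes and watch how the runtime grows. A minimal sketch; the sizes and the choice of Array.prototype.includes are only for illustration, and the timings are engine- and run-dependent:

```js
// Time Array.prototype.includes on arrays of increasing size.
// Searching for a value that is absent forces a full scan (the worst case),
// so roughly linear growth here suggests O(n) in this engine.
function timeIncludes(size) {
  const arr = Array.from({ length: size }, (_, i) => i);
  const start = performance.now();
  arr.includes(-1);
  return performance.now() - start;
}

for (const size of [1e5, 1e6, 1e7]) {
  console.log(`n = ${size}: ${timeIncludes(size).toFixed(3)} ms`);
}
```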
In many languages, to find cosine, you use cos(x). But in JavaScript, you must use Math.cos(x). Why doesn't JavaScript spare us the five characters in Math., making it both easier to type and easier to read?
I have tried to Google this multiple times, and found no answers. Is there any practical reason for this that I have not yet found?
So far, there are three reasons I can think of:
The creators of JavaScript wanted to ensure that the math functions do not collide with functions users create (like a function called `cos()` that calculates, say, cosecant)
The creators of JavaScript thought that Math would make the code more readable
The creators of JavaScript perhaps didn't want any functions that have window as a parent (though alert and prompt make this unlikely)
To hold the math functions without polluting the global namespace, as the sketch below illustrates.
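A contrived example of the collision this avoids; the cos function here is hypothetical user code:

```js
// A user-defined cos() coexists with the built-in, because the built-in
// lives on the Math object rather than in the global scope.
function cos(x) {
  return 1 / Math.sin(x); // say, a (misnamed) cosecant helper
}

console.log(cos(1));      // calls the user's function
console.log(Math.cos(1)); // still reaches the built-in cosine
```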
I was noticing very high memory usage in Firefox: pages like Gmail and another heavy web app I was working on were using more than 100 MB! Looking through about:memory and seeing some large "unused-gc-things" entries, I found that this bug has been reported here and here and is a common problem without a good solution :(
Is there a good tool for detecting arenas of un-garbage-collectable objects, short of compiling a special build of Firefox or another browser? And are there better methodologies for writing web apps that don't use much memory? I would imagine that using an ArrayBuffer or asm.js may be more efficient, since it has one fixed memory pool, but that doesn't play well with the usual DOM-based interaction and the functional, "class"-based JavaScript programming of most web apps.
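To illustrate the fixed-pool idea, here is a minimal sketch of preallocating one typed array and reusing it, instead of allocating many short-lived objects for the collector to chase; the particle-style layout is just an invented example:

```js
// One flat buffer for 10,000 "particles" (x, y per particle) instead of
// 10,000 {x, y} objects. The buffer is allocated once, so per-frame
// updates produce no new garbage for the GC to track.
const COUNT = 10000;
const positions = new Float64Array(COUNT * 2);

function update(dx, dy) {
  for (let i = 0; i < COUNT; i++) {
    positions[i * 2] += dx;     // x
    positions[i * 2 + 1] += dy; // y
  }
}

update(1, 0.5);
console.log(positions[0], positions[1]); // 1 0.5
```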
I've tended to just loop over elements to find those with a specific class, but I'm curious whether using getElementsByClassName (in the browsers that support it) will improve performance at all.
How does it actually WORK?
I could obviously run a test to check the performance, but I'm really curious what's going on in there anyway.
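For reference, the test alluded to above is easy to set up; a rough sketch (the class name is arbitrary, and the timings are engine-dependent):

```js
// Manual walk over every element, checking the class attribute by hand.
function manualLookup(className) {
  const all = document.getElementsByTagName('*');
  const matches = [];
  for (let i = 0; i < all.length; i++) {
    if (all[i].className.split(/\s+/).indexOf(className) !== -1) {
      matches.push(all[i]);
    }
  }
  return matches;
}

console.time('manual');
manualLookup('item');
console.timeEnd('manual');

console.time('native');
// The returned HTMLCollection is live and may be computed lazily,
// so touch .length to force at least one evaluation.
void document.getElementsByClassName('item').length;
console.timeEnd('native');
```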
I have a complex codebase with tight coupling between functions, and I am not able to write unit tests easily.
Should source code know about the testing environment? Should it know it's being tested?
Indicating that it's being tested could easily be done via a global flag, but I fear that may cause a bigger mess in the long run.
In short, no.
Your code should be written in such a fashion that it is testing-agnostic: it shouldn't care whether it is being tested or not. Because of your tight coupling, I would suggest you do your testing as manually as you can, since that gives you the best litmus test of it working as expected.
Also, if your code is implemented well, it will be environment-agnostic too. Whatever environment you test in should be as close to real-world as possible.
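One common way to stay testing-agnostic is to inject whatever a test needs to control, rather than checking a flag inside the code; a minimal sketch with made-up names:

```js
// Instead of `if (window.IS_TEST) { ... }` inside the function,
// pass the dependency in and let the test supply a fake.
function createOrderService(clock) {
  return {
    createOrder(items) {
      return { items, createdAt: clock.now() };
    },
  };
}

// Production wiring:
const service = createOrderService({ now: () => Date.now() });

// Test wiring: the code under test never knows it is being tested.
const testService = createOrderService({ now: () => 0 });
console.log(testService.createOrder(['a']).createdAt); // 0
```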
Assuming there are no crazy optimizations (I'm looking at you, Chrome).
I'm talking about raw, nasty, ain't-broke-don't-fix-it, IE 6 JavaScript cost.
The lower limit being:
document.getElementById()
Versus:
document.getElementsByTagName('div') lookup.
getElementById can safely be assumed to be O(1) in a modern browser, since a hash table is the perfect data structure for the id => element mapping.
Without any optimizations, any simple query - be it a CSS selector, an id lookup, or a class or tag name lookup - is no worse than O(n), since one iteration over all elements is always enough.
However, in a good browser I'd expect a tagname => elements mapping as well, so getElementsByTagName would be O(1) too.
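As a rough model of why getElementById can be constant time, imagine the document maintaining an id => element map as nodes are inserted; a simplified, engine-agnostic sketch (real implementations are of course far more involved):

```js
// Toy model: the document keeps an id => element index, so a lookup is
// a single hash probe instead of a walk over the whole tree.
class ToyDocument {
  constructor() {
    this.idMap = new Map();
  }
  registerElement(el) {
    if (el.id) this.idMap.set(el.id, el); // maintained on every insertion
  }
  getElementById(id) {
    return this.idMap.get(id) || null; // O(1) average-case hash lookup
  }
}

const doc = new ToyDocument();
doc.registerElement({ id: 'header', tagName: 'DIV' });
console.log(doc.getElementById('header').tagName); // "DIV"
```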