Browserslist Queries Result Difference - javascript

Here are the results of the 2 queries below, which I ran on the browserslist site in order to check the worldwide browser % coverage.
(a) >0.1%, last 2 versions, not dead, supports ES-6 module
(b) >0.1%, last 2 versions, not dead, supports ES-6 module, last 2 node major versions, Firefox ESR, since 2000-01-15
THE RESULTS:-
(a) 96%
(b) 97.7%
Please refer to the screenshots for query results.
The question:- How does the second query cover more browsers than the first one, given that ES6 and the Node major versions were released well after the year 2000?
I initially expected the queries to work on a true/false basis.
That is, I expected the result to require all of the comma-separated conditions to be true; but if that were the case, the result % for the 2nd query should be far lower, not higher.
Please explain how these results come about.
Thanks in advance.
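For anyone who wants to reproduce the numbers outside the website: the browserslist npm package evaluates the same query strings, so each query's coverage can be checked locally. A minimal sketch, assuming the feature name es6-module and whatever caniuse-lite data happens to be installed, so the exact percentages will differ from the site's:
// npm install browserslist
const browserslist = require('browserslist');

const queryA = '>0.1%, last 2 versions, not dead, supports es6-module';
const queryB = queryA + ', last 2 node major versions, Firefox ESR, since 2000-01-15';

// browserslist() resolves a query to a list of browser versions;
// browserslist.coverage() reports the share of global users they cover.
console.log('(a)', browserslist.coverage(browserslist(queryA)).toFixed(1) + '%');
console.log('(b)', browserslist.coverage(browserslist(queryB)).toFixed(1) + '%');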

Related

SQL on top of apache arrow in-browser?

I have data that is stored on a client's browser in-memory. For example, let's say the dataset is as follows:
"name" (string), "age" (int32), "isAdult" (bool)
"Tom" , 29 1
"Tom" , 14 0
"Dina" , 20 1
I would like to run non-trivial SQL statements on this data in javascript, such as:
SELECT name, GROUP_CONCAT(age ORDER BY age) ages
FROM arrowData a1 JOIN arrowData a2 USING (name)
WHERE a1.isAdult != a2.isAdult
And I would get:
"name" (string), "ages" (string)
"Tom" "14,29"
The data that I have in javascript is stored as Apache Arrow (also used in connection with Perspective), and I'd like to execute SQL on that Apache Arrow data as well. As a last resort, I think it would be possible to use SQLite in wasm, but I'm hoping there might be a simpler way to query the Arrow data directly, without having to move it all into a SQLite store just to execute a query on it.
Are there any ways to do this?
What you are looking for is good stuff. :) Sadly, thanks to some trends around ~2010, as far as I know there is no actively maintained and supported API for this. But...
If you want full ANSI SQL on the client side, in memory, and you are willing to populate the database, you could run the mentioned SQLite. This may be the only option that fulfils all of your requirements (unless you can drop some of them).
If you can allow yourself the luxury of copying the data, you could check out the AlaSQL project. It supports JOINs and some of the ANSI SQL features, but it is not complete and it contains known disruptive bugs:
Please be aware that AlaSQL has bugs. Beside having some bugs, there are a number of limitations:
AlaSQL has a (long) list of keywords that must be escaped if used for column names. When selecting a field named key please write SELECT `key` FROM ... instead. This is also the case for words like value, read, count, by, top, path, deleted, work and offset. Please consult the full list of keywords.
It is OK to SELECT 1000000 records or to JOIN two tables with 10000 records in each (you can use streaming functions to work with longer datasources - see test/test143.js), but be aware that the workload is multiplied, so SELECTing from more than 8 tables with just 100 rows in each will show bad performance. This is one of our top priorities to make better.
Limited functionality for transactions (supports only localStorage) - sorry, transactions are limited, because AlaSQL switched to a more complex approach for handling PRIMARY KEYs / FOREIGN KEYs. Transactions will be fully turned on again in a future version.
A (FULL) OUTER JOIN and RIGHT JOIN of more than 2 tables will not produce expected results. INNER JOIN and LEFT JOIN are OK.
Please use aliases when you want fields with the same name from different tables (SELECT a.id AS a_id, b.id AS b_id FROM ?).
At the moment AlaSQL does not work with JSZip 3.0.0 - please use version 2.x.
JOINing a sub-SELECT does not work. Please use a WITH structure (Example here) or fetch the sub-SELECT to a variable and pass it as an argument (Example here).
AlaSQL uses the FileSaver.js library for saving files locally from the browser. Please be aware that it does not save files in Safari 8.0.
There are probably many others. Please help us fix them by submitting an issue. Thank you!
We planned to use it in one project, but while introducing it to our stack there were more problems than solutions (for us), so we backed out of it. So I have no production experience with this piece of software...
Back in the day I hoped that Google Gears would support something like the desired functionality, but it was partly replaced by HTML5 client-side storage and sadly the project was discontinued.
The HTML5 WebSQL Database would have been perfect for your use case, but it is sadly deprecated, though most (?) browsers still support it in 2019. You can check some examples here. If you can allow yourself to build on a deprecated API this could be the solution, but I do not really recommend it, as there is no guarantee it will keep working...
When our project ran into the same problems we ended up using localStorage and programming every "SELECT" manually, which of course was not at all ANSI SQL-like...
If we roll back to the original problem of "[SQL] query the Arrow data directly", I have no adapter in mind that would let you use it as SQL... These kinds of operations still tend to live on the server side, and together with the wasm SQLite I think those are the options.
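If the wasm SQLite route is acceptable after all, sql.js is one way to do it. A rough sketch, assuming the rows are copied out of Arrow into an ordinary table first (the values mirror the question's example):
// npm install sql.js -- or load it from a CDN, which exposes initSqlJs() globally.
const initSqlJs = require('sql.js');

initSqlJs().then(function (SQL) {
  const db = new SQL.Database();
  db.run('CREATE TABLE arrowData (name TEXT, age INTEGER, isAdult INTEGER)');

  // The Arrow rows have to be copied into SQLite before they can be queried.
  const insert = db.prepare('INSERT INTO arrowData VALUES (?, ?, ?)');
  [['Tom', 29, 1], ['Tom', 14, 0], ['Dina', 20, 1]].forEach(row => insert.run(row));
  insert.free();

  // Full SQLite SQL is available, including GROUP_CONCAT.
  const res = db.exec(
    'SELECT a1.name, GROUP_CONCAT(a1.age) AS ages ' +
    'FROM arrowData a1 JOIN arrowData a2 ON a1.name = a2.name ' +
    'WHERE a1.isAdult != a2.isAdult GROUP BY a1.name'
  );
  console.log(res[0].values); // e.g. [["Tom", "29,14"]] -- the order inside ages may vary
});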
You can use Alasql to do some of what you want, but it does not support grouping.
var data = [
  { name: 'Tom',  age: 29, isAdult: 1 },
  { name: 'Tom',  age: 14, isAdult: 0 },
  { name: 'Dina', age: 20, isAdult: 1 }
];
var res = alasql('SELECT name, age from ? a1 JOIN ? a2 WHERE a1.isAdult != a2.isAdult AND a1.name = a2.name', [data, data]);
document.getElementById('result').textContent = JSON.stringify(res);
<script src="https://cdn.jsdelivr.net/alasql/0.2/alasql.min.js"></script>
<span id="result"></span>
There is DuckDB-Wasm now, which can run SQL on Arrow tables.
https://www.npmjs.com/package/@duckdb/duckdb-wasm
https://duckdb.org/2021/10/29/duckdb-wasm.html
DuckDB-Wasm is an in-process analytical SQL database for the browser. It is powered by WebAssembly, speaks Arrow fluently, reads Parquet, CSV and JSON files backed by Filesystem APIs or HTTP requests and has been tested with Chrome, Firefox, Safari and Node.js.
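A rough sketch of what that could look like for the data in the question, based on the duckdb-wasm README; the bootstrap details vary by version and bundler, and string_agg is used here as DuckDB's counterpart to GROUP_CONCAT:
// npm install @duckdb/duckdb-wasm apache-arrow
import * as duckdb from '@duckdb/duckdb-wasm';
import { tableFromJSON } from 'apache-arrow';

// Pick a bundle served from jsDelivr and start the database in a web worker.
const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
const worker = new Worker(bundle.mainWorker); // cross-origin setups may need a Blob URL instead
const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
const conn = await db.connect();

// Register an existing Arrow table directly -- no copy into a separate store.
const arrowTable = tableFromJSON([
  { name: 'Tom', age: 29, isAdult: 1 },
  { name: 'Tom', age: 14, isAdult: 0 },
  { name: 'Dina', age: 20, isAdult: 1 },
]);
await conn.insertArrowTable(arrowTable, { name: 'arrowData' });

// Query results come back as an Arrow table as well.
const result = await conn.query(`
  SELECT a1.name, string_agg(CAST(a1.age AS VARCHAR), ',' ORDER BY a1.age) AS ages
  FROM arrowData a1 JOIN arrowData a2 ON a1.name = a2.name
  WHERE a1.isAdult != a2.isAdult
  GROUP BY a1.name
`);
console.log(result.toArray()); // one row: Tom -> "14,29"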

Differences in currency formatting using Number.toLocaleString()

I've been looking into locale-aware number formatting for JavaScript and found that Number.toLocaleString, and by extension Intl.NumberFormat, appear to be good solutions for this problem.
In particular I'd prefer building on top of already implemented abstractions for locale specific formatting rather than reinventing the wheel and coming up with one more solution to an already solved problem.
So I've called Number.toLocaleString on some different javascript environments and found that currency formatting seems to have changed:
(5).toLocaleString('fr-CH', {currency: 'CHF', style: 'currency'});
// Node v10.15.1: 'CHF 5.00'
// Node v12.1.0: '5.00 CHF'
// Firefox 66.0.2: '5.00 CHF'
// Chrome 73.0.…: '5.00 CHF'
// Safari 12.0.3: '5.00 CHF'
// IE 11: '5.00 fr.'
IE 11 is different than the rest, but it doesn't surprise me given its age.
What surprises me is that the formatting for CHF in fr-CH seems to have changed between node versions 10 and 12.
For comparison I had a look at the glibc LC_MONETARY settings for fr_CH and found that it seems to have placed the CHF before the amount since at least about 1997. This makes it particularly confusing that the position of the CHF seems to be different in most current browsers.
I would like to know and understand:
Why are the positions of the CHF different in these cases?
I know that this can depend on the available system locales or the browser. But the change between node versions seems to indicate a more recent and voluntary change to me.
Is there a correct way to place the CHF or are both choices acceptable for CH, or more specifically fr-CH?
For this it would be beautiful to have an actual source like a paper or research database rather than hearsay or anecdotes.
Update (2019-05-16):
In reaction to my partial answer I'd like to specify:
The formatting decision for fr_CH is given as currencyFormat{"#,##0.00 ¤ ;-#,##0.00 ¤"} in commit 3bfe134 but I'm still missing a source for the decision and would love to know about it.
So I've checked out the v8 source to see if I can find where the behavior of Number.toLocaleString is defined.
In builtins-number.cc I found the BUILTIN(NumberPrototypeToLocaleString){…} which uses Intl::NumberToLocaleString(…).
This led me to intl-objects.cc which implements the Intl::NumberToLocaleString using an icu::number::LocalizedNumberFormatter.
Since v8 uses icu I checked out the source to continue my search.
My first tries to find the source of the number formatting led me to look at decimfmt and numfmt, but I somehow kept losing the trace.
Then it dawned on me that it would likely make sense to keep the format definitions somewhat separate from the rest of the code. By looking around the website and the source more I finally found icu4c/source/data/locales/de_CH.txt and icu4c/source/data/locales/fr_CH.txt.
de_CH.txt has currencyFormat{"¤ #,##0.00;¤-#,##0.00"}.
fr_CH.txt has currencyFormat{"#,##0.00 ¤ ;-#,##0.00 ¤"}.
Now using git I found the commit that first introduced the currencyFormat for fr_CH (3bfe134) 19 months ago.
This is plausible to be between node v10 and v12.
I can also see that it would make sense to fall back on de_CH before the currencyFormat was added to fr_CH, and can therefore see why the format changed the way it did.
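As a side note, the pattern a given runtime actually ships can be inspected with Intl.NumberFormat's formatToParts; a small sketch, whose output depends entirely on the ICU/CLDR data bundled with the engine:
// Where does the currency symbol land for fr-CH in this engine?
const parts = new Intl.NumberFormat('fr-CH', { style: 'currency', currency: 'CHF' })
  .formatToParts(5);

console.log(parts.map(p => p.type + ':' + p.value).join(' | '));
// With CLDR 32+ data:                 integer:5 | decimal:. | fraction:00 | literal:  | currency:CHF
// With older data (de_CH fallback):   currency:CHF | literal:  | integer:5 | decimal:. | fraction:00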
The commit mentions CLDR 32 alpha, and I found the CLDR charts version 32.
However I'm currently not able to figure out where the chart is located that defines the currencyFormat for fr_CH.
I feel that by finding the change to the fr_CH currencyFormat I have found and understood what leads to the change of behavior between different node versions.
As of now I don't understand why glibc and icu have differences here, but that is something I can ask around in the context of the specific projects for.
I'm under the impression that I'm still missing the specific decision or data-point which led to the currencyFormat implementation - if I find it I shall add it here and be satisfied.
Update 2019-05-18:
The CLDR 32 data can be found in the download section under cldr.unicode.org.
From there I could download the cldr-common-32.zip which included the file common/main/fr_CH.xml in which the currency format is defined like this:
<currencyFormats numberSystem="latn">
  <currencyFormatLength>
    <currencyFormat type="standard">
      <pattern draft="contributed">#,##0.00 ¤ ;-#,##0.00 ¤</pattern>
    </currencyFormat>
  </currencyFormatLength>
</currencyFormats>
Via cldr.unicode.org I also found the survey-tool which is used to decide on these matters and to document the outcomes of such decisions.
In the Number Formatting section for fr_CH I could then see the decision's source.
Update 2019-05-21:
So out of curiosity I've asked about this on the libc-locales list as well as on the closest ticket I could find on the unicode-org ticket system.
This prompted me to investigate further, and when researching this with a friend we stumbled upon the cldr repo on GitHub, which is focused on the CLDR data itself rather than merely containing CLDR-related data the way icu does.
We found that commit c5e7787 introduced the first change that led to the CHF being placed after the number rather than before it, and through that commit we became better aware of two tickets.
These tickets are CLDR-9370 and CLDR-10755, the second of which is a follow up that clears up some formatting.
While on the surface CLDR-9370 seems to mostly discuss the decimal separator, the currency symbol placement is discussed as well.
One of the sources given is a typography guide (pdf) published by the CERN which gives detailed instructions on the ways to write numbers.
For CHF the guide notes the following (quoted in French in the original), which Google Translate renders as:
Writing sums of money
The number is written in three-digit groups separated by a non-breaking space (no point or apostrophe as a separator), and is followed (and never preceded) by the indication of the currency, whether written out in full or abbreviated. For abbreviated currency names, the ISO code is used.

IE Performance for large number of items

I'm comparing performance between a few frameworks (namely ReactJS and AngularJS) versus a "vanilla HTML + JS". During this I came across absolutely abysmal performance with Internet Explorer (I've tested in IE9 and IE11 and they both exhibit performance issues but differently).
The original code is an HTML file but I've moved it to JSFiddle for the sake of sharing it here. If you'd like, I can post it as a GitHub Gist, instead.
Anyway, the goal is to render a table with 5,000 items in it (representing files and folders). On my test machine, IE11 takes around 30 seconds for the initial rendering while Chrome/Safari/Firefox are in the 1.5–3 second range. If I look at just how long it takes to generate the HTML string (so not even DOM manipulation), that alone is about 15 seconds on IE11, plus another 15 for the actual rendering.
Any thoughts as to what I'm doing wrong? Make sure you change the sampleSize from 100 to 5,000 once you want to see the actual results:
var sampleSize = 100;
to
var sampleSize = 5000;
Note: here's what I've already done to improve performance:
Changed the per-row string concatenation to pushing onto an array with a single .join('') at the end, since repeated string concatenation is a known performance issue in IE
Only a single DOM update with $(tblBody).html(nodes.join('')); rather than appending one row at a time
The above two enhancements (sketched below) brought the initial rendering from 36s down to 30s.
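A condensed sketch of those two changes, using the names mentioned above (tblBody, nodes, sampleSize); the row markup and the items array are made up here:
var nodes = [];
for (var i = 0; i < sampleSize; i++) {
  // Build each row as a string and push it -- no repeated "+=" concatenation.
  nodes.push('<tr><td>' + items[i].name + '</td><td>' + items[i].size + '</td></tr>');
}
// Touch the DOM exactly once, with the fully assembled markup.
$(tblBody).html(nodes.join(''));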
Note 2: the code isn't that f*ed up, since it's still faster than either my ReactJS- or AngularJS-based solution. So the main question is: what in the world is IE doing?

Comparing large strings in JavaScript with a hash

I have a form with a textarea that can contain large amounts of content (say, articles for a blog) edited using one of a number of third party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it's changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag, or an "onchange" event which I can use to see if the content has changed since the last save.
So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent), as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents.
Would it be more efficient to store some sort of hash in the lastSaveContent variable, instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?
In short, you're better off just storing and comparing the two strings.
Computing a proper hash is not cheap. For example, check out the pseudo code or an actual JavaScript implementation for computing the MD5 hash of a string. Furthermore, all proper hash implementations will require enumerating the characters of the string anyway.
Furthermore, in the context of modern computing, a string has to be really, really long before comparing it against another string is slow. What you're doing here is effectively a micro-optimization. Memory won't be an issue, nor will the CPU cycles to compare the two strings.
As with all cases of optimizing: check that this is actually a problem before you solve it. In a quick test I did, computing and comparing 2 MD5 sums took 382ms. Comparing the two strings directly took 0ms. This was using a string that was 10000 words long. See http://jsfiddle.net/DjM8S.
If you really see this as an issue, I would also strongly consider a poor man's comparison: just compare the lengths of the 2 strings to see whether they have changed, rather than doing an actual string comparison.
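A minimal sketch of the plain-string approach, assuming jQuery is available (the helper name and endpoint here are made up; the editor's own API would supply the content):
var lastSaveContent = '';

setInterval(function autosave() {
  var current = getEditorContent(); // however the rich text editor exposes its HTML

  // Direct string comparison is cheap; it bails out immediately when the lengths differ.
  if (current === lastSaveContent) {
    return; // nothing has changed since the last save
  }

  $.post('/autosave', { body: current }).done(function () {
    lastSaveContent = current; // only mark as saved once the request succeeds
  });
}, 30000);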
An MD5 hash is often used to verify the integrity of a file or document; it should work for your purposes. Here's a good article on generating an MD5 hash in Javascript.
I made a JSperf rev that might be useful here for performance measuring. Please add different revisions and different types of checks to the ones I made!
http://jsperf.com/long-string-comparison/2
I found two major results:
When the strings are identical, performance is murdered: from ~9,000,000 ops/sec down to ~250 ops/sec (Chrome).
The 64-bit version of IE9 is much slower on my PC; results from the same tests:
+------------+------------+
| IE9 64bit | IE9 32bit |
+------------+------------+
| 4,270,414 | 8,667,472 |
| 2,270,234 | 8,682,461 |
+------------+------------+
Sadly, jsperf logged both results as simply "IE 9".
Even a cursory look at JS MD5 performance tells me that it is very, very slow (at least for large strings, see http://jsperf.com/md5-shootout/18 - it peaks at 70 ops/sec). I would want to go as far as trying to AJAX the hash calculation or the comparison to the backend, but I don't have time to test, sorry!

Javascript Browser Quirks - array.Length

Code:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unusual Array Lengths!</title>
<script type="text/javascript">
var arrayList = new Array();
arrayList = [1, 2, 3, 4, 5, ];
alert(arrayList.length);
</script>
</head>
<body>
</body>
</html>
Notice the extra comma in the array declaration.
The code above gives different outputs for various browsers:
Safari: 5
Firefox: 5
IE: 6
The extra comma in the array is being ignored by Safari and FF while IE treats it as another object in the array.
On some search, I have found mixed opinions about which answer is correct. Most people say that IE is correct but then Safari is also doing the same thing as Firefox. I haven't tested this on other browsers like Opera but I assume that there are discrepancies.
My questions:
i. Which one of these is correct?
Edit: By general consensus (and ECMAScript guidelines) we assume that IE is again at fault.
ii. Are there any other such Javascript browser quirks that I should be wary of?
Edit: Yes, there are loads of Javascript quirks. www.quirksmode.org is a good resource for the same.
iii. How do I avoid errors such as these?
Edit: Use JSLint to validate your javascript. Or, use some external libraries. Or, sanitize your code.
Thanks to DamienB, JasonBunting, John and Konrad Rudolph for their inputs.
It seems to me that the Firefox behavior is correct. What is the 6th value in IE (sorry, I don't have it handy to test)? Since there is no actual value provided, I imagine it's filling it in with something like null, which certainly doesn't seem to be what you intended to happen when you created the array.
At the end of the day though, it doesn't really matter which is "correct" since the reality is that either you are targeting only one browser, in which case you can ignore what the others do, or you are targeting multiple browsers in which case your code needs to work on all of them. In this case the obvious solution is to never include the dangling comma in an array initializer.
If you have problems avoiding it (e.g. for some reason you have developed a (bad, imho) habit of including it) and other problems like this, then something like JSLint might help.
I was intrigued so I looked it up in the definition of ECMAScript 262 ed. 3 which is the basis of JavaScript 1.8. The relevant definition is found in section 11.1.4 and unfortunately is not very clear. The section explicitly states that elisions (= omissions) at the beginning or in the middle don't define an element but do contribute to the overall length.
There are no explicit statements about redundant commas at the end of the initializer, but by omission I conclude that the above statement implies they do not contribute to the overall length, and therefore that MSIE is wrong.
The relevant paragraph reads as follows:
Array elements may be elided at the beginning, middle or end of the element list. Whenever a comma in the element list is not preceded by an Assignment Expression (i.e., a comma at the beginning or after another comma), the missing array element contributes to the length of the Array and increases the index of subsequent elements. Elided array elements are not defined.
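A few concrete cases of that rule, as modern engines implement it:
[1, 2, 3, 4, 5, ].length; // 5 -- a single trailing comma adds nothing
[1, , 3].length;          // 3 -- an elision in the middle still counts
[1, 2, , ].length;        // 3 -- the comma after an elision contributes, the trailing one doesn't
1 in [1, , 3];            // false -- the elided element is never actually defined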
"3" for those cases, I usually put in my scripts
if (typeof arrayList[arrayList.length - 1] === 'undefined') arrayList.pop();
You could make a utility function out of that.
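Something like this, for instance (a sketch; checking for undefined avoids dropping a legitimate falsy last element such as 0 or false):
function trimTrailingElision(arr) {
  if (arr.length && typeof arr[arr.length - 1] === 'undefined') {
    arr.pop();
  }
  return arr;
}

trimTrailingElision([1, 2, 3, 4, 5, ]); // length 5, even in old IE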
First off, Konrad is right to quote the spec, as that is what defines the language and answers your first question.
To answer your other questions:
Are there any other such Javascript browser quirks that I should be wary of?
Oh, too many to list here! Try the QuirksMode website for a good place to find nearly everything known.
How do I avoid errors such as these?
The best way is to use a library that abstracts these problems away for you so that you can get down to worrying about the logic of the application. Although a bit esoteric, I prefer and recommend MochiKit.
Which one of these is correct?
Opera also returns 5. That means IE is outnumbered and majority rules as far as what you should expect.
ECMA-262 edition 5.1, section 11.1.4 (Array Initialiser), states that a comma at the end of the array does not contribute to the length of the array: "if an element is elided at the end of the array it does not contribute to the length of the Array".
That means [ "x", ] is perfectly legal JavaScript and should return an array of length 1.
@John: The value of arrayList[5] comes out to be 'undefined'.
Yes, there should never be a dangling comma in declarations. Actually, I was just going through someone else's long, long javascript code which somehow was not working correctly in different browsers. It turned out that a dangling comma that had accidentally been typed in was the culprit! :)
