I'm working on a MongoDB database and so far have stored some information as Numbers instead of Strings because I assumed that would be more efficient. For example, I store countries following ISO 3166-1 numeric and sex following ISO/IEC 5218. But so far I have not found a similar standard for languages; ISO 639 does not appear to have a matching list of numeric codes.
What would be the right way to do this? Should I just use the String codes?
Thanks!
If you're a fan of the numbers, you can use country calling codes, although they "only" represent the ITU members (193 countries according to Wikipedia). But hey, they have Somalia and Palestine, so that's a good hint about how global this is.
However, storing everything in an encoded form (numbers here) implies an on-the-fly decoding step whenever any piece of data is requested (with translation tables held in RAM instead of on the DB's disk). That probably happens on the server, whose CPU is precious, or you may have pushed the problem onto the client, overworking the precious, time-critical server-client link in the process.
So, back in the 90s, when a 40 MB HDD was expensive, that might have been interesting. Today, the cost of storing data versus the cost of processing it is no longer on the same side of 1 as it used to be... not counting the time it takes you to think through and implement the transformations. All that said, IMHO, this level of efficiency actually kills efficiency. ;)
EDIT: Oops, just realized I misthought (does that verb even exist?) the country/language issue. Countries you have sorted out already, my bad. I know of no numbered list of languages. However, the second part of the post might still be relevant...
If you are after raw performance and/or want to achieve really small data sizes, I would suggest you use either the three-letter codes (ISO 639-2, higher granularity) or the two-letter codes (ISO 639-1, lower granularity).
To my knowledge, there's no helper for this standard built into any programming language I know of, so you'd need to build your own translator (code <-> full name), which, however, should be trivial.
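For illustration, here is a minimal sketch of such a translator with a hand-made lookup table (only a few example entries are shown, not a complete ISO 639-1 table):

// Sketch of a code <-> full-name translator for ISO 639-1 codes.
// Only a handful of example entries are listed; a real table would cover all codes.
var LANGUAGES = {
  en: 'English',
  de: 'German',
  ja: 'Japanese',
  zh: 'Chinese'
};

function languageName(code) {
  return LANGUAGES[code] || null;              // 'de' -> 'German'
}

function languageCode(name) {
  for (var code in LANGUAGES) {
    if (LANGUAGES[code] === name) return code; // 'German' -> 'de'
  }
  return null;
}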
And as others already mentioned, you have to assess the cost involved with this (e.g. no longer being able to simply look at the data and understand it right away) for yourself. I personally do recommend keeping data sizes small, since BSON parsing and string operations are horribly expensive compared to dealing with numbers (or shorter strings, for that matter). When dealing with small data sets, this won't make a noticeable difference. If, however, you need to churn through millions of documents or more, optimizations like this can become mission-critical.
I am considering using a JS MD5 implementation.
But I noticed that there are only a few tests. Is there a good way of verifying that the implementation is correct?
I know I can try it with a few different values and see if it works, but that only means it is correct for some inputs. I would like to see if it is correct for all inputs.
The corresponding RFC has a good description of the algorithm, an example implementation in C, and a handful of test values at the end. All three together let you make a good guess about the quality of the examined implementation and that's all you can get: a good guess.
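For example, here is a small harness that runs the test suite from the end of the RFC against a hypothetical md5() function (assumed here to take a string and return the digest as a lowercase hex string):

// Test suite from RFC 1321; md5() is an assumed function that takes a string
// and returns the digest as a lowercase hex string.
var vectors = {
  '': 'd41d8cd98f00b204e9800998ecf8427e',
  'a': '0cc175b9c0f1b6a831c399e269772661',
  'abc': '900150983cd24fb0d6963f7d28e17f72',
  'message digest': 'f96b697d7cb7938d525a2f31aaf161d0',
  'abcdefghijklmnopqrstuvwxyz': 'c3fcd3d76192e4007dfb496cca67e13b',
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789':
    'd174ab98d277d9f5a5611c2c9f419d9f',
  '12345678901234567890123456789012345678901234567890123456789012345678901234567890':
    '57edf4a22be3c955ac49da2e2107b67a'
};

for (var input in vectors) {
  if (md5(input) !== vectors[input]) {
    console.log('MISMATCH for input "' + input + '"');
  }
}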
Testing an application with an infinite, or at least very large, input set as a black box is hard, even impossible in most cases. So you have to check whether the code implements the algorithm correctly. The algorithm is described in RFC 1321 (linked to above), and that description is sufficient for an implementation. The algorithm itself is well known (in the scientific sense, i.e. many papers have been written about it and many flaws have been found) and simple enough to skip the formal part; just inspect the implementation.
Problems to expect with MD5 in JavaScript: input of one or more zero bytes (you can check the one- and two-byte-long inputs thoroughly), endianness (should be no problem but easy to check), and the problem of the unsigned integers used for bit manipulation in JavaScript (">>" vs. ">>>", but also easy to check for). I would also test with a handful of inputs with all bits set.
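For instance, the difference between the two right-shift operators is easy to demonstrate:

// JavaScript bitwise operators work on signed 32-bit integers, so the two
// right shifts differ as soon as the sign bit is set:
console.log(-1 >> 1);    // -1          (sign-propagating shift)
console.log(-1 >>> 1);   // 2147483647  (zero-fill shift)
console.log(-1 >>> 0);   // 4294967295  (reinterpret as unsigned 32-bit)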
The algorithm needs padding, too; you can check it with inputs of every length up to a couple of block sizes (one block is 64 bytes), which exercises both the normal padding case and the case where the padding spills into an extra block.
Oh, and for all of you dismissing the MD5 hash: it still has its uses as a fast non-cryptographic hash with a low collision rate and good mixing (some call the effect of the mixing "avalanche": one bit changed in the input changes many bits in the output). I still use it for larger, non-cryptographic Bloom filters. Yes, one should use a special hash fitted to the expected input, but constructing such a hash function is a pain in the part of the body Nature gave us to sit on.
Due to the reasons outlined in this question, I am building my own client-side search engine rather than using the ydn-full-text library, which is based on fullproof. What it boils down to is that fullproof spawns "too freaking many records", on the order of 300,000 records, whilst (after stemming) there are only about 7,700 unique words. So my 'theory' is that fullproof is based on traditional assumptions which only apply on the server side:
Huge indices are fine
Processor power is expensive
(and the assumption of dealing with longer records, which is just not applicable to my case, as my records are on average only 24 words long¹)
Whereas on the client side:
Huge indices take ages to populate
Processing power is still limited, but relatively cheaper than on the server side
Based on these assumptions I started off with an elementary inverted index (giving just 7,700 records, as IndexedDB is a document/NoSQL database). This inverted index has been stemmed using the Lancaster stemmer (the most aggressive of the two or three popular ones), and during a search I would retrieve the index for each of the words, assign a score based on the overlap of the different indices and on the similarity of the typed word vs. the original (Jaro-Winkler distance).
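For concreteness, a lookup against that index might look roughly like this (a sketch; the store name and record shape are invented):

// Sketch: one record per stemmed word, keyed by the stem itself, e.g.
// { stem: 'sear', snippetIds: [12, 87, 90, 403] }
function lookupStem(db, stem, callback) {
  var store = db.transaction('invertedIndex', 'readonly')
                .objectStore('invertedIndex');
  store.get(stem).onsuccess = function (e) {
    var record = e.target.result;               // undefined if the stem is absent
    callback(record ? record.snippetIds : []);
  };
}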
The problem with this approach:
Combination of "popular_word + popular_word" is extremely expensive
So, finally getting to my question: how can I alleviate the above problem with minimal growth of the index? I do understand that my approach will be CPU intensive, but as a traditional full-text search index seems unusably big, this seems to be the only reasonable road to go down. (Pointing me to good resources or works is also appreciated.)
¹ This is a more or less artificial splitting of unstructured texts into small segments; however, this artificial splitting is standardized in the relevant field, so it has been used here as well. I have not studied the effect on the index size of keeping these 'snippets' together and throwing huge chunks of text at fullproof. I assume that this would not make a huge difference, but if I am mistaken then please do point this out.
This is a great question, thanks for bringing some quality to the IndexedDB tag.
While this answer isn't quite production ready, I wanted to let you know that if you launch Chrome with --enable-experimental-web-platform-features, then there should be a couple of features available that might help you achieve what you're looking to do.
IDBObjectStore.openKeyCursor() - value-free cursors, in case you can get away with the stem only
IDBCursor.continuePrimaryKey(key, primaryKey) - allows you to skip over items with the same key
I was informed of these by an IDB developer on the Chrome team, and while I've yet to experiment with them myself, this seems like the perfect use case.
My thought is that if you approach this problem with two different indexes on the same column, you might be able to get that join-like behavior you're looking for without bloating your stores with gratuitous indexes.
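Something along these lines might work for intersecting the postings of two stems (an untested sketch: the 'snippets' store, its multiEntry 'stems' index, and the record layout are invented here, and continuePrimaryKey() needs the experimental flag mentioned above):

// Assume a 'snippets' object store keyed by snippet id, with a multiEntry
// index 'stems' over each snippet's array of stemmed words. Walk two key
// cursors over that index in parallel and skip ahead with continuePrimaryKey()
// to find the snippets containing both stems.
function intersectStems(db, stemA, stemB, done) {
  var index = db.transaction('snippets', 'readonly')
                .objectStore('snippets').index('stems');
  var matches = [];
  var cursorA, cursorB;
  var waiting = 2;                       // cursor callbacks still outstanding

  index.openKeyCursor(IDBKeyRange.only(stemA)).onsuccess = function (e) {
    cursorA = e.target.result;
    if (--waiting === 0) step();
  };
  index.openKeyCursor(IDBKeyRange.only(stemB)).onsuccess = function (e) {
    cursorB = e.target.result;
    if (--waiting === 0) step();
  };

  function step() {
    if (!cursorA || !cursorB) { done(matches); return; }  // one list exhausted
    var cmp = indexedDB.cmp(cursorA.primaryKey, cursorB.primaryKey);
    if (cmp === 0) {                     // same snippet contains both stems
      matches.push(cursorA.primaryKey);
      waiting = 2;
      cursorA.continue();
      cursorB.continue();
    } else if (cmp < 0) {                // A is behind: jump it forward to B's snippet
      waiting = 1;
      cursorA.continuePrimaryKey(stemA, cursorB.primaryKey);
    } else {                             // B is behind: jump it forward to A's snippet
      waiting = 1;
      cursorB.continuePrimaryKey(stemB, cursorA.primaryKey);
    }
  }
}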
While consecutive writes are pretty terrible in IDB, reads are great. Good performance across 7700 entries should be quite tenable.
The ADsafe subset of Javascript prohibits the use of certain things that are not safe for guest code to have access to, such as eval, window, this, with, and so on.
For some reason, it also prohibits the Date object and Math.random:
Date and Math.random
Access to these sources of non-determinism is restricted in order to make it easier to determine how widgets behave.
I still don't understand how using Date or Math.random would accommodate malevolence.
Can you come up with a code example where using either Date or Math.random is necessary to do something evil?
According to a slideshow posted by Douglas Crockford:
ADsafe does not allow access to Date or random
This is to allow human evaluation of ad content with confidence that behavior will not change in the future. This is for ad quality and contractual compliance, not for security.
I don't think anyone would consider them evil per se. However the crucial part of that quote is:
easier to determine how widgets behave
Obviously, Math.random() introduces non-determinism, so you can never be sure how the code will behave on each run.
What is not obvious is that Date brings similar non-determinism. If your code somehow depends on the current date, it will (again, obviously) behave differently under some conditions.
I guess it's not surprising that these two methods/objects are not functional in the mathematical sense; in other words, each call may return a different result irrespective of the arguments.
In general, there are some ways to fight this non-determinism: storing an initial random seed so you can reproduce the exact same series of random numbers (not possible with Math.random in JavaScript, which cannot be seeded), and supplying client code with some sort of TimeProvider abstraction rather than letting it create Dates everywhere.
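As a rough illustration (the names and the particular generator are just examples, not part of ADsafe), guest code could be handed a seeded generator and a clock instead of the real thing:

// Sketch: inject determinism instead of exposing Math.random and Date directly.
// makeRandom is a tiny Park-Miller-style generator: the same seed always
// reproduces the same sequence, unlike Math.random.
function makeRandom(seed) {
  return function () {
    seed = (seed * 16807) % 2147483647;
    return (seed - 1) / 2147483646;      // value in [0, 1), like Math.random
  };
}

// A "TimeProvider": the host decides what time the guest code sees.
function makeClock(fixedMillis) {
  return { now: function () { return fixedMillis; } };
}

var random = makeRandom(42);
var clock = makeClock(1000000000000);
console.log(random(), random(), clock.now());  // reproducible on every run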
According to their website, they don't include Date or Math.random to make it easier to determine how third-party code will behave. The problem here is Math.random (using Date you can make a pseudo-random number as well): they want to know how third-party code will behave, and they can't know that if the third-party code is allowed access to random numbers.
By themselves, Date and Math.random shouldn't pose security threats.
At a minimum, they allow you to write loops that cannot be shown to be non-terminating but may run for a very long time.
The quote you exhibit seems to suggest that a certain amount of static analysis is being done (or is at least contemplated), and these features make it much harder. Mind you, these restrictions aren't enough to actually prevent you from writing difficult-to-statically-analyze code.
I agree with you that it's a strange limitation.
The justification that using Date or random numbers would make it difficult to predict widget behavior is of course nonsense. For example, implement a simple counter, compute the SHA-1 of the current count, and then act depending on the result. I don't think it's any easier to predict what the widget will do in the long term compared to using a random number or the date... short of running it forever.
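A toy version of that idea, with a small integer mixing function standing in for the SHA-1 mentioned above:

// Deterministic, yet hard to predict without actually running it:
// a counter plus a hash, and no Date or Math.random anywhere.
var counter = 0;

function mix(n) {          // one xorshift32 step, a stand-in for SHA-1
  n ^= n << 13;
  n ^= n >>> 17;
  n ^= n << 5;
  return n >>> 0;
}

function nextAction() {
  counter += 1;
  return (mix(counter) % 2 === 0) ? 'behave nicely' : 'do something else';
}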
The history of math has shown that trying to classify functions by how they compute their value is a path that leads nowhere... the only sensible solution is classifying them by their actual results (the black-box approach).
I have some code that sorts table columns by object properties. It occurred to me that in Japanese or Chinese (non-alphabetical languages), the strings that are sent to the sort function would be compared the same way strings in an alphabetical language would be.
Take for example a list of Japanese surnames:
寿拘 (Suzuki)
松坂 (Matsuzaka)
松井 (Matsui)
山田 (Yamada)
藤本 (Fujimoto)
When I sort the above list via Javascript, the result is:
寿拘 (Suzuki)
山田 (Yamada)
松井 (Matsui)
松坂 (Matsuzaka)
藤本 (Fujimoto)
This is different from the ordering of the Japanese syllabary, which would arrange the list phonetically (the way a Japanese dictionary would):
寿拘 (Suzuki)
藤本 (Fujimoto)
松井 (Matsui)
松坂 (Matsuzaka)
山田 (Yamada)
What I want to know is:
Does one double-byte character really get compared against the other in a sort function?
What really goes on in such a sort?
(Extra credit) Does the result of such a sort mean anything at all? Does the concept of sorting really work in Asian (and other) languages? If so, what does it mean and what should one strive for in creating a compare function for those languages?
ADDENDUM TO SUMMARIZE ANSWERS AND DRAW CONCLUSIONS:
First, thanks to all who contributed to the discussion. This has been very informative and helpful. Special shout-outs to bobince, Lie Ryan, Gumbo, Jeffrey Zheng, and Larry K, for their in-depth and thoughtful analyses. I awarded the check mark to Larry K for pointing me toward a solution my question failed to foresee, but I up-ticked every answer I found useful.
The consensus appears to be that:
Chinese and Japanese character strings are sorted by Unicode code points, and their ordering may be predicated on a rationale that may be in some way intelligible to knowledgeable readers but is not likely to be of much practical value in helping users to find the information they're seeking.
The kind of compare function that would be required to make a sort semantically or phonetically useful is far too cumbersome to consider pursuing, especially since the results would probably be less than satisfactory, and in any case the comparison algorithms would have to be changed for each language. Best just to allow the sort to proceed without even attempting a compare function.
I was probably asking the wrong question here. That is, I was thinking too much "inside the box" without considering that the real question is not how do I make sorting useful in these languages, but how do I provide the user with a useful way of finding items in a list. Westerners automatically think of sorting for this purpose, and I was guilty of that. Larry K pointed me to a Wikipedia article that suggests a filtering function might be more useful for Asian readers. This is what I plan to pursue, as it's at least as fast as sorting, client-side. I will keep the column sorting because it's well understood in Western languages, and because speakers of any language would find the sorting of dates and other numerical-based data types useful. But I will also add that filtering mechanism, which would be useful in long lists for any language.
Does one double-byte character really get compared against the other in a sort function?
The native String type in JavaScript is based on UTF-16 code units, and that's what gets compared. For characters in the Basic Multilingual Plane (which all these are), this is the same as Unicode code points.
The term ‘double-byte’ as in encodings like Shift-JIS has no meaning in a web context: DOM and JavaScript strings are natively Unicode; the original bytes in the encoded page received by the browser are long gone.
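You can see this directly by printing the first UTF-16 code unit of each surname; the sort order observed above simply follows these numeric values:

// Sort the surnames with the default comparison and print the code unit
// (in hex) of each one's first character; the order tracks these numbers.
var names = ['寿拘', '松坂', '松井', '山田', '藤本'];
names.slice().sort().forEach(function (name) {
  console.log(name, 'U+' + name.charCodeAt(0).toString(16).toUpperCase());
});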
Does the result of such a sort mean anything at all?
Little. Unicode code points do not claim to offer any particular ordering... for one, because there is no globally-accepted ordering. Even for the most basic case of ASCII Latin characters, languages disagree (eg. on whether v and w are the same letter, or whether the uppercase of i is I or İ). And CJK gets much gnarlier than that.
The main Unicode CJK Unified Ideographs block happens to be ordered by radical and number of strokes (Kangxi dictionary order), which may be vaguely useful. But use characters from any of the other CJK extension blocks, or mix in some kana, or romaji, and there will be no meaningful ordering between them.
The Unicode Consortium do attempt to define some general ordering rules, but it's complex and not generally attempted at a language level. Systems that really need language-sensitive sorting abilities (eg. OSes, databases) tend to have their own collation schemes.
This is different from the ordering of the Japanese syllabary
Yes. Above and beyond collation issues in general, it's a massively difficult task to handle kanji accurately by syllable, because you have to guess at the pronunciation. JavaScript can't realistically know that by ‘藤本’ you mean ‘Fujimoto’ and not ‘touhon’; this sort of thing requires in-depth built-in dictionaries and still-unreliable heuristics... not the sort of thing you want to build into a programming language.
You could implement the Unicode Collation Algorithm in Javascript if you want something better than the default JS sort for strings. Might improve some things. Though as the Unicode doc states:
Collation is not uniform; it varies according to language and culture: Germans, French and Swedes sort the same characters differently. It may also vary by specific application: even within the same language, dictionaries may sort differently than phonebooks or book indices. For non-alphabetic scripts such as East Asian ideographs, collation can be either phonetic or based on the appearance of the character.
The Wikipedia article points out that since collation is so tough in non-alphabetic scripts, nowadays the answer is to make it very easy to look up information by entering characters, rather than by looking through a list.
I suggest that you talk to truly knowledgeable end users of your application to see how they would best like it to behave. The problem of ordering Chinese characters is not unique to your application.
Also, if you don't want to implement the collation in your system, another solution would be for you to create an Ajax service that stores the names in a MySQL or other database, then looks up the data with an ORDER BY statement.
Strings are compared character by character where the code point value defines the order:
The comparison of strings uses a simple lexicographic ordering on sequences of code point values. There is no attempt to use the more complex, semantically oriented definitions of character or string equality and collating order defined in the Unicode specification. Therefore strings that are canonically equal according to the Unicode standard could test as unequal. In effect this algorithm assumes that both strings are already in normalised form.
If you need more than this, you will need to use a string comparison that can take collations into account.
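In environments that implement the ECMAScript Internationalization API, the built-in locale-aware comparison looks like this (a sketch; how well it handles kanji still depends on the underlying collation data):

// localeCompare with a locale argument requests locale-aware collation
// instead of raw code unit order (support and quality vary by environment).
var names = ['寿拘', '松坂', '松井', '山田', '藤本'];
names.sort(function (a, b) {
  return a.localeCompare(b, 'ja');
});
console.log(names);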
Others have answered the other questions; I will take on this one:
what should one strive for in creating a compare function for those languages?
One way to do it is to create a program that can "read" the characters; that is, one able to map hanzi/kanji characters to their "sound" (pinyin/hiragana reading). At the simplest level, this means a database that maps hanzi/kanji to sounds. Of course this is more difficult than it sounds (pun not intended), since a lot of characters can have different pronunciations in different contexts, and Chinese has many different dialects to consider.
Another way is to sort by stroke order. This means there would need to be a database that maps hanzi/kanji to their strokes. Another problem: Chinese and Japanese write in different stroke orders. However, aside from the Japanese/Chinese difference, using stroke ordering is much more consistent within a single text, since hanzi/kanji characters are almost always written using the same stroke order irrespective of what they mean or how they are read. A similar idea is to sort by radicals instead of plain stroke order.
The third way is sorting by Unicode code points. This is simple and always gives an indisputably consistent ordering; however, the problem is that the sort order is meaningless to humans.
The last way is to rethink the need for absolute ordering, and just use some heuristic to sort by relevance to the user's needs. For example, in shopping cart software, you can sort depending on the user's buying habits or by price. This kinda avoids the problem, and most of the time it works (except if you're compiling a dictionary).
As you notice, the first two methods require creating a huge database of one-to-many mappings, and they still don't always give a useful result. The third method also requires a huge database, but many programming languages already have it built in. The last way is a bit of a heuristic, and probably the most useful; however, it is doomed to never give a consistent ordering (much worse than the first two methods).
Yes, the characters get compared. They are usually compared based on their Unicode code points, though, which are quite different between hiragana and kanji, making the sort potentially useless in Japanese. (Kanji were borrowed from Chinese, but the order in which they'd appear in Chinese doesn't correspond to the order of the hiragana that'd represent the same meaning.) There are collations that could render some of the characters "equal" for comparison purposes, but I don't know if there's one that'll consider a kanji to be equivalent to the hiragana that'd comprise its pronunciation, especially since a character can have a number of different pronunciations.
In Chinese or Korean, or other languages that don't have 3 different alphabets (one of which is quite irregular), it'd probably be less of an issue.
Those are sorted by code point value, ascending. This is certainly meaningless for human readers. It's not impossible to devise a sensible sorting scheme for Japanese, but sorting Chinese characters is hard (partly because we don't necessarily know whether we're looking at Japanese or Chinese), and a lot of programmers punt to this solution.
The normal string comparison functions in many programming languages are designed to ensure that strings can be sorted into a unique order, to allow algorithms like binary search and duplicate-detection to work correctly. To sort data in a fashion meaningful to a human reader, one must know what the data represents. For example, in a list of English movie titles, "El Mariachi" would typically sort under "E", but in a list of Spanish movie titles it would sort under "M". The application will need information beyond that contained in the strings themselves to know how the strings should be sorted.
The answers to Q1 (can you sort) and Q3 (is sort meaningful) are both "yes" for Chinese (from a mainland perspective). For Q2 (how to sort):
All Chinese characters have a definite pronunciation (some are polyphonic) as defined in pinyin, and it's far more common (as in virtually all Chinese dictionaries) to sort by pinyin, where there is no ambiguity. Characters with the same pronunciation are then sorted by stroke order.
The polyphonic characters pose an extra challenge for sorting, as their pinyin usually depends on the word they are in (I heard Japanese characters could be even hairier). For example, the character 阿 is pronounced a(1) in 阿姨 (tone in parentheses), and e(1) in 阿胶. So if you need to sort words or sentences, you cannot simply look at one character at a time from each item.
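A sketch of what sorting by pinyin could look like with a tiny, hypothetical reading table; a real table would need many thousands of entries plus word-level context for the polyphonic cases:

// Tiny, hypothetical character -> pinyin table; real data would be huge and
// would need word-level context for polyphonic characters such as 阿.
var PINYIN = { '山': 'shan1', '田': 'tian2', '松': 'song1', '井': 'jing3' };

function pinyinCompare(a, b) {
  for (var i = 0; i < Math.min(a.length, b.length); i++) {
    var pa = PINYIN[a[i]] || a[i];       // fall back to the raw character
    var pb = PINYIN[b[i]] || b[i];
    if (pa !== pb) return pa < pb ? -1 : 1;
  }
  return a.length - b.length;            // shorter string first on a tie
}

['山田', '松井'].sort(pinyinCompare);    // 山田 (shan...) sorts before 松井 (song...)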
Recall that in JavaScript, you can pass a comparison function into sort(), in which you can implement the ordering yourself, in order to achieve a sort that matters to humans:
myarray.sort(function (a, b) {
  // Return a negative number, zero, or a positive number based on the
  // comparison of the two strings; localeCompare is one locale-aware option.
  return a.localeCompare(b);
});