Break this regex pattern - javascript

Sorry for the somewhat misleading (though partly truthful) title.
I am currently trying to capture several date formats within a given sentence. I will add more as time permits and necessity arises, but for the most part this is what I have:
https://regex101.com/r/vV0uZ3/1
The main reason I said "break this" is that I can't think of any other date formats that could trip it up. I also feel that it is inefficient; I'm semi-new to regexes, and while this seems to work, I suspect there is a better way to do it. This is somewhat time-insensitive data processing, but speed is always welcome in the end.
Shortened scope: are there (base) formats of date inputs that this would not be able to pull in?
Also, would it be better to use the case-insensitive flag i, or to pull the full (base) alphabet into the class as [A-z], for time and solution comprehension?
EDIT:
I may be misrepresenting my question a bit, so here goes.
I will more or less be reading in human input, mostly American date formats, within a document.
I have seen the comments made, and they are definitely sensible; I'm just not sure those edge cases are plausible at the moment. Human randomness is a beautiful thing, though. So, side question: would a regex that could catch all of these date formats (n/m/e | n-m-e | n m, e | ...) heavily bog down my code in the long run, and would the trade-off in malleability be worth the slight loss in efficiency?
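For concreteness, here is a minimal sketch of the kind of combined pattern I mean (hypothetical, not the pattern from the regex101 link; it covers only the slash, dash, and month-name shapes):
// Hypothetical combined pattern covering 1/2/2016, 1-2-2016, and
// "Jan 2, 2016" style inputs, using the case-insensitive flag.
const datePattern = /\b(?:\d{1,2}[\/-]\d{1,2}[\/-]\d{2,4}|(?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)[a-z]*\.?\s+\d{1,2},?\s+\d{2,4})\b/gi;

const text = 'Meet on 3/14/2016, again on 3-15-2016, and finally March 16, 2016.';
console.log(text.match(datePattern));
// -> ["3/14/2016", "3-15-2016", "March 16, 2016"]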

Dealing with illegal characters (apostrophes) in a TXT file with Node.js

I am relying on .txt files being sent externally in Node.js that sometimes contain what I would class as "illegal" characters, such as apostrophes and commas, resulting from copying and pasting from webpages and programs such as Microsoft Word.
How can I get Node.js (or plain JavaScript) to replace these incorrectly formatted characters, such as apostrophes, with correctly formatted ones, or strip out any illegal characters entirely?
Here is an example from a web page and shown in PasteBin:
Resilience is what happens when we’re able to move forward even when things don’t fit together the way we expect.
And tolerances are an engineer’s measurement of how well the parts meet spec. (The word ‘precision’ comes to mind). A 2018 Lexus is better than 1968 Camaro because every single part in the car fits together dramatically better. The tolerances are more narrow now.
One way to ensure that things work out the way you hope is to spend the time and money to ensure that every part, every form, every worker meets spec. Tighten your spec, increase precision and you’ll discover that systems become more reliable.
The other alternative is to embrace the fact that nothing is ever exactly on spec, and to build resilient systems.
You’ll probably find that while precision feels like the way forward, resilience, the ability to thrive when things go wrong, is a much safer bet.
The trap? Hoping for one, the other or both but not doing the work to make it likely. What will you do when it doesn’t work?
Neither resilience nor tolerances get better on their own.
https://pastebin.com/uJ7GAKk4
Copied from the following URL and pasted into Notepad and saved
https://seths.blog/storyoftheweek/
You could use a RegExp to remove the unwanted characters:
// text is the pasted text; the pasted apostrophes are the curly
// U+2019 kind, so match those too (the m flag was unnecessary here)
var filtered = text.replace(/[\u2019',]/g, '');
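If you would rather normalize than strip, here is a sketch that maps common "smart" punctuation to ASCII equivalents (the mapping below is an assumed starter set, not exhaustive):
// Map curly quotes and dashes to plain ASCII before further processing.
const replacements = {
  '\u2018': "'", // left single quote
  '\u2019': "'", // right single quote / apostrophe
  '\u201C': '"', // left double quote
  '\u201D': '"', // right double quote
  '\u2013': '-', // en dash
  '\u2014': '-'  // em dash
};

function normalize(text) {
  return text.replace(/[\u2018\u2019\u201C\u201D\u2013\u2014]/g, function (ch) {
    return replacements[ch];
  });
}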

Store language (ISO 639) as Number

I'm working on a MongoDB database and so far have stored some information as Numbers instead of Strings because I assumed that would be more efficient. For example, I store countries following ISO 3166-1 numeric and sex following ISO/IEC 5218. But so far I have not found a similar standard for languages, ISO 639 does not appear to have a matching list of numeric codes.
What would be the right way to do this? Should I just use the String codes?
Thanks!
If you're a fan of numbers, you can use country calling codes, although they "only" represent the ITU members (193 countries according to Wikipedia). But hey, they have Somalia and Palestine, so that's a good hint about how global the list is.
However, storing everything in an encoded format (numbers here) implies a decoding step on the fly whenever a piece of data is requested, with translation tables stored in RAM instead of the DB's disk. That decoding probably happens on the server, whose CPU is precious, or you may have offloaded the issue to the client, overworking the precious, time-critical server-client link in the process.
So, back in the 90s, when a 40 MB HDD was expensive, that might have been interesting. Today, the ratio of the cost of storing data to the cost of processing it is no longer on the same side of 1, not counting the time it takes you to think through and implement the transformations. All of this being said "IMHO", I think this level of efficiency actually kills efficiency. ;)
EDIT: Oops, I just realized I misthought (does that verb even exist?) the country/language issue. Countries you have sorted out already; my bad. I know of no numbered list of languages. However, the second part of this post might still be relevant...
If you are after raw performance and/or want really small data sizes, I would suggest you use either the three-letter codes of ISO 639-2 (higher granularity) or the two-letter codes of ISO 639-1 (lower granularity).
To my knowledge, no programming language that I know of has a helper for this standard built in, so you'd need to build your own translator (code <-> full name), which should be trivial, though.
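A minimal sketch of such a translator (the three entries are a sample; a real table would carry the full ISO 639-1 list):
// Sample mapping only; load the complete ISO 639-1 table in practice.
const codeToName = {
  en: 'English',
  fr: 'French',
  ja: 'Japanese'
};
const nameToCode = Object.fromEntries(
  Object.entries(codeToName).map(([code, name]) => [name, code])
);

console.log(codeToName['ja']);     // "Japanese"
console.log(nameToCode['French']); // "fr"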
And as others have already mentioned, you have to assess the costs involved (e.g. no longer being able to simply look at the data and understand it right away) for yourself. I personally recommend keeping data sizes small, since BSON parsing and string operations are horribly expensive compared to dealing with numbers (or shorter strings, for that matter). When dealing with small data sets, this won't make a noticeable difference. If, however, you need to churn through millions of documents or more, optimizations like this can become mission-critical.

Parsing a string into a custom object based on different criteria

As part of a small project I'm working on, I need to be able to parse a string into a custom object, which represents an action, date and a few other properties. The tricky part is that the input string can come in a variety of flavors that all need to be properly parsed.
Input strings may be in the following formats:
Go to work tomorrow at 9am
Wash my car on Monday, at 3 pm
Call the doctor next Tuesday at 10am
Fill out the rebate form in 3 days at 2:30pm
Wake me up every day at 7:00am
And the output object would look something like this:
{
  "Action": "Wash my car",
  "DateTime": "2011-12-26 3:00PM", // format is irrelevant at this point
  "Recurring": false,
  "RecurrenceType": ""
}
At first I thought of constructing some sort of tree to represent different states (On, In, Every, etc.) with different outcomes and further states (candidate for a state machine, right?). However, the more I thought about it, the more it started looking like a grammar-parsing problem. Given the (limited) number of ways the sentence can be formed, it looks like some sort of grammar-parsing algorithm would need to be implemented.
In addition, I'm doing this on the front end, so JavaScript is the language of choice here. Back end will be written in Python and could be used by calling AJAX methods, if necessary, but I'd prefer to keep it all in JavaScript. (To be honest, I don't think the language is a big issue here).
So, am I in way over my head? I have a strong JavaScript background, but nothing beyond school courses when it comes to language design, parsing, etc. Is there a better way to solve this problem? Any suggestions are greatly appreciated.
I don't know a lot about grammar parsing, but something here might help.
My first thought is that your sentence syntax seems pretty consistent: the first three or four words are generally VERB + text + NOUN, followed by some form of time. If the total options for what form the sentence can take are limited, you can hard-code some parsing rules, as in the sketch below.
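A rough sketch of such hard-coded rules (hypothetical; it covers only the sample sentences above, splits on the first time keyword, and stops short of converting the time phrase to a real DateTime):
// Hypothetical rule set: split the action from the time phrase on the
// first time keyword, and flag "every" as recurring.
function parseReminder(input) {
  const match = input.match(/^(.*?)\s+(?:on|at|in|next|every|tomorrow)\b/i);
  if (!match) return null;
  const recurring = /\bevery\b/i.test(input);
  return {
    Action: match[1].trim(),
    TimePhrase: input.slice(match[1].length).trim(),
    Recurring: recurring,
    RecurrenceType: recurring ? 'daily' : '' // naive guess
  };
}

console.log(parseReminder('Wash my car on Monday, at 3 pm'));
// -> { Action: 'Wash my car', TimePhrase: 'on Monday, at 3 pm',
//      Recurring: false, RecurrenceType: '' }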
I also ran across a couple of js grammar parsers that might get you somewhere:
http://jscc.jmksf.com/
http://pegjs.majda.cz/
http://www.corion.net/perl-dev/Javascript-Grammar.html
This is an interesting problem you have. Please update this with your solutions later.

tunable diff algorithm

I'm interested in finding a more-sophisticated-than-typical algorithm for finding differences between strings, that can be "tuned" via some parameters, to balance between such things as "maximize count of identical characters" vs. "maximize the length of spans" vs. "try to keep whole words intact".
Ultimately, I want to make the results as human-readable as possible. For instance, if a long sentence has been replaced with an entirely new sentence, where the only things it has in common with the original are the words "the", "and", and "a", in that order, I might want it treated as if the whole sentence is changed, rather than just that 4 particular spans are changed, just like how a reasonable person would see it.
Does such a thing exist? Although I'm working in javascript/node.js, an algorithm in any language would be helpful.
I'm actually ok with something that uses Monte Carlo methods or the like, if its results are better. Computation time is not an issue (within reason), nor is determinism.
Note: although this is beyond the scope of what I'm asking, I'll throw one more thing out there just in case: it would also be great if it could recognize changes that are out of order. For instance, if someone changes the order of two paragraphs while leaving them otherwise identical, it would be awesome if it recognized that as a simple move, rather than as one subtraction and one unrelated addition.
I've had good luck with diff_match_patch. It has some good options for tuning its output for readability.
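A sketch of that tuning (API names as documented for Google's diff-match-patch; install with npm install diff-match-patch):
// diff_cleanupSemantic folds trivial common fragments into larger,
// human-readable spans.
const DiffMatchPatch = require('diff-match-patch');
const dmp = new DiffMatchPatch();

const diffs = dmp.diff_main('The quick brown fox', 'That quack brown fox');
dmp.diff_cleanupSemantic(diffs);
console.log(diffs); // array of [op, text] pairs: -1 delete, 0 equal, 1 insert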
Try http://prettydiff.com/. Its code is already formatted for compatibility with CommonJS, the module system Node uses.

What does sorting mean in non-alphabetic (i.e., Asian) languages?

I have some code that sorts table columns by object properties. It occurred to me that in Japanese or Chinese (non-alphabetical languages), the strings sent to the sort function would be compared the way strings in an alphabetical language would be.
Take for example a list of Japanese surnames:
寿拘 (Suzuki)
松坂 (Matsuzaka)
松井 (Matsui)
山田 (Yamada)
藤本 (Fujimoto)
When I sort the above list via Javascript, the result is:
寿拘 (Suzuki)
山田 (Yamada)
松井 (Matsui)
松坂 (Matsuzaka)
藤本 (Fujimoto)
This is different from the ordering of the Japanese syllabary, which would arrange the list phonetically (the way a Japanese dictionary would):
寿拘 (Suzuki)
藤本 (Fujimoto)
松井 (Matsui)
松坂 (Matsuzaka)
山田 (Yamada)
What I want to know is:
Does one double-byte character really get compared against the other in a sort function?
What really goes on in such a sort?
(Extra credit) Does the result of such a sort mean anything at all? Does the concept of sorting really work in Asian (and other) languages? If so, what does it mean and what should one strive for in creating a compare function for those languages?
ADDENDUM TO SUMMARIZE ANSWERS AND DRAW CONCLUSIONS:
First, thanks to all who contributed to the discussion. This has been very informative and helpful. Special shout-outs to bobince, Lie Ryan, Gumbo, Jeffrey Zheng, and Larry K, for their in-depth and thoughtful analyses. I awarded the check mark to Larry K for pointing me toward a solution my question failed to foresee, but I up-ticked every answer I found useful.
The consensus appears to be that:
Chinese and Japanese character strings are sorted by Unicode code points. Their ordering may be predicated on a rationale that is in some way intelligible to knowledgeable readers, but it is not likely to be of much practical value in helping users find the information they're seeking.
The kind of compare function that would be required to make a sort semantically or phonetically useful is far too cumbersome to consider pursuing, especially since the results would probably be less than satisfactory, and in any case the comparison algorithms would have to be changed for each language. Best just to allow the sort to proceed without even attempting a compare function.
I was probably asking the wrong question here. That is, I was thinking too much "inside the box" without considering that the real question is not how do I make sorting useful in these languages, but how do I provide the user with a useful way of finding items in a list. Westerners automatically think of sorting for this purpose, and I was guilty of that. Larry K pointed me to a Wikipedia article that suggests a filtering function might be more useful for Asian readers. This is what I plan to pursue, as it's at least as fast as sorting, client-side. I will keep the column sorting because it's well understood in Western languages, and because speakers of any language would find the sorting of dates and other numerical-based data types useful. But I will also add that filtering mechanism, which would be useful in long lists for any language.
Does one double-byte character really get compared against the other in a sort function?
The native String type in JavaScript is based on UTF-16 code units, and that's what gets compared. For characters in the Basic Multilingual Plane (which all these are), this is the same as Unicode code points.
The term ‘double-byte’ as in encodings like Shift-JIS has no meaning in a web context: DOM and JavaScript strings are natively Unicode, the original bytes in the encoded page received by the browser are long gone.
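To see what that comparison actually does with the sample names:
// '寿' is U+5BFF and '山' is U+5C71, so '寿拘' sorts before '山田'.
console.log('寿拘' < '山田'); // true
// '松' is U+677E and '藤' is U+85E4, so '松井' sorts before '藤本'.
console.log('松井' < '藤本'); // true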
Does the result of such a sort mean anything at all?
Little. Unicode code points do not claim to offer any particular ordering... for one, because there is no globally-accepted ordering. Even for the most basic case of ASCII Latin characters, languages disagree (eg. on whether v and w are the same letter, or whether the uppercase of i is I or İ). And CJK gets much gnarlier than that.
The main Unicode CJK Unified Ideographs block happens to be ordered by radical and number of strokes (Kangxi dictionary order), which may be vaguely useful. But use characters from any of the other CJK extension blocks, or mix in some kana, or romaji, and there will be no meaningful ordering between them.
The Unicode Consortium do attempt to define some general ordering rules, but it's complex and not generally attempted at a language level. Systems that really need language-sensitive sorting abilities (eg. OSes, databases) tend to have their own collation schemes.
This is different from the ordering of the Japanese syllabary
Yes. Above and beyond collation issues in general, it's a massively difficult task to handle kanji accurately by syllable, because you have to guess at the pronunciation. JavaScript can't realistically know that by ‘藤本’ you mean ‘Fujimoto’ and not ‘touhon’; this sort of thing requires in-depth built-in dictionaries and still-unreliable heuristics... not the sort of thing you want to build in to a programming language.
You could implement the Unicode Collation Algorithm in Javascript if you want something better than the default JS sort for strings. Might improve some things. Though as the Unicode doc states:
Collation is not uniform; it varies according to language and culture: Germans, French and Swedes sort the same characters differently. It may also vary by specific application: even within the same language, dictionaries may sort differently than phonebooks or book indices. For non-alphabetic scripts such as East Asian ideographs, collation can be either phonetic or based on the appearance of the character.
The Wikipedia article points out that since collation is so tough in non-alphabetic scripts, nowadays the answer is to make it very easy to look up information by entering characters, rather than by looking through a list.
I suggest that you talk to truly knowledgeable end users of your application to see how they would best like it to behave. The problem of ordering Chinese characters is not unique to your application.
Also, if you don't want to implement the collation in your system, another solution would be for you to create an Ajax service that stores the names in MySQL or another database, then looks up the data with an ORDER BY statement.
Strings are compared character by character, where the code unit value defines the order:
The comparison of strings uses a simple lexicographic ordering on sequences of code unit values. There is no attempt to use the more complex, semantically oriented definitions of character or string equality and collating order defined in the Unicode specification. Therefore strings that are canonically equal according to the Unicode standard could test as unequal. In effect this algorithm assumes that both strings are already in normalised form.
If you need more than this, you will need to use a string comparison that can take collations into account.
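In modern engines, Intl.Collator provides such a comparison (it postdates this question, and its collation data varies by runtime, so kanji ordering may still not be phonetic):
// Locale-aware comparison; results depend on the runtime's ICU data.
const names = ['寿拘', '松坂', '松井', '山田', '藤本'];
const collator = new Intl.Collator('ja');
console.log([...names].sort(collator.compare));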
Others have answered the other questions; I will take on this one:
what should one strive for in creating a compare function for those languages?
One way to do it is to create a program that can "read" the characters, that is, map hanzi/kanji characters to their "sound" (pinyin/hiragana reading). At the simplest level, this means a database that maps hanzi/kanji to sounds. Of course, this is more difficult than it sounds (pun not intended), since a lot of characters have different pronunciations in different contexts, and Chinese has many different dialects to consider.
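A toy sketch of that lookup-table idea (the readings below are a hand-built sample keyed by whole surname; a real table would cover thousands of characters and handle multiple readings):
// Kana readings sort in roughly dictionary order, so sorting by
// reading yields the phonetic order from the question.
const readings = {
  '寿拘': 'すずき',
  '松坂': 'まつざか',
  '松井': 'まつい',
  '山田': 'やまだ',
  '藤本': 'ふじもと'
};
const names = Object.keys(readings);
names.sort((a, b) => readings[a].localeCompare(readings[b], 'ja'));
console.log(names); // 寿拘, 藤本, 松井, 松坂, 山田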
Another way is to order by stroke order. This means there would need to be a database that maps hanzi/kanji to their strokes. One problem: Chinese and Japanese write with different stroke orders. However, the Japanese/Chinese difference aside, stroke ordering is much more consistent within a single text, since hanzi/kanji characters are almost always written using the same stroke order irrespective of what they mean or how they are read. A similar idea is to sort by radicals instead of plain stroke order.
The third way is sorting by Unicode code points. This is simple and always gives an indisputably consistent ordering; however, the problem is that the sort order is meaningless to humans.
The last way is to rethink the need for absolute ordering and just use some heuristic to sort by relevance to the user's need. For example, in shopping cart software, you can sort depending on the user's buying habits or by price. This kinda avoids the problem, but most of the time it works (except if you're compiling a dictionary).
As you can see, the first two methods require building a huge one-to-many mapping database, and they still don't always give a useful result. The third method also requires a huge database, but many programming languages already have it built in. The last way is a bit of a heuristic, and probably the most useful, but it is doomed to never give consistent ordering (much worse than the first two methods).
Yes, the characters get compared. They are usually compared based on their Unicode code points, though, which are quite different between hiragana and kanji, making the sort potentially useless in Japanese. (Kanji were borrowed from Chinese, but the order they'd appear in Chinese doesn't correspond to the order of the hiragana that would represent the same meaning.) There are collations that could render some of the characters "equal" for comparison purposes, but I don't know if there's one that would consider a kanji equivalent to the hiragana comprising its pronunciation, especially since a character can have a number of different pronunciations.
In Chinese or Korean, or other languages that don't have three different alphabets (one of which is quite irregular), it'd probably be less of an issue.
Those are sorted by code point value, ascending. This is certainly meaningless for human readers. It's not impossible to devise a sensible sorting scheme for Japanese, but sorting Chinese characters is hard (partly because we don't necessarily know whether we're looking at Japanese or Chinese), and a lot of programmers punt to this solution.
The normal string comparison functions in many programming languages are designed to ensure that strings can be sorted into a unique order, to allow algorithms like binary search and duplicate-detection to work correctly. To sort data in a fashion meaningful to a human reader, one must know what the data represents. For example, in a list of English movie titles, "El Mariachi" would typically sort under "E", but in a list of Spanish movie titles it would sort under "M". The application will need information beyond that contained in the strings themselves to know how the strings should be sorted.
The answers to Q1 (can you sort) and Q3 (is sort meaningful) are both "yes" for Chinese (from a mainland perspective). For Q2 (how to sort):
All Chinese characters have a definite pronunciation (some are polyphonic) as defined in pinyin, and it's far more common (as in virtually all Chinese dictionaries) to sort by pinyin where there is no ambiguity. Characters with the same pronunciation are then sorted by stroke order.
The polyphonic characters pose an extra challenge for sorting, as their pinyin usually depends on the word they appear in (I hear Japanese characters can be even hairier). For example, the character 阿 is pronounced a(1) in 阿姨 (tone in parentheses) and e(1) in 阿胶. So if you need to sort words or sentences, you cannot simply look at one character at a time from each item.
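For what it's worth, modern runtimes expose pinyin collation via Intl.Collator's collation option (this postdates the question, and support depends on the runtime's ICU data):
// By code point, 上海 would come first; by pinyin, 阿姨 (āyí) should.
const words = ['上海', '北京', '阿姨'];
const pinyin = new Intl.Collator('zh', { collation: 'pinyin' });
console.log([...words].sort(pinyin.compare)); // expected: 阿姨, 北京, 上海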
Recall that in JavaScript you can pass sort() a comparison function, in which you can implement the ordering yourself to achieve a sort that matters to humans:
myarray.sort(function (a, b) {
  // Return a negative, zero, or positive number based on the comparison
  // of the two strings; localeCompare is one locale-aware option.
  return a.localeCompare(b);
});
