LinkedIn follow button language issue - JavaScript

I'm trying to get the LinkedIn "follow" button to display in other languages on my site, which already does a lot of localization based on browser settings. I've noticed that the LinkedIn API doesn't seem to accept "simple" two-letter language codes like "fr"; it only wants codes like "fr_FR". With "fr" I see an error like:
'fr' is not a supported language, defaulting to 'en_US'
This is not acceptable, since many users may pick a language without a specific country variant, which many (all?) browsers allow.
What's an easy way around this?

The answer would be a programmatic way to fake an ISO 3166 country code when the browser's accepted-languages setting contains only two letters. I don't think there's a way to be correct in every case, although for major languages there seems to be a corresponding ISO 3166 country code identical to the language code (fr_FR, de_DE, es_ES; interestingly, there is no en_EN). So the hack would be to append an underscore and the uppercase version of the two letters, if they're not there already, and let it break and default to US English for languages that have no such variant. But I'd rather do better.
It's unfortunate that LinkedIn's API behaves this way, since the standard clearly states that the country/region code is optional; see http://en.wikipedia.org/wiki/IETF_language_tag
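A sketch of that hack in JavaScript (the special-case map is my assumption; extend it for the languages your site actually supports):

```javascript
// Hypothetical helper: normalize a browser language code to the
// "ll_CC" form LinkedIn expects. The special-case map is an assumption.
var SPECIAL_CASES = {
  en: "en_US"  // there is no en_EN; add more exceptions as needed
};

function toLinkedInLocale(lang) {
  // Browsers report "fr-FR"; LinkedIn wants "fr_FR".
  lang = String(lang).replace("-", "_");
  if (/^[a-z]{2}_[A-Z]{2}$/.test(lang)) {
    return lang;  // already in the full form
  }
  var code = lang.slice(0, 2).toLowerCase();
  if (SPECIAL_CASES[code]) {
    return SPECIAL_CASES[code];
  }
  // The hack described above: duplicate the language code as the country code.
  return code + "_" + code.toUpperCase();
}
```

For languages where the duplicated code is not a real LinkedIn locale, this still falls back to LinkedIn's own en_US default, so it is no worse than passing the bare two-letter code.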

Related

HTML textarea spellcheck: How to tell if user makes spelling errors

I have a textarea that is defined thus:
<textarea spellcheck="true"></textarea>
When users type, spelling mistakes are highlighted for them (with a red underline, in my browser). Is there any way (using jQuery) to check whether there are spelling mistakes before a user submits the form?
This is what I want to achieve:
Form input textarea: [Typing some text in thsi box] [submit]
Before the user clicks submit, I would like a listener to "catch" the fact that "thsi" was spelled incorrectly and prompt the user. Is there any way to do this via the HTML5 spellcheck attribute, or do I have to use a custom JavaScript function for the spell checking and listening?
A quick search brought up this jQuery plug-in, which seems to do exactly what you want; it uses the Google spell-checking API: https://code.google.com/p/jquery-spellchecker/wiki/Documentation
There is also Typo.js (https://github.com/cfinke/Typo.js), a client-side library. It does not use any API; instead it uses Hunspell-style dictionaries, and it ships only with American English ("en_US").
If you don't want to use a plug-in or an existing code snippet, you can build your own by sending an Ajax request to check the typed text against a service provider (Google in this case).
You can use a jQuery plugin for spell checking. I hope this helps you; thanks.
JavaScript SpellCheck
http://www.javascriptspellcheck.com/
If you have to build it natively, you might consider building a trie data structure for this.
Check this Trie reference 1
Trie reference 2
Hope this helps.
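If you go that route, a minimal trie sketch in JavaScript might look like this (the sample word list is just for illustration; a real checker would load a full dictionary):

```javascript
// A minimal trie for dictionary lookups: insert words, then test
// each typed word against it before the form is submitted.
function TrieNode() {
  this.children = {};
  this.isWord = false;
}

function Trie() {
  this.root = new TrieNode();
}

Trie.prototype.insert = function (word) {
  var node = this.root;
  for (var i = 0; i < word.length; i++) {
    var ch = word[i];
    if (!node.children[ch]) node.children[ch] = new TrieNode();
    node = node.children[ch];
  }
  node.isWord = true;
};

Trie.prototype.contains = function (word) {
  var node = this.root;
  for (var i = 0; i < word.length; i++) {
    node = node.children[word[i]];
    if (!node) return false;
  }
  return node.isWord;
};

var dict = new Trie();
["this", "is", "a", "test"].forEach(function (w) { dict.insert(w); });
// "thsi" is not in the dictionary, so it would be flagged as a typo.
```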
You have different ways to achieve it, depending on whether your spell checking has to be focused on a subject (like medical vocabulary) or is general.
Create your own dictionary (not the best choice, and too long to make)
Make a query to online dictionaries like Google
Try JSpell Evolution (the installation is a little annoying, but once done it works very well): Jspell Evolution website
You can look at Typo.js: typo.js article
Yesterday I found this article, which is ten times better than the others:
Article for JavaScript spell-check locales, where you can also get spelling for other languages/locales, not only the English locale.

How to implement different languages on html page

I am just a newcomer developing an app with HTML/CSS/JS via PhoneGap. I've been searching for info on how to make my app display in different languages, and Google doesn't understand me.
The idea is to have a button on index.html that lets the user choose the language in which the app is displayed, in this case Spanish/English, nothing exotic like Arabic.
So I guess the solution must involve turning all the text that I load into the HTML into variables and then, depending on the language selected, displaying the correct one. I have no idea how to do this, and I'm not able to find examples. So that's what I'm asking for... if someone could give me a code snippet showing how these variables work, and how I should save the user's language selection...
Appreciated, guys!
This can be done with internationalization (i18n). To do it you need a separate file for each language, containing all your text. Search Google for internationalization.
Otherwise you can look into embedding Google Translate.
This depends on the complexity of language-dependencies in the application. If you have just a handful of short texts in a strongly graphic application, you can just store the texts in JavaScript variables or, better, in properties of an object, with one object per language.
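A minimal sketch of that object-per-language approach (the keys, strings, and persistence mechanism are illustrative assumptions):

```javascript
// One object per language; every user-visible string gets a key.
var texts = {
  en: { greeting: "Welcome", button: "Change language" },
  es: { greeting: "Bienvenido", button: "Cambiar idioma" }
};

var currentLang = "en";

function setLanguage(lang) {
  if (!texts[lang]) return;  // ignore unknown languages
  currentLang = lang;
  // In a PhoneGap app you might persist the choice, e.g.:
  // localStorage.setItem("lang", lang);
}

// Look up a string in the currently selected language.
function t(key) {
  return texts[currentLang][key];
}
```

On each language switch you would then walk the page and refresh the text, e.g. `document.getElementById("greeting").textContent = t("greeting")`.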
But if you expect to encounter deeper language-dependencies as well (e.g., displaying dynamically computed decimal numbers, which should be e.g. 1.5 in English and 1,5 in Spanish), then it’s probably better to use a library like Globalize.js (described in some detail in my book Going Global with JavaScript and Globalize.js). That way you could use a unified approach, writing e.g. a string using Globalize.localize('greeting') and a number using Globalize.format(x, 'n1') and a date using Globalize.format(date, 'MMM d').

Is it possible to tell MathJax to use fonts for specific languages?

I love MathJax, but it does not support Cyrillic (or at least it looks that way). I tried this simple text as an expression in the official example; it shows all the English
letters but no Russian:
Mother Phather Love Forever Мама мыла Раму
So I wonder: how do I extend MathJax to support non-English languages?
Well, when I enter мыла, I do get the Cyrillic for it, but the result will depend on the fonts available on your computer and on what browser you are using. IE, in particular, may not find the characters on its own without being given a font to work from. So you might try something like
{\large\style{font-family:Times}{\text{мыла}}}
instead, which should work better. My results for this are
(source: dpvc at www.math.union.edu)
which seems right to me (though I don't know Russian).
It is also possible to configure MathJax to use the current page font for the results of \text{} macros, which might help you as well. The page you cite does not have that configuration, however, so forcing the font as in the example above is required in this case. You would need to set mtextFontInherit to true in the "HTML-CSS" section of your configuration for that.
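For reference, a minimal sketch of that MathJax v2 configuration, placed before the script tag that loads MathJax.js (the surrounding markup is illustrative):

```html
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    "HTML-CSS": {
      // Render the contents of \text{} (and other mtext) in the page font
      mtextFontInherit: true
    }
  });
</script>
```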

Wiktionary API - meaning of words

I would like to get the meaning of a selected word using the Wiktionary API.
The retrieved content should be the same as presented in "Word of the day": only the basic meaning, without etymology, synonyms, etc.
For example:
"postiche n
Any item of false hair worn on the head or face, such as a false beard or wig."
I tried to use the documentation, but I can't find a similar example. Can anybody help with this problem?
Although MediaWiki has an API (api.php), it might be easiest for your purposes to use the action=raw parameter to index.php if you just want to retrieve the source code of one revision (not wrapped in XML, JSON, etc., as the API would return it).
For example, this is the raw word of the day page for November 14:
http://en.wiktionary.org/w/index.php?title=Wiktionary:Word_of_the_day/November_14&action=raw
What's unfortunate is that the format of wiki pages focuses on presentation (for the human reader) rather than on semantics (for the machine), so you should not be surprised that there is no "get word definition" API command. Instead, your script will have to make sense of the numerous text formatting templates that Wiktionary editors have created and used, as well as complex presentational formatting syntax, including headings, unordered lists, and others. For example, here is the source code for the page "overflow":
http://en.wiktionary.org/w/index.php?title=overflow&action=raw
There is a "generate XML parse tree" option in the API, but it doesn't break much of the presentational formatting into XML. Just see for yourself:
http://en.wiktionary.org/w/api.php?action=query&titles=overflow&prop=revisions&rvprop=content&rvgeneratexml=&format=jsonfm
In case you are wondering whether there exists a parser for MediaWiki-format pages other than MediaWiki, no, there isn't. At least not anything written in JavaScript that's currently maintained (see list of alternative parsers, and check the web sites of the two listed ones). And even then, supporting most/all of the common templates will be a big challenge. Good luck.
OK, I admit defeat.
There are some files relating to Wiktionary in Pywikipediabot, and looking at the code, it does look like you should be able to get it to parse meaning/definition fields for you.
However, the last half hour has convinced me otherwise. The code is not well written, and I wonder if it has ever worked.
So I defer to idealmachine's answer, but I thought I would post this to save anyone else from making the same mistakes. :)
As mentioned earlier, the content of Wiktionary pages is in a human-readable format, wikitext, so the MediaWiki API doesn't let you get a word's meaning, because the data is not structured.
However, each page follows a specific convention, so it's not that hard to extract the meanings from the wikitext. Also, there are some APIs, like Wordnik or Lingua Robot, that parse Wiktionary content and provide it in JSON format.
MediaWiki does have an API but it's low-level and has no support for anything specific to each wiki. For instance it has no encyclopedia support for Wikipedia and no dictionary support for Wiktionary. You can retrieve the raw wikitext markup of a page or a section using the API but you will have to parse it yourself.
The first caveat is that each Wiktionary has evolved its own format but I assume you are only interested in the English Wiktionary. One cheap trick many tools use is to get the first line which begins with the '#' character. This will usually be the text of the definition of the first sense of the first homonym.
Another caveat is that every Wiktionary uses many wiki templates so if you are looking at the raw text you will see plenty of these. The only way to reliably expand these templates is by calling the API with action=parse.
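A sketch of that cheap trick, assuming you have already fetched the raw wikitext via action=raw (the markup-stripping regexes are simplistic assumptions and will not handle every template):

```javascript
// Extract the first definition line from raw Wiktionary wikitext:
// the first line starting with '#' that is not an example ('#:'),
// quotation ('#*'), or sub-list ('##') line.
function firstDefinition(wikitext) {
  var lines = wikitext.split("\n");
  for (var i = 0; i < lines.length; i++) {
    if (/^#[^:*#]/.test(lines[i])) {
      return lines[i]
        .replace(/^#\s*/, "")                      // drop the list marker
        .replace(/\{\{[^}]*\}\}/g, "")             // drop simple {{templates}}
        .replace(/\[\[([^\]|]*\|)?([^\]]*)\]\]/g, "$2")  // unwrap [[links]]
        .trim();
    }
  }
  return null;  // no definition line found
}
```

Nested templates, bold/italic markup, and cross-namespace links would all need extra handling; this only illustrates why a real parser is so much work.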

What ideas do you think could be applied to this GUI to make it more effective for real people?

I am talking about the Google Text Translation user interface in Google Language Tools.
I like the fact that you can get translations of text for a lot of languages. However, I think it is not always good to show all the translation options. I believe it is preferable to show, at first, only the most frequently used options for text translation.
It has really become very annoying trying to translate from English to Spanish, for example. Using the keyboard (E, Tab, then the S key repeatedly), the first three options presented are Serbian, Slovak, and Slovenian, and only then Spanish...
Another example: from English to French. Using the keyboard again (F key repeatedly) shows Filipino and Finnish before French!
What sort of ideas do you think could be applied to this GUI to make it more effective for real people?
I think it's probably fine. There are only a little over 30 languages in the list, and close to half of them are pretty common languages, so I don't think it really makes sense to put the common ones first. It's not like a country list where you have to search through 180+ countries to find yours.
The only thing I would probably do is use a cookie to store your last language selection(s).
I think the best would be an autocomplete input field similar to the one used for tags on Stack Overflow and the one used for search on Facebook. Each letter you type narrows the field of results down and allows you to easily choose the right one with either the mouse or the arrow keys.
You could also keep track of the most popular ones and sort the results by most frequently used, like Stack Overflow does with their suggested tags.
I've been frustrated with this interface as well. I think it would be a good idea to (a) use cookies to give preference to the languages this user has selected in the past; and (b) to display a limited list (4-8 languages) of the most common languages, with a "more..." option that expands the list.
I really appreciate the fact that a lot of websites and software applications have started using this approach when asking you to specify your time zone. Why display "Mid-Atlantic", "Azores", etc., if you expect 95% of your users to be in (for example) the five U.S. time zones?
The simplest way to do what you are asking is to sort by request frequency and then by alpha/numeric. This will put languages where translation requests are most common to the top. It still won't solve your problem perfectly, but it would be an easy improvement, and one that would work better for most people.
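That sort could be sketched like this (the request counts are made-up numbers for illustration):

```javascript
// Hypothetical per-language translation-request counts.
var requestCounts = {
  Spanish: 900, French: 800, Finnish: 60,
  Serbian: 50, Slovak: 40, Filipino: 30
};

// Sort by request frequency (descending), then alphabetically.
function sortLanguages(langs) {
  return langs.slice().sort(function (a, b) {
    var diff = (requestCounts[b] || 0) - (requestCounts[a] || 0);
    return diff !== 0 ? diff : a.localeCompare(b);
  });
}
```

With these counts, typing "S" would reach Spanish first while Serbian, Slovak, and the rest remain findable in a stable order below it.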
Now, if only there were some google employees who came to this website ;-)
I'd try to detect their locale through browser/ISP metadata if I could, then default to that; but most people expect an alphabetically ordered list of languages. What if they're looking for Serbian, but after they hit 'S' once they get Spanish, with no Serbian in sight? They might assume that there is no Serbian, since it's not where they expect it (before Spanish), and leave. That'd be bad.
I would agree with most previous responses: as plain as this page is, there is not much you could do up front; the languages should stay sorted alphabetically.
But there are some things one could do in the background, such as storing the last settings or letting you bookmark translation settings.
Don't forget that some browsers will let you type multiple letters for shortcuts; e.g., in Firefox you can type 'SP' to get to Spanish.
