Special character handling in JavaScript

I have some web pages with forms, and users can enter anything they want into them.
I want special characters (both those visible on the keyboard and those that are not) to be saved to the database and retrieved exactly as entered.
Any suggestions ?

First of all, define what counts as a "special character", since by itself that description means nothing beyond "I think this might be handled differently."
Secondly, you shouldn't have to do anything extra in order to store these "special" characters (I'm guessing they're non-ASCII NLS characters) in the database - so long as the database supports these characters (you'll likely need to define your column as nvarchar). If the database doesn't support them at all, you'll have to store binary streams as BLOBs and just do all the decoding within your application.
So, as your question stands at the moment, my answer is simply:
Save the Unicode strings to a Unicode column in the database
Load this column from the DB at a later point to retrieve them as-is.
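For example, with Node talking to SQL Server, the round trip might look like the following. This is a minimal sketch assuming the mssql package; the table, column, and connection config are made up for illustration:
const sql = require('mssql');

// config (server, user, password, database) is assumed to exist elsewhere
async function roundTrip(userInput) {
    await sql.connect(config);
    // A parameterized insert into an nvarchar column keeps the Unicode intact
    await new sql.Request()
        .input('val', sql.NVarChar, userInput)
        .query('INSERT INTO user_text (content) VALUES (@val)');
    // Reading it back returns the same string, special characters and all
    const result = await new sql.Request()
        .query('SELECT content FROM user_text');
    return result.recordset[0].content;
}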
If you've tried this, and are coming across any particular problems, then post them. But if you're merely investigating before implementing, I can't see why you'd run into issues.

How to check if the input is emoji without using regular expression?

I'm new to web development and just trying to check if the user input contains emojis without using regex for performance reasons.
Is there a way to do it with JavaScript on the front end or with Java on the backend?
Java does not identify emoji as such
The official Unicode Character Database does not identify emoji characters as such, according to Annex A of Unicode® Technical Standard #51 UNICODE EMOJI.
I suppose that is why we do not see any kind of isEmoji method on the Java 13 class, Character.
Roll-your-own
According to that Annex A, there are emoji-data data files available describing aspects of emoji characters. If you are sufficiently motivated to reliably identify emoji characters, I suggest reading that Technical Standard, and consider importing the data from those files to identify the code points of emoji. There may well be ranges of numbers that the Unicode Consortium uses to cluster the emoji characters (a JavaScript sketch of this range-based approach appears at the end of this answer).
Keep in mind that the Unicode Consortium in recent years has been frequently adding more and more emoji. So you will be chasing a moving target, needing updates.
You may be able to narrow down your ranges with the named ranges of code points defined in Character.UnicodeBlock.
I am guessing that Character.OTHER_SYMBOL may help, as the emoji I perused are so tagged, according to the handy macOS app, UnicodeChecker.
FYI, the Unicode Consortium does publish a list of emoji: Full Emoji List, v12.0.
By the way, the CLDR published by the Unicode Consortium and used by default in recent versions of Java defines how to sort emoji. Yes, emoji have sort-order: human faces before cat faces, and so on. The code points for emoji characters are assigned rather arbitrarily, so do not go by that for sorting.
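If you take the roll-your-own approach mentioned above on the JavaScript side, a sketch might look like this. The ranges shown are a partial, illustrative subset; the UTS #51 data files should drive the real list:
// Partial, illustrative ranges only - derive the full set from the UTS #51 data files.
const EMOJI_RANGES = [
    [0x1F600, 0x1F64F], // Emoticons
    [0x1F300, 0x1F5FF], // Miscellaneous Symbols and Pictographs
    [0x1F680, 0x1F6FF], // Transport and Map Symbols
    [0x2600, 0x26FF],   // Miscellaneous Symbols
];

function containsEmoji(input) {
    for (const ch of input) {           // for...of walks full code points
        const cp = ch.codePointAt(0);
        if (EMOJI_RANGES.some(([lo, hi]) => cp >= lo && cp <= hi)) {
            return true;
        }
    }
    return false;
}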
Instead of trying to blacklist emojis, it'd probably be easier to whitelist the characters you do want to allow. If your site is multilingual, you'd have to add the characters of the languages you want to support. It should be relatively simple to loop over each character of your input and see if it's in the list of valid characters.
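A minimal sketch of that whitelist loop, assuming for illustration that only basic Latin text and a little punctuation are wanted:
// Assumes, purely for illustration, that only basic Latin text is allowed;
// add the characters of every language you support.
const ALLOWED = new Set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,!?'-"
);

function isAllowed(input) {
    for (const ch of input) {   // walks code points, so an emoji is one item
        if (!ALLOWED.has(ch)) return false;
    }
    return true;
}

isAllowed("Hello!");  // true
isAllowed("Hi 😀");   // false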
You'll want to do your validation on both the frontend and the backend. You want to do the frontend so you can show feedback to the user immediately, and you have to do validation on the backend so that people can't game your system by opening their browser's console or getting creative. Frontend stuff should never be trusted by the server in general.

@ character became %40 after setting in a $.param

I am using AngularJS in my project. One of the functions in my controller checks whether the email the user entered already exists in the database. If it exists, the system notifies the user that it is already in use. To do that, I use $http with $params. However, even when the email already exists, the system gives no feedback. So I checked what value was being sent by alerting $params.
$scope.pop = function (email) {
    $params = $.param({
        'email': email
    });
    alert($params);
};
I found out that the @ character in the email became %40. For example: I input d_unknown@yahoo.com, and it became d_unknown%40yahoo.com.
I tried to check the original data by:
$scope.pop = function (email) {
    alert(email);
};
And it looks fine, nothing changes.
How can I solve this?
It's the server's job to turn the URL-encoded email address back into its original value. So your JavaScript is working perfectly; there's nothing to do there. Your server code is what needs to be fixed.
And if the application on the server isn't even decoding parameters before it's running the query against the database, it makes me feel like you may have some security issues to fix as well. Make sure that your parameters are decoded and cleaned before they are used in SQL queries. You can read more on cleaning parameters and SQL injection here.
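For instance, with a Node/Express backend (purely illustrative; your server stack may differ), the framework hands you the already-decoded value:
// Purely illustrative - assumes a Node/Express backend and a made-up route.
const express = require('express');
const app = express();

app.get('/check-email', (req, res) => {
    // For ?email=d_unknown%40yahoo.com this logs "d_unknown@yahoo.com":
    // Express has already URL-decoded the parameter for you.
    console.log(req.query.email);
    res.json({ exists: false }); // placeholder response
});

app.listen(3000);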
Well, it's not really a problem, as you can see from RFC 2396, section 2.1:
For original character sequences that contain non-ASCII characters, however, the situation is more difficult. Internet protocols that transmit octet sequences intended to represent character sequences are expected to provide some way of identifying the charset used, if there might be more than one [RFC2277]. However, there is currently no provision within the generic URI syntax to accomplish this identification. An individual URI scheme may require a single charset, define a default charset, or provide a way to indicate the charset used.
However, if you really want to do this, you can use the decodeURIComponent() method to decode the value:
var email = "d_unknown%40yahoo.com";
console.log(decodeURIComponent(email));
That is because all special characters get URL-encoded.
But you can decode the string before you use it with decodeURIComponent() (in JavaScript; PHP and the other languages have equivalent functions too).

Javascript Special Characters coming back incorrectly

There is a page with certain special characters on it, and when retrieving their values via JavaScript I am getting an odd conversion: the character 'Œ' comes back as 'R' and its lower-case version 'œ' comes back as 'S'. Is this a limitation of JavaScript, or could it be the browser? This is from testing in Firefox. Also, the values are being retrieved via a REPL client (JSSh/MozRepl), so it may be an issue with these clients themselves rather than the browser.
You likely have an encoding problem somewhere. There are many opportunities to mis-handle the encoding of text. If you post some code, we might be able to help you find it.
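For what it's worth, the specific corruption you describe matches the code point being truncated to its low byte somewhere in the pipeline: 'Œ' is U+0152 and 0x52 is 'R', while 'œ' is U+0153 and 0x53 is 'S'. You can reproduce the symptom in JavaScript:
// 'Œ' is U+0152; dropping the high byte leaves 0x52, which is 'R'
const truncated = 'Œ'.charCodeAt(0) & 0xFF;
console.log(String.fromCharCode(truncated)); // "R"
// Likewise 'œ' (U+0153) & 0xFF gives 0x53, which is "S"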
Output streams aren't scriptably safe for non-ASCII characters, so you will need to wrap the stream in an nsIBinaryOutputStream, an nsIUnicharOutputStream, or an nsIConverterOutputStream.

Character Encoding: â?

I am trying to piece together the mysterious string of characters â?? I am seeing quite a bit of in our database. I am fairly sure this is a result of conversion between character encodings, but I am not completely positive.
The users are able to enter text (or cut and paste) into an Ext JS rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters...
Is there any way to decode these back to their original meaning if I am able to discover the correct encoding, or is there a loss of bits or bytes that has occurred through the conversion process?
Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding follow where the user copied from?
Thank you
The website is UTF-8.
We are using MS SQL Server 2005;
SELECT serverproperty('Collation') -- Server default collation.
Latin1_General_CI_AS
SELECT databasepropertyex('xxxx', 'Collation') -- Database default
SQL_Latin1_General_CP1_CI_AS
and the column:
Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
text         varchar  no        -1                   yes       no                  yes                   SQL_Latin1_General_CP1_CI_AS
The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters.
So this may be the root cause of the problem... and not an easy one to solve on our end.
â is encoded as 0xE2 in ISO-8859-1 and windows-1252. 0xE2 is also a lead byte for a three-byte sequence in UTF-8. (Specifically, for the range U+2000 to U+2FFF, which includes the windows-1252 characters –—‘’‚“”„†‡•…‰‹›€™).
So it looks like you have text encoded in UTF-8 that's getting misinterpreted as being in windows-1252, and displays as a â followed by two unprintable characters.
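You can reproduce that misinterpretation in a browser console (TextDecoder accepts the windows-1252 label):
// Encode a right single quote (U+2019) as UTF-8: the bytes are [0xE2, 0x80, 0x99]
const bytes = new TextEncoder().encode('\u2019');
// Decode those same bytes as if they were windows-1252
console.log(new TextDecoder('windows-1252').decode(bytes)); // "â€™"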
This is something of an educated guess: you're likely just seeing a naive conversion of Word/PDF documents to HTML (windows-1252 to UTF-8, most likely). If that's the case, probably 2/3 of the mysterious characters from Word documents are "smart quotes" and most of the rest result from its other "smart" editing features: ellipses, em dashes, and so on. PDFs probably have similar features.
I would also guess that if the formatting after pasting into the ExtJS editor looks OK, then the encoding is getting passed along. Depending on the resulting use of the text, you may not need to convert.
If I'm still on base, and we're not talking about internationalization issues, then I can add that there are Word-to-HTML converters out there, but I don't know the details of how they operate, and I had mixed success when evaluating them. There is almost certainly some small information loss or error involved with such converters, since they need to guess at the original intent of the "smart" characters. In my isolated case it was easier to just go back to the users and have them turn off the "smart" features.
The issue is clear: if the browser is good enough, a form in a web page can accept any Unicode character you can type or paste. If the character belongs to the HTML charset, it will be sent as is; if it doesn't, it'll get converted to an HTML entity. SQL Server will then perform its own conversion and silently corrupt your data when a character does not have an equivalent.
There's not much you can do to fully fix it, but you can work around it: let your servlet perform the conversion. This way you have full control over it. You can, for instance, compile a list of the most common non-Latin1 characters users paste (smart quotes, Unicode spaces...), which should be fairly easy to identify from context, and replace them with something better than ?. Or use a library that does this for you. (A sketch of the replacement idea follows below.)
Or you can switch your DB to Unicode :)
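As a sketch of that replacement idea (the character list here is only a starting point, and since the suggestion was to do this in the servlet, read this JavaScript as pseudocode for whatever runs on your server):
// The replacement list is illustrative; extend it with whatever your users paste.
const REPLACEMENTS = new Map([
    ['\u2018', "'"], ['\u2019', "'"],   // smart single quotes
    ['\u201C', '"'], ['\u201D', '"'],   // smart double quotes
    ['\u2013', '-'], ['\u2014', '--'],  // en dash, em dash
    ['\u2026', '...'],                  // ellipsis
]);

function toLatin1Friendly(text) {
    let out = '';
    for (const ch of text) {
        out += REPLACEMENTS.get(ch) ?? ch;  // keep anything not in the map
    }
    return out;
}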
You're storing Unicode data, which uses two bytes per character, in a varchar column, which uses one byte per character; any character that needs two bytes will lose a byte when stored in the DB.
All you need to do is change the varchar column to nvarchar,
and then change the SQL parameters you're using in code, of course.

Network-efficient difference between two strings in Javascript

I have a web application where a client side editor is editing a really really large text which is known on the server side.
The client can make any kind of modifications to this text.
What is the most network-efficient way to transmit the result difference in a way that the server understands? Also, since this will happen on client side (Javascript), I would also like it to be 'fast' (or at least not noticeably slow)
Some scenarios:
User modifies ONE character
User modifies several sentences in random positions
User erases everything and results in a blank text.
I cannot use diff-like syntax since it's not network-efficient: it works on lines, so scenarios 1 and 3 would produce horrible differences (especially the last one, where the result would be larger than the old text itself).
Does anyone have experience in this matter? The user operates on a really large set of data - around 3-5MB of text - and uploading the whole "new" content is a big no-no.
To be clear, I'm looking for a "protocol" of transfer, string comparison is not the issue.
I'm not very familiar with this topic but I can point you to an open source (Apache License 2.0) project which may be very useful.
It is a Diff, Match and Patch library written in several languages, including JavaScript, from a Google engineer, and it is used in several online collaborative editing services.
Here is a list of resources (a brief usage sketch follows the list):
The Diff, Match and Patch project
The MobWrite project (Editor implementation based on the above project)
"Differential Synchronization" (A Google Tech Talk by the engineer)
A simple approach, assuming that you know the copy on the server isn't going to change, would just be to send a list of edits (deletions and additions), with the deletions represented as a start and end index, and the additions represented as a start index and the text to insert.
If you have more than a simple diff algorithm to work with (I'm not sure exactly what you mean by "string comparison is not the issue"), you could also detect moved or copied chunks of text, and send those as the start and end index of the moved or copied piece of text, as well as the destination to insert it.
Note that you'll need to make sure to keep track of whether your indices refer to the original document, or the document as edited so far. An easy approach to avoid this problem is to always perform the edits from the end of the document towards the beginning; then earlier edits won't affect the offsets specified by later edits.
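A minimal sketch of that idea, with a made-up edit format, applying edits from the highest offset down so earlier edits don't shift later ones:
// Edit format (illustrative only):
//   { del: [start, end] }  - remove doc.slice(start, end)
//   { ins: [start, text] } - insert text at start
function applyEdits(doc, edits) {
    // Apply from the end of the document toward the beginning
    const sorted = [...edits].sort((a, b) => pos(b) - pos(a));
    for (const e of sorted) {
        if (e.del) {
            const [start, end] = e.del;
            doc = doc.slice(0, start) + doc.slice(end);
        } else {
            const [start, text] = e.ins;
            doc = doc.slice(0, start) + text + doc.slice(start);
        }
    }
    return doc;
}

function pos(e) { return e.del ? e.del[0] : e.ins[0]; }

applyEdits("hello world", [{ del: [0, 5] }, { ins: [0, "goodbye"] }]);
// -> "goodbye world"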
For an example of an approach like this, see the ed format that diff -e outputs. This is basically input that could be fed into the ed line-oriented text editor. If you want the absolute smallest diffs to send across, you may want to do character-based indexing rather than line-based indexing, but the same basic approach could work.
Any edits the user performs can be efficiently broken down into: delete from X for length Y; insert at X text "whatever". X and Y are offsets in characters from the start of the text; Y is a number of characters; "whatever" is any string of characters. You say you need no help computing the diff, but an example is here; it's richer in its output than you need, but it does identify "removals and insertions", so just change the output part.
The exact format in which you send the data to the server can be tuned, but I don't think there's much mileage in doing that -- pending measurement, I'd start by sending the commands as D for delete or I for insert, the numbers in decimal, the inserted string in quoted form. Once you have some statistics on actual transfers being performed, you can see how much overhead is in the numbers (decimal vs binary) and quotes, but I suspect that may not be all that meaningful (if it proves to be, there are all sort of things you can try, such as giving offsets from the latest point of insertion or deletion, rather than always from the start, to make things faster).
You can sample what the user is doing every few seconds, and just send the incremental changes over those last few seconds (if any) -- this way, each packet you're sending will be small, and if the net connection or the user's computer/browser crash, the user won't have lost much work.
You could just send changes every 500ms: whatever changes were made in the last 500ms get sent, but only when there actually was a change.
You could then send the position of each changed word along with the entire word, with the position measured from the front of the text.
It won't be several sentences' worth, but several words may be involved; if you send them in order of change, the result should be consistent.
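A sketch of that polling loop, assuming editor is a textarea element, computeEdits() is a diff helper like the one sketched earlier, and /sync is a hypothetical endpoint:
// Sample every 500ms and send only when something changed.
let lastSynced = editor.value;
setInterval(() => {
    const current = editor.value;
    if (current === lastSynced) return;   // nothing to send this tick
    const edits = computeEdits(lastSynced, current);
    fetch('/sync', { method: 'POST', body: JSON.stringify(edits) });
    lastSynced = current;
}, 500);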
Because there are so many ways to make edits, even within short periods of time like 500ms (including dragging and dropping, or cutting and pasting, large sections of text around within the document or from outside it), I don't know that any one scheme will cover all scenarios really well. This is certainly a non-answer to your question at face value, but I would weigh the trouble of developing and maintaining something like this against changing the interface to restrict the text size and breaking existing texts into smaller pieces.
Maybe that's not possible in your situation, but if it is, I would guess it would be much less trouble in the end to dodge the issue in this way and just send full documents after an edit.
