JavaScript special characters coming back incorrectly

I have a page containing certain special characters, and when I retrieve their values via JavaScript I get an odd conversion. The character 'Œ' comes back as 'R' and its lower-case version 'œ' comes back as 'S'. Is this a limitation of JavaScript, or could it be the browser? I am testing in Firefox. The values are being retrieved via a REPL client (Jssh/MozRepl), so it may be an issue with these clients themselves rather than the browser.

You likely have an encoding problem somewhere. There are many opportunities to mis-handle the encoding of text. If you post some code, we might be able to help you find it.

Output streams aren't scriptably safe for non-ASCII characters, so you will need to wrap the stream in an nsIBinaryOutputStream, an nsIUnicharOutputStream, or an nsIConverterOutputStream.
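For example, here is a minimal sketch of the nsIConverterOutputStream approach, assuming rawStream is an existing nsIOutputStream you obtained elsewhere in privileged Firefox/XPCOM code:

    // Sketch only: wrap a raw XPCOM output stream so text is written as UTF-8.
    // `rawStream` is assumed to be an existing nsIOutputStream.
    var converter = Components.classes["@mozilla.org/intl/converter-output-stream;1"]
                              .createInstance(Components.interfaces.nsIConverterOutputStream);
    converter.init(rawStream, "UTF-8", 0, 0); // target charset; last two args are buffer size and replacement character
    converter.writeString("Œ and œ should now survive intact");
    converter.close();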

Related

Is there a length limitation when using replace method of a string?

I have a big string (1,116,902 characters long) that I want to process with a regex (a pretty simple one). I get a response from a SOAP server that is encoded in base64, so I just grab the result between the appropriate XML tags and then decode it.
This works for small requests, but when I get a big response back, the callback function of the replace() method is never called. I have tested the string on the regex101 website and it finds the result there, so I wonder if there is a limitation in my JavaScript engine. I'm working on a Wakanda Server V10 that uses WebKit as its JavaScript engine. I cannot provide the string because it contains company-confidential information.
Here is my regex: /xsd:base64Binary">((.|\n)*?)<\/responseData>/
I thought it might be a special character that is not matched by the ((.|\n)*?) group, but then why would regex101 find the result? (So maybe it is the JavaScript engine.)
Can anybody help me?
Thanks
If you can guarantee that there are no tags between your start and end delimiter, which sounds like it might be the case, you could just change your RE to
/xsd:base64Binary">([^<]*)<\/responseData>/
which shouldn't require any backtracking and might work for you.
[^<] simply means everything but the < character. Since there shouldn't be any tags between the opening and closing tags of your section (at least that's what I understand), that will accept everything until you hit your closing tag. The important thing is that the RE engine can tell immediately whether something matches that or not, so no branching or backtracking is required.
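A minimal sketch of that extraction, assuming the SOAP response is held in a string called soapResponse and that a base64 decoder such as atob (or an equivalent in your environment) is available:

    var match = soapResponse.match(/xsd:base64Binary">([^<]*)<\/responseData>/);
    if (match) {
        var payload = match[1];        // the base64 text between the tags
        var decoded = atob(payload);   // or your environment's base64 decoder
    }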

Parsing a JSON string of 50,000+ characters into a JavaScript object

I'm trying to evaluate a string of 50,000+ characters from an AJAX GET request using jQuery. On smaller datasets, the code evaluates it correctly, but Firefox throws the error "Unterminated string literal".
After some digging, I tried using external libraries from JSON.org, replacing \n, \r\n, and \r with an empty string (on the server), and encapsulating the eval() with parentheses.
Here is some of the client-side code (javascript):
http://pastebin.com/wsXuN7tb <- Here I've used an external library to do it
After looking through Firebug, I noticed that the JSON string returned by the server was not complete and was cut off at around 50,000 characters. I know for a fact the server is returning a valid JSON string, because I dumped it to a file before sending it to the client, but the client ends up receiving a truncated version.
Why is this happening? Is there any way around this?
URLs have a length limit that varies from browser to browser. 50,000+ characters is definitely WAY over every browser's limit. For such large data, you should be using a POST instead.
There is quite literally NOTHING you can do about this limit, as it's a browser limit, and not something you can change on the server. The only thing you can do is switch to using POST.
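A minimal sketch of that switch, using the jQuery already mentioned in the question (the /data endpoint and largeString variable are hypothetical stand-ins):

    $.ajax({
        url: "/data",                  // hypothetical endpoint
        type: "POST",
        data: { payload: largeString },
        dataType: "json",              // let jQuery parse the response; no eval() needed
        success: function (obj) {
            // obj is the parsed JavaScript object
        }
    });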
Turns out the NetworkStream I used in my C# server could not have a buffer that large, so I just wrote half of the buffer, flushed it, and wrote the other half.
Thanks for helping, guys.

Character Encoding: â?

I am trying to make sense of the mysterious string of characters â?? that I am seeing quite a bit of in our database. I am fairly sure this is the result of a conversion between character encodings, but I am not completely positive.
Users are able to enter text (or cut and paste) into an Ext JS rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters...
Is there any way to decode these back to their original meaning if I can discover the correct encoding, or have bits or bytes been lost in the conversion process?
Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding depend on where the user copied from?
Thank you
The website is UTF-8.
We are using MS SQL Server 2005.
SELECT serverproperty('Collation') -- Server default collation.
Latin1_General_CI_AS
SELECT databasepropertyex('xxxx', 'Collation') -- Database default
SQL_Latin1_General_CP1_CI_AS
and the column:
Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation
text varchar no -1 yes no yes SQL_Latin1_General_CP1_CI_AS
The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters.
So this may be the root cause of the problem... and not an easy one to solve on our end.
â is encoded as 0xE2 in ISO-8859-1 and windows-1252. 0xE2 is also a lead byte for a three-byte sequence in UTF-8. (Specifically, for the range U+2000 to U+2FFF, which includes the windows-1252 characters –—‘’‚“”„†‡•…‰‹›€™).
So it looks like you have text encoded in UTF-8 that's getting misinterpreted as being in windows-1252, and displays as a â followed by two unprintable characters.
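A quick illustration of that mojibake, as a sketch run in Node.js (an assumption; the original stack is a servlet and SQL Server), showing a right single quote (U+2019) encoded as UTF-8 and then misread as Latin-1/windows-1252:

    const utf8Bytes = Buffer.from("\u2019", "utf8");  // the three bytes <e2 80 99>
    console.log(utf8Bytes.toString("latin1"));        // prints "â" followed by two unprintable characters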
This is something of an educated guess that you're just experiencing a naive conversion of Word/PDF documents to HTML (windows-1252 to UTF-8, most likely). If that's the case, probably two-thirds of the mysterious characters from Word documents are "smart quotes", and most of the rest are the result of its other "smart" editing features: ellipses, em dashes, etc. PDFs probably have similar features.
I would also guess that if the formatting after pasting into the ExtJS editor looks OK, then the encoding is getting passed along. Depending on the resulting use of the text, you may not need to convert.
If I'm still on base, and we're not talking about internationalization issues, then I can add that there are Word to HTML converters out there, but I don't know the details of how they operate, and I had mixed success when evaluating them. There is almost certainly some small information loss/error involved with such converters, since they need to make guesses about the original source of the "smart" characters. In my isolated case it was easier to just go back to the users and have them turn off the "smart" features.
The issue is clear: if the browser is good enough, a form in a web page can accept any Unicode character you can type or paste. If the character belongs to the HTML charset, it will be sent as is. If it doesn't, it'll get converted to an HTML entity. SQL Server will perform the appropriate conversion and silently corrupt your data when a character does not have an equivalent.
There's not much you can do to fully fix it, but you can work around it: let your servlet perform the conversion. This way you have full control over it. You can, for instance, compile a list of the most common non-Latin-1 characters users paste (smart quotes, Unicode spaces...), which should be fairly easy to identify from context, and replace them with something better than ?. Or you can use a library that does this for you.
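A minimal sketch of that replacement idea in JavaScript (the answer suggests doing it in the servlet; the particular mapping below is an assumption based on the common "smart" characters mentioned above):

    function normalizeSmartChars(text) {
        return text
            .replace(/[\u2018\u2019]/g, "'")    // smart single quotes
            .replace(/[\u201C\u201D]/g, '"')    // smart double quotes
            .replace(/\u2013/g, "-")            // en dash
            .replace(/\u2014/g, "--")           // em dash
            .replace(/\u2026/g, "...")          // ellipsis
            .replace(/\u00A0/g, " ");           // non-breaking space
    }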
Or you can switch your DB to Unicode :)
You're storing Unicode data, which uses 2 bytes per character, in a varchar column, which uses 1 byte per character. Any character that needs 2 bytes will lose data when stored in the DB.
All you need to do is change the varchar column to nvarchar,
and then change the SQL parameters you're using in code, of course.

Google AJAX Feed API, Dynamic Feed Control and the Japanese Language

English is fine, but the Japanese feeds are showing invalid characters...
Why am I getting invalid characters in the Japanese feeds?
http://acsjapan.jp/j/index.html
But not in English?
http://acsjapan.jp/
Please help me fix the Japanese feeds.
This is an encoding issue.
You are using (implicit) ISO-8859-1 encoding on your web page. Your AJAX feed serves UTF-8 characters.
This is tricky: I don't think you can make the Google Service deliver its data in the ISO-8859-1 character set. The best way would be to switch your site to UTF-8 - but that may have deeper consequences, and require other changes, especially if you are using a CMS.
Mandatory basic reading: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

Special character handling in JavaScript

I have some web pages. A user can enter anything he wants into the forms on these pages.
I want the special characters (both those visible on the keyboard and those that are not) to be saved in the database and retrieved exactly as entered.
Any suggestions?
First of all, define what counts as a special character - since that description doesn't mean anything beyond "I think this might be handled differently."
Secondly, you shouldn't have to do anything extra in order to store these "special" characters (I'm guessing they're non-ASCII NLS characters) in the database - so long as the database supports these characters (you'll likely need to define your column as nvarchar). If the database doesn't support them at all, you'll have to store binary streams as BLOBs and just do all the decoding within your application.
So, as your question stands at the moment, my answer is simply:
Save the Unicode strings to a Unicode column in the database.
Load this column from the DB at a later point to retrieve them as-is.
If you've tried this, and are coming across any particular problems, then post them. But if you're merely investigating before implementing, I can't see why you'd run into issues.
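On the client side, JavaScript strings are already Unicode, so nothing special is needed beyond making sure the request body is sent as UTF-8. A minimal sketch, assuming a hypothetical /save endpoint and a form field with id "comment" (fetch() is assumed to be available):

    fetch("/save", {                       // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json; charset=utf-8" },
        body: JSON.stringify({ text: document.getElementById("comment").value })
    });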
