JSONP and a very long response with escaped characters - javascript

I've got the following problem: I am sending an AJAX request to a service which returns HTML code. There are Unicode characters in this code, which are escaped with the usual \u.... sequences.
The problem is that this response is very long, and jQuery splits the JSONP response into a few callback calls. That by itself is not the problem; the trouble is when the split lands inside one of those escape sequences, like jsonp463827("...blabhalbha\ud0");jsonp546114("0x8blablabla...");
Then it gives me an error saying Hexcode expected, because the escape sequence has been cut in two.
Is there any solution to prevent this?

What exactly is being passed back? Example address?
I don't think jQuery is doing the splitting here. It is in the nature of JSONP that the service must return a block of JavaScript statements for direct execution in a <script> tag. The client side can't get hold of that content to split or otherwise process it, because that would be a cross-site-scripting hole, the very issue JSONP is designed to get around.
I think you'll probably need to look at that service. I'm not sure why it would be trying to split a response into several function calls as there is no limit on the length of the string passed in. The limit that you might hit is Firefox's script parser stack limit (see bug 420869), but that applies to the whole of the returned script block, so splitting into several function calls won't help.
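For reference, a typical jQuery JSONP request looks something like the sketch below (the URL is a placeholder); the whole response arrives as one script block that invokes the generated callback, so any splitting would have to be happening on the service side:
$.ajax({
    url: 'https://example.com/service',   // placeholder endpoint
    dataType: 'jsonp',                     // jQuery generates the jsonpNNN callback name itself
    success: function (data) {
        console.log(data);                 // the single callback invocation delivers the whole payload
    }
});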


Error parsing JSON with escaped quotes inside single quotes

I have a variable var jsonData = '{"Key":"query","Value":"dept=\"Human Resources*\"","ValueType":"Edm.String"}';
I'm trying to parse the variable with JSON.parse(jsonData); however, I'm getting an error: "Unexpected token H in JSON at position 30." I can't change how the variable is returned, so here's what I think I understand about the problem:
JSON.parse(jsonData) errors out because the escaped double quotes never reach it as escaped: the \" sequences are consumed by the single-quoted JavaScript string literal that encloses them.
jsonData.replace(/\\"/g, '\\\\"') and the other combinations I've tried aren't finding the \" because JavaScript treats \" in the literal as just ".
QUESTION: How can I parse this properly, either by replacing the escaped quotes with something JSON.parse() can handle or by using something else to parse this correctly? I'd like to stick with JSON.parse() on account of its simplicity, but I'm open to other options.
EDIT: Unfortunately I can't change the variable at this stage; it is just a small example of a larger JSON response. This is a temporary workaround until the app is granted access to the API, but I need something in the interim (the IT dept can be slow). What I'm doing now is hitting the API address directly to get a large JSON response back, with the browser using the user's OAuth cookies for authentication. I then copy and paste the JSON response into my application so I can work with the data. The response is riddled with escaped quotes, and manually editing the text would be laborious; I'm trying to avoid a round trip through a text processor before pasting into the variable.
You should escape the backslash character in your code by prefixing it with another backslash. So the code becomes:
var jsonData = '{"Key":"query","Value":"dept=\\"Human Resources*\\"","ValueType":"Edm.String"}';
The first backslash is there so that JavaScript puts an actual backslash into the string; that backslash must be in the string so that the JSON parser knows the quote that follows is part of the value rather than the end of it.
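A quick check with the value from the question: the doubled backslash survives the JavaScript string literal as \", which is exactly what JSON expects for a quote inside a string:
var jsonData = '{"Key":"query","Value":"dept=\\"Human Resources*\\"","ValueType":"Edm.String"}';
var obj = JSON.parse(jsonData);
console.log(obj.Value);   // dept="Human Resources*"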
The unfortunate thing about this situation is that in the JavaScript code there is no difference between
var jsonData = '{"Key":"query","Value":"dept=\"Human Resources*\"","ValueType":"Edm.String"}'
and
var jsonData = '{"Key":"query","Value":"dept="Human Resources*"","ValueType":"Edm.String"}'.
You could hardcode information you have about the JSON into the way you process it. For example, you could replace occurrences of the regex ([\[\{,:]\s+)\" with $1\", but this would fail if the string Human Resources* could also end in a :, { or ,. It could also potentially cause security issues.
In my opinion, the best way to solve your problem would be to put the JSON response in a JSON file somewhere, so that it can be read in by the JavaScript code that needs to use it without ever passing through a string literal.
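A minimal sketch of that file-based approach, assuming the response has been saved next to the page as response.json (a hypothetical filename):
fetch('response.json')                             // hypothetical filename
    .then(function (res) { return res.json(); })   // parsed straight from the response body, no string literal involved
    .then(function (data) {
        console.log(data.Value);                   // dept="Human Resources*"
    });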
I think you can also dispense with the initial String to represent the JSON object:
Use a standard JSON object.
Make whatever changes you need on that object.
Call JSON.stringify(YOUR_OBJECT) for a String representation.
Then, JSON.parse(…) when you need an object again.
That should satisfy your initial question, keep your current (escaped) String values, and give you some room to make a lot of changes.
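A short sketch of that sequence with the object from the question:
var obj = {
    Key: 'query',
    Value: 'dept="Human Resources*"',
    ValueType: 'Edm.String'
};
var asString = JSON.stringify(obj);   // produces the escaped form: ...,"Value":"dept=\"Human Resources*\"",...
var roundTrip = JSON.parse(asString); // and back to a plain object
console.log(roundTrip.Value);         // dept="Human Resources*"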
To escape your current String value:
obj["Value"] = 'dept=\"Human Resources*\"'
Alternatively, you can nest attributes:
obj["Value"]["dept"] = "Human Resources*"
Which may be helpful for other reasons.
I've rarely worked with JSON in an enterprise or production environment where the sequence above wasn't used (I've never used a purely string representation in production), simply because of how much easier it makes modifying attributes, generating dynamic data, and actually using the JSON programmatically.
Using string representations for what are really attribute key-value pairings often causes headaches later on (for example, when you want to read the Human Resources* value programmatically and use it).
I hope you find that approach helpful!

@ character became %40 after setting in a $.param

I am using AngularJS in my project. One of the functions in my controller checks whether the email entered already exists in the database. If it exists, the system notifies the user that it is already in use. To do that, I have to use $http with $params. However, even if the entered email already exists, no feedback is given by the system. So I checked the value actually being sent by alerting $params.
$scope.pop = function (email) {
    $params = $.param({
        'email': email
    });
    alert($params);
};
I found out that the @ character in the email became %40. For example: I input d_unknown@yahoo.com, and it became d_unknown%40yahoo.com.
I tried to check the original data by:
$scope.pop = function (email) {
    alert(email);
};
And it looks fine, nothing changes.
How can I solve this?
It's the server's job to turn the URL encoded email address back into its original value. So your JavaScript is working perfectly, there's nothing to do there. Your server code is what needs to be fixed.
And if the application on the server isn't even decoding parameters before running the query against the database, it makes me feel like you may have some security issues to fix as well. Make sure that your parameters are decoded and cleaned before they are used in SQL queries. You can read more on cleaning parameters and SQL injection here.
Well, it's not really a problem, as you can see from RFC 2396, section 2.1:
For original character sequences that contain non-ASCII characters, however, the situation is more difficult. Internet protocols that transmit octet sequences intended to represent character sequences are expected to provide some way of identifying the charset used, if there might be more than one [RFC2277]. However, there is currently no provision within the generic URI syntax to accomplish this identification. An individual URI scheme may require a single charset, define a default charset, or provide a way to indicate the charset used.
However, if you really want to decode it on the client side, you could use the decodeURIComponent() method like this:
var email = "d_unknown%40yahoo.com";
console.log(decodeURIComponent(email));
That is because all special characters get URL-encoded.
But you can decode the string before you use it with decodeURIComponent() (in JS; there are equivalent functions for PHP and all the others too).

Is there a length limitation when using replace method of a string?

I have a big string (1,116,902 characters long) that I want to process with a pretty simple regex. I get a response from a SOAP server that is encoded in base64, so I just take the result between the appropriate XML tags and then decode it.
This works for a small request. But when I get a big response back, the callback function of the replace() method is never called. I have tried the string against the regex on the regex101 website and it finds the result, so I wonder if there is a limitation in my JavaScript engine. I'm working on a Wakanda Server V10 that uses WebKit as its JavaScript engine. I cannot provide the string because it contains some enterprise information.
Here is my regex : /xsd:base64Binary">((.|\n)*?)<\/responseData>/
I thought it might be a special character that is not matched by the ((.|\n)*?) group. But then why does regex101 find the result? (So maybe it is the JavaScript engine after all.)
Maybe anybody can help me?
Thanks
If you can guarantee that there are no tags between your start and end delimiter, which sounds like it might be the case, you could just change your RE to
/xsd:base64Binary">([^<]*)<\/responseData>/
which shouldn't require any backtracking and might work for you.
[^<] simply means everything but the < character. Since there shouldn't be any tags between the opening and closing tags of your section (at least that's what I understand), that will accept everything until you hit your closing tag. The important thing is that the RE engine can tell immediately whether something matches or not, so no branching or backtracking is required.
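For example (the SOAP response is shortened to a placeholder here; the real one would carry the full base64 payload):
var re = /xsd:base64Binary">([^<]*)<\/responseData>/;
var soapResponse = '<responseData xsi:type="xsd:base64Binary">SGVsbG8=</responseData>'; // shortened placeholder
var match = re.exec(soapResponse);
if (match) {
    var base64Payload = match[1];   // "SGVsbG8=" here; decode it with whatever base64 helper you already use
}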

Dynamically created JavaScript function not working with long parameter

I have several HTML <a> tags generated programmatically in ASP.NET, each with a JavaScript function call taking a long parameter in its href. One of them is assigned over 20K characters in the backend, but on the browser side the actual link has only 5,239 characters and the JavaScript function call has no closing, so the link never works. I'm thinking about workarounds for this implementation, since it's not a good idea to put that much data in a link, but for now I'm just curious about the cause of the issue.
Examples of the code assigning values to the link:
HtmlAnchor.HRef = "javascript:doSomething('Import','" + strHeader_LineIds + "');"
In this case the variable strHeader_LineIds carries a string over 20k characters.
Example of what I'm actually seeing in client side:
<a id=anchor1 class=class1 href="javascript:doSomething('Import', 'blahblahblahblah....">Link Text</a>
Please note that the JavaScript function call has no closing here. But when I'm debugging the backend I do see the closing of the call.
I guess this issue may have something to do with the browser's URL limit? I am using IE, and I learned here that IE has a maximum URL length of 2,083 characters. But how can the link show up with 5,239 characters?
I've had a similar issue with dynamic JavaScript functions created in code and then called. I found that I had to play with swapping out single quotes in the JavaScript function for double quotes, or escaping the quotes.
Then again, just reading your post, it could be a limit issue.
Have you tried assigning the long value to an element in the background and then referencing that element as part of the JavaScript? I know IE gets funny with spaces in passed-in parameters.
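A minimal sketch of that "park it on an element" idea (the id and data attribute name are hypothetical): keep the payload out of the href entirely and read it in a click handler, so the URL length limit never comes into play:
<a id="anchor1" href="#" data-lineids="...the 20K characters go here...">Link Text</a>

document.getElementById('anchor1').addEventListener('click', function (e) {
    e.preventDefault();
    doSomething('Import', this.getAttribute('data-lineids'));
});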
I think I found an answer to the issue though. According to this article:
JavaScript URIs
The JavaScript protocol is used for bookmarklets (aka favlets), a lightweight form of extensibility that permits a user to click a button and run some stored JavaScript on the currently loaded page. In IE9, the team did some work to relax the length limit (from ~260 characters, if I recall correctly) to something significantly larger (~5kb, if I recall correctly).
So I just hit the ~5kb limit.

Parsing a JSON string of 50,000+ characters into a javascript object

I'm trying to evaluate a string of 50,000+ characters from an AJAX GET request using jQuery. On smaller datasets the code evaluates it correctly, but Firefox throws an "Unterminated string literal" error.
After some digging, I tried using the external libraries from JSON.org, replacing \n, \r\n, and \r with an empty string (on the server), and wrapping the eval() in parentheses.
Here is some of the client-side code (javascript):
http://pastebin.com/wsXuN7tb <- Here I've used an external library to do it
After looking through Firebug, I noticed that the JSON string returned by the server was not complete and was cut off at around 50,000 characters. I know for a fact that the server is returning a valid JSON string, because I dumped it to a file before sending it to the client, but the client ends up receiving a truncated version.
Why is this happening? Is there any way around this?
URLs have a length limit that varies from browser to browser. 50,000+ characters is definitely WAY over every browser's limit. For data that large, you should be using a POST instead.
There is quite literally NOTHING you can do about this limit, as it's a browser limit and not something you can change on the server. The only thing you can do is switch to using POST.
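For reference, moving the data into a POST body with jQuery looks roughly like this (the URL and parameter name are placeholders):
$.ajax({
    url: '/api/data',                   // placeholder endpoint
    type: 'POST',                       // the data travels in the request body, not the URL
    data: { payload: veryLargeString }, // veryLargeString: whatever was going into the query string before
    dataType: 'json',                   // jQuery parses the JSON response for you
    success: function (obj) {
        console.log(obj);
    }
});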
It turns out the NetworkStream I used in my C# server could not have a buffer that large, so I just wrote half of the buffer, flushed it, and wrote the other half.
Thanks for helping guys.
