What is the maximum length that $.parseJSON() can handle? - javascript

I have a long JSON array that needs to be sent to an HTML5 mobile app and parsed. The whole array is around 700 kB (gzipped to 150 kB) and is 554976 characters long at the moment, but it will grow over time.
Using jQuery to parse the JSON, my app crashes while trying to parse it. So do jsonlint, json parser.fr and every other online JSON validator I try, so I'm guessing eval() is not an option either.
It might be a broad question, but what is the maximum "acceptable" length for a JSON array?
I have already removed as much data as I can from the array; the only option I can think of is to split the array into 3-4 server calls and parse each piece separately in the app. Is there any other option?
EDIT
Thanks to #fabien for pointing out that if jsonlint crashes, there is a problem with the JSON. There was a hidden "space" character in one of the nodes. It was parsed correctly on the server but not on the client.

I've parsed way bigger arrays with jquery.
My first guess is that there's an error in your json.
There are many ways to find it (Sublime Text can highlight the error, but it sometimes takes a while). Try pasting it into a web tool like http://www.jsoneditoronline.org/ and use any of the buttons (to format or to send to the right view). It will tell you where the error is.
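If you'd rather check it from code, here is a minimal sketch (jsonString stands in for the raw text your app receives):

try {
    var data = $.parseJSON(jsonString);
} catch (e) {
    console.log(e.message);           // most browsers report what the parser choked on
    console.log(jsonString.length);   // confirm the full payload actually arrived
}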

Related

Javascript Torrent File Parsing

To date, I've not found any suitable .torrent file parsers coded in javascript, so I began creating my own.
So far, I've been able to recode a PHP bdecoder in javascript. One issue I found is that larger .torrent files (like the second one in http://www.vuze.com/content/channel.php?id=53&name=Scam%20School%20(Quicktime%20HD)) sometimes result in Uncaught RangeError: Maximum call stack size exceeded errors in Chrome. Is there a way to make the bdecode function less recursive?
Along with this issue, I haven't been able to accurately produce an info hash for the '.torrent' files which decoded successfully. I hash the info dictionary beginning right after the info name and ending at the e 'closing tag'. However, this results in incorrect hashes compared to those of actual bittorrent clients. Am I reading the file incorrectly?
Current code:
http://jsfiddle.net/e23YQ/
Thanks.
Reading the torrent file using readAsText or readAsBinaryString (which is deprecated) will not suffice for generating an accurate info hash. In order to keep things as native as possible, you must read the file as an ArrayBuffer and parse it using Uint8Arrays. While parsing, save the beginning and ending offsets of the info dictionary for generating the info hash.
In order to generate an accurate info hash, you must use a javascript implementation of SHA-1 which allows hashing of ArrayBuffers. Rusha seemed to be a viable option. Using digestFromArrayBuffer in Rusha with a slice of the initial ArrayBuffer containing the info dictionary, we get an accurate info hash.
Using an ArrayBuffer eliminated the stack overflow issue I was having earlier.
This is the adjusted code:
http://jsfiddle.net/e23YQ/5/
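For reference, a minimal sketch of the flow described above (FileReader to ArrayBuffer to Uint8Array, with Rusha for the hash); torrentFile, bdecode, infoStart and infoEnd stand in for the file and the offsets computed while decoding:

var reader = new FileReader();
reader.onload = function (e) {
    var buffer = e.target.result;       // ArrayBuffer with the raw .torrent bytes
    var bytes = new Uint8Array(buffer);
    bdecode(bytes);                     // placeholder decoder; record infoStart/infoEnd of the info dictionary while decoding
    var rusha = new Rusha();
    var infoHash = rusha.digestFromArrayBuffer(buffer.slice(infoStart, infoEnd));
};
reader.readAsArrayBuffer(torrentFile);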

Parsing "Streaming" JSON

I have a grid in a browser.
I want to send rows of data to the grid via JSON, but the browser should continuously parse the JSON as it receives it and add rows to the grid as they are parsed. Put another way, the rows shouldn't be added to the grid all at once after the entire JSON object is received -- they should be added as they are received.
Is this possible? Particularly using jQuery, Jackson, and Spring 3 MVC?
Does this idea have a name? I only see bits of this idea sparsely documented online.
You can use Oboe.js which was built exactly for this use case.
Oboe.js is an open source Javascript library for loading JSON using streaming, combining the convenience of DOM with the speed and fluidity of SAX.
It can parse any JSON as a stream, is small enough to be a micro-library, doesn’t have dependencies, and doesn’t care which other libraries you need it to speak to.
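A minimal sketch of how that might look for the grid case (the URL, the 'rows.*' node pattern and addRowToGrid are placeholders):

oboe('/grid/data')
    .node('rows.*', function (row) {
        addRowToGrid(row);              // fires as soon as each row object has streamed in
    })
    .done(function (fullJson) {
        // the complete document, once the response has finished
    });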
You can't parse incomplete or invalid JSON using the browser's JSON.parse. If you are streaming text, it will invariably try to parse invalid JSON at some point, which will cause it to fail. There are streaming JSON parsers out there; you might be able to find something to suit your needs.
The easiest way in your case would be to send a complete JSON document for each row.
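A minimal sketch of that idea, assuming the server writes one complete JSON object per line and the response is read via XHR progress events (addRowToGrid is a placeholder):

var xhr = new XMLHttpRequest();
var buffered = '', seen = 0;
xhr.open('GET', '/grid/rows');
xhr.onprogress = function () {
    buffered += xhr.responseText.substring(seen);
    seen = xhr.responseText.length;
    var lines = buffered.split('\n');
    buffered = lines.pop();             // keep any incomplete trailing line for the next event
    lines.forEach(function (line) {
        if (line.trim()) addRowToGrid(JSON.parse(line));
    });
};
xhr.send();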
Lazy.js is able to parse "streaming" JSON (demo).
Check out SignalR.
http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx
March 2017 update:
Websockets allow you to maintain an open connection to the server that you can use to stream data to the table. You can encode individual rows as JSON objects and send them, and each time one is received you can append it to the table. This is perhaps the optimal way to do it, but it requires websockets, which might not be fully supported by your technology stack. If websockets are not an option, you can try requesting the data in the smallest chunks the server will allow, but this is costly since every request has overhead and you would end up making multiple requests to get the data.
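A minimal sketch of the websocket variant (the endpoint and addRowToTable are placeholders):

var socket = new WebSocket('ws://example.com/rows');
socket.onmessage = function (event) {
    var row = JSON.parse(event.data);   // each message carries one complete JSON-encoded row
    addRowToTable(row);
};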
AFAIK there is no way to start parsing an http request before it's finished, and there is no way to parse a JSON string partially. Also, getting the data is orders of magnitude slower than processing it so it's not really worth doing.
If you need to parse large amounts of data your best bet is to do the streaming approach.

Using Jquery AutoComplete with dictionary list

I have a dictionary list of about 58040 words and I don't think jQuery autocomplete can handle that many words, as the browser hangs.
The list is
words = ['axxx', 'bxxx', 'cxxx', and so on];
$(".CreateAddKeyWords input").autocomplete({ source: words });
Am I doing something wrong?
Is there another free tool that I can use?
Edit
I am using .NET, and I have retrieved the data from the database and can loop through it server side, but how do I send the data back? If it should be JSON, what should the format look like?
Is there another free tool that I can use?
Yes, instead of hardcoding 58040 words in your HTML or javascript file you could load them from a remote datasource using AJAX. Basically you will have a server side script which, when queried with the current user input, will prefilter the result and send it to the client to display suggestions.
You should assign a minimum length of user entry before searching (so it isn't querying with 1 or 2 characters).
$(".CreateAddKeyWords input").autocomplete({ source: words, minLength: 3 });
It's possible the browser is hanging because it is trying to search on the very first character, which is not very useful. ~58k entries is not a large dataset by most standards, especially once you narrow it with a 2-3 character minimum entry.
That's just way too much data to load into your webpage. Require at least 2 letters before searching:
1) set the autocomplete min length to at least 2
2) Create a webpage that returns JSON data - http://mydomain.com/words.php?q={letters}
You can have the filter sort be 'begins with' before 'contains'; or any variation you prefer.
Use that page as your remote data source. With the min length set, autocomplete knows when to query for new data.
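A minimal sketch of steps 1) and 2) wired together, assuming words.php returns a plain JSON array of matching strings (e.g. ["apple","apricot"]):

$(".CreateAddKeyWords input").autocomplete({
    minLength: 2,
    source: function (request, response) {
        // forward the typed letters as the q parameter and hand the returned array to the widget
        $.getJSON("http://mydomain.com/words.php", { q: request.term }, response);
    }
});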
I thought this was an interesting problem, and hacked up a backend service that solves auto-completion.
My code is at https://github.com/badgerman/fastcgi/ (look for complete.c), and the quick and dirty javascript proof of concept from that repository is currently at http://enno.kn-bremen.de/prefix.html (no guarantees that it will stay up for very long, since this is running on the Raspberry Pi in my home).

Parsing a JSON string of 50,000+ characters into a javascript object

I'm trying to evaluate a string of 50,000+ characters from an AJAX GET request using jQuery. On smaller datasets the code evaluates it correctly, but on this one Firefox throws the error "Unterminated string literal".
After some digging, I tried using external libraries from JSON.org, replacing \n, \r\n, and \r with an empty string (on the server), and encapsulating the eval() with parentheses.
Here is some of the client-side code (javascript):
http://pastebin.com/wsXuN7tb <- Here I've used an external library to do it
After looking through firebug, I noticed that the json string returned by the server was not complete, and was cut off at 50,000 or so characters. I know for a fact the server is returning a valid json string because I dumped it to a file before sending it to the client, but the client ends up receiving a truncated version.
Why is this happening? Is there any way around this?
URLs have a length limit that varies from browser to browser. 50,000+ characters is definitely WAY over every browser's limit. For such large data, you should be using a POST instead.
There is quite literally NOTHING you can do about this limit, as it's a browser limit and not something you can change on the server. The only thing you can do is switch to using POST.
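A minimal sketch of the switch to POST with jQuery (the URL and parameters are placeholders; with dataType "json", jQuery parses the response for you):

$.ajax({
    type: "POST",
    url: "/getData",
    data: { id: someId },               // request parameters travel in the body, not the URL
    dataType: "json",
    success: function (data) {
        // data is already a javascript object here
    }
});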
It turns out the NetworkStream I used in my C# server could not have a buffer that large, so I just wrote half of the buffer, flushed it, and wrote the other half.
Thanks for helping, guys.

Parsing very large JSON strings in IE causing problems

I'm parsing a 2MB JSON string in IE8. The JSON.parse line takes a little while to return, and IE8 shows a message asking the user if they want to abort the script.
Is there any way I can suppress this message? (or somehow speed up JSON.Parse)
I know about Microsoft KB175500, however this is not suitable as my target users will not have administrator access to make the registry modifications on their SOE machines.
I had this same question. Apparently there is no way to suppress the message, but there are tricks to make IE think it's still working by using an asynchronous iteration pattern (the original link is now dead).
This comes from an answer to one of my questions:
loop is too slow for IE7/8
If the browser is unhappy with how long the JSON parser is taking, there are only four choices here I know of:
1) Get a faster JSON parser that doesn't take so long.
2) Break up your JSON data into smaller pieces so you are only parsing smaller pieces at once.
3) Modify a JSON parser to work in chunks so it can parse part of the data in one chunk, then on a short timeout parse the next chunk, etc. This will prevent the browser prompt, but it is probably a lot of work to write or modify a JSON parser that works this way (a rough sketch follows this list).
4) If you can be sure the content is safe, then you could see if using eval instead of a JSON parser works around the issue.
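A rough sketch of options 2) and 3) combined, assuming the server can deliver the payload as an array of smaller JSON strings (chunks and done are placeholders); parsing one piece per timer tick keeps IE's long-running-script prompt away:

function parseInChunks(chunks, done) {
    var results = [], i = 0;
    (function next() {
        if (i >= chunks.length) { done(results); return; }
        results.push(JSON.parse(chunks[i++]));   // parse one small piece per tick
        setTimeout(next, 0);                     // yield to the browser between pieces
    })();
}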
