I'm parsing a 2MB JSON string in IE8. The JSON.parse line takes a while to return, and IE8 shows a message asking the user if they want to abort the script.
Is there any way I can suppress this message? (Or somehow speed up JSON.parse?)
I know about Microsoft KB175500; however, it is not suitable because my target users will not have administrator access to make the registry modifications on their SOE machines.
I had this same question. Apparently there is no way to suppress the message, but there are tricks to make IE think it's still working by using an asynchronous iteration pattern (dead link, view comments below).
This comes from an answer to one of my questions:
loop is too slow for IE7/8
If the browser is unhappy with how long the JSON parser is taking, there are only four choices here I know of:
Get a faster JSON parser that doesn't take so long.
Break up your JSON data into smaller pieces so you are only parsing smaller pieces at once.
Modify a JSON parser to work in chunks, so it parses part of the data in one go, then, after a short timeout, parses the next chunk, and so on (see the sketch after this list). This will prevent the browser prompt, but it is probably a lot of work to write or modify a JSON parser that works this way.
If you can be sure the content is safe, then you could see if using eval instead of a JSON parser works around the issue.
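Here is a minimal sketch of the asynchronous-iteration idea behind options 2 and 3 combined, assuming the server can deliver the payload as several smaller, individually valid JSON arrays (all names here are hypothetical, and IE8 would need json2.js as a fallback where native JSON.parse is unavailable):

    function parseInChunks(jsonSlices, onDone) {
        var results = [];
        var i = 0;
        function next() {
            if (i >= jsonSlices.length) {
                onDone(results);
                return;
            }
            // Parse one slice per timer tick so the long-running-script
            // watchdog sees the script yield regularly.
            results = results.concat(JSON.parse(jsonSlices[i]));
            i += 1;
            setTimeout(next, 0);
        }
        next();
    }

Called as parseInChunks(['[1,2]', '[3,4]'], callback), the browser regains control between slices, so the slow-script dialog never trips.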
Related
All:
I wonder how I can make Chrome handle about 4GB of data loaded into it? My use case is:
The front end starts and tries to download a 3GB JSON file and run some calculations over it, but Chrome always crashes.
Any solution for this? Thanks
When you work with large data, the typical optimization rule is:
Don't read all the data at once, and don't save all the data at once.
If your code allows you to perform the calculations step by step, split your JSON into small parts (for example, 50MB each).
Of course it runs more slowly, but this approach keeps memory usage down.
This optimization rule is useful not only for JS in the browser, but for various languages and platforms.
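As a rough illustration of that step-by-step approach, assuming a hypothetical endpoint that serves the dataset in numbered ~50MB parts (the endpoint, field names, and calculation are all made up):

    function processParts(partCount, onDone) {
        var total = 0;   // keep only the running result, not the raw data
        var part = 0;
        function loadNext() {
            if (part >= partCount) {
                onDone(total);
                return;
            }
            fetch('/data?part=' + part)               // hypothetical chunked endpoint
                .then(function (res) { return res.json(); })
                .then(function (records) {
                    for (var i = 0; i < records.length; i++) {
                        total += records[i].value;    // example "calculation"
                    }
                    part += 1;
                    loadNext();                       // previous chunk is now garbage-collectable
                });
        }
        loadNext();
    }

Only the running result survives between iterations, so memory stays roughly at the size of one part rather than 3GB.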
I have a long JSON array that needs to be sent to an HTML5 mobile app and parsed. The whole array is around 700KB (gzipped to 150KB) and 554,976 characters long at the moment, but it will grow over time.
Using jQuery to parse the JSON, my app crashes while trying to parse it. So do jsonlint, json parser.fr, and any other online JSON validator I try, so I'm guessing eval() is not an option either.
Might be a broad question but what is the maximum "acceptable" length for a json array?
I have already removed as much data as I can from the array; the only option I can think of is to split it across 3-4 server calls and parse each part separately in the app. Is there any other option?
EDIT
Thanks to @fabien for pointing out that if jsonlint crashes, there is a problem in the JSON. There was a hidden space character in one of the nodes. It was parsed correctly on the server but not on the client.
I've parsed way bigger arrays with jquery.
My first guess is that there's an error in your json.
There are many ways to find it (Sublime Text can highlight the error, but that sometimes takes a while). Try pasting it into a web tool like http://www.jsoneditoronline.org/ and use any of the buttons (to format, or to send to the right-hand view). It'll tell you where the error is.
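Another quick way to localize the error, sketched here on the assumption that you have the raw text in a variable (the error message wording and position offset vary by browser):

    try {
        var data = JSON.parse(jsonString);      // jsonString: your raw payload
    } catch (e) {
        // e.g. "Unexpected token } in JSON at position 1234" in Chrome,
        // or "JSON.parse: unexpected character ..." in Firefox.
        console.log(e.message);
        // Then inspect the text around the reported position:
        // console.log(jsonString.substr(1224, 20));
    }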
So, I am working on a website for a client, a friend of mine.
He sells geckos, and he has made a website for himself and his sales partner, and I am doing a lot of JavaScript work for him, i.e. AJAX, etc. Well, I got to the available-lizard page for him, and I am making a sort of dynamic gecko selection system. The way this script is supposed to work is: fetch a JSON file (here), which is perfectly good JSON, and then parse the values into multiselects. I'm using the jQuery.get function to do this. All goes well until I try parsing the JSON data: the browser, Firefox, throws the error "Syntax Error: JSON.parse", and Chromium throws the error "Unexpected Token"; the problem also occurred with jQuery.parseJSON().
The error is thrown on line 219 of js.js. The issues are in the function drawCat(data); the page this code is in use on is here.
I hope that this is a quality question; I'm really quite tired to be coding right now, it's pretty late.
Actually, that is not valid JSON. It's easy to check at http://jsonlint.com/. The error is that the root should be either a single JSON object or an array; right now you have several JSON objects as roots.
Update: Danjah is also correct. Even after you fix this, the problem he highlights will still make the JSON invalid, so you need to fix both.
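For illustration, this is the shape of the problem and the usual fix (the objects here are made up, not taken from the actual file):

    // Invalid: several JSON objects at the root
    { "name": "gecko1" }
    { "name": "gecko2" }

    // Valid: a single array as the root
    [
        { "name": "gecko1" },
        { "name": "gecko2" }
    ]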
I don't think it's valid; there's a bunch of missing commas partway down the file. Screenshot attached, as there's no line numbering.
I have a simple piece of data that I'm storing on a server, as a plain string. It is kind of ridiculous, but it looks like this:
name|date|grade|description|name|date|grade|description|repeat for a long time
this string can be up to 1.4MB in size. The idea is that it's a bunch of student records, just strung together with a simple pipe delimiter. It's a very poor serialization method.
Once this massive string is pushed to the client, it is split along the pipes into student records again, using javascript.
I've been timing how long it takes to create and split these strings on the client side. The times are actually quite good; the slowest run I've seen on a few different machines is 0.2 seconds for 10,000 'student records', which gives a final string size of ~1.4MB.
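For reference, the timing harness looks roughly like this (the record contents are made up, and the exact sizes depend on field lengths):

    var record = 'Jane Doe|2009-05-01|A|did well this term';
    var parts = [];
    for (var i = 0; i < 10000; i++) {
        parts.push(record);
    }
    var big = parts.join('|');          // one long pipe-delimited string

    var t0 = new Date().getTime();
    var fields = big.split('|');        // back into individual field values
    var t1 = new Date().getTime();
    // fields.length / 4 records; (t1 - t0) is the split time in ms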
I realize this is quite bizarre, just wondering if there are any inherent problems with creating and splitting such large strings using javascript? I don't know how different browsers implement their javascript engines. I've tried this on the 'major' browsers, but don't know how this would perform on earlier versions of each.
Yeah, I'm looking for any comments on this; it's more for fun than anything else!
Thanks
String splitting of 1.4MB of data is not a problem for decent machines; instead, you should worry about your users' internet connection speed. I've tried to do spell check with an 800KB dictionary (which is half of your data), and the main issue was loading time.
But it looks like your student records data could be put in a database, and you might not need to load everything at load time. So how about paginating the records you show the user, or using AJAX to search for certain user names?
If it's a really large string, it may pay to continuously slice it with 'string'.slice(from, to) so you only process a smaller subset at a time, appending the individual items to the end of the output with list.push() or something similar.
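A hypothetical helper along those lines, which walks the big string one delimiter-aligned window at a time (the chunk size and pipe delimiter are assumptions from the question above):

    function sliceWise(str, chunkSize) {
        var out = [];
        var start = 0;
        while (start < str.length) {
            var end = start + chunkSize;
            if (end < str.length) {
                end = str.indexOf('|', end);        // align to a delimiter
                if (end === -1) { end = str.length; }
            } else {
                end = str.length;
            }
            var chunk = str.slice(start, end);
            out.push.apply(out, chunk.split('|'));  // append this batch of items
            start = end + 1;                        // skip the delimiter itself
        }
        return out;
    }

This trades one huge split for many small ones, which can keep the browser responsive if you also yield with setTimeout between chunks.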
String split methods are probably the most efficient way of doing this though, even in IE. Processing individual characters using string.charAt(x) is extremely slow and will often show a security error as it stalls the browser. Using string split methods would certainly be much faster than splitting using regular expressions.
It may also be possible to encode the data as a JSON array; some newer browsers such as IE8/WebKit/FF3.5 have fast JSON parsing built in via JSON.parse(data). But using eval(JSON) may overflow the browser if there's enough data, so it's probably a bad idea. It may pay to compare the two for performance, though.
A much better approach in a lot of cases is to use AJAX and only load some of the data at once from the server, which would also save download time.
Besides S. Mark's excellent comments about local vs. transfer speed and the tip to re-encode using AJAX, I suggest a (long-term) move away from JavaScript in the browser (assuming that's where it runs) to either a non-browser implementation of JS or possibly another language.
Browser-based JS seems a weak link in a data-transfer chain and nothing I would want to run unmonitored, since browsers are upgraded from time to time and breaking your JS transfer might be an unanticipated side effect!
A more generic question to start: is there a limit to the response size of an AJAX request, if it is a JSON request?
I am passing large amounts of data through a JSON request and running into a 'script stack quota is exhausted' message in FF3. In FF2 the quota was 4MB, but in FF3 it is 640KB. I am wondering if this is somehow JSON-specific. Do normal AJAX requests have a response size limit, perhaps one imposed by the browser? If a non-JSON request doesn't have these same issues with the script stack quota, how could I categorize the data coming back? XML, perhaps... I'm not sure my data would be within the bounds of the W3C spec if I did that.
IIRC this was a bug in FF3 last year, but I believe (yes, checked it here) it's fixed. Looking down the comments, though, there's this note:
Note: this test is dependent upon architecture and available memory. On an x86_64 machine with 2G and a 64-bit build, it will fail with InternalError: script stack space quota is exhausted; however, on an x86_64 with 4G and a 64-bit build it will pass.
The comments also say that this is a pure JS problem, which means that although the data format strictly doesn't matter, very large chunks of JSON might blow the JS stack where XML strings might not. I think you just have to try.
OTOH, it is marked fixed, so there's a question of making sure you're on the latest version of FF too.
I suspect the limits are different for sending vs. receiving data, so I am going to assume the issue is around sending data to the client. JSON is just a data type, really. What you are really doing, I suspect, is making a GET request for a JavaScript script, which should be limited to a reasonable size. The JSON wiki also suggests using the XMLHttpRequest method, which might get around your limit, though you would still need a proxy to avoid cross-domain scripting limitations, and a more sensible MIME type, like html, xml, binary, etc. If you are putting any images in the JSON, remember that they can be links, as there are no cross-domain issues with those requests.
Double-check as well that it is not the number of requests causing you trouble; browsers have limits there too, sometimes as low as 2.
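A sketch of the XMLHttpRequest route, assuming a same-origin URL (the URL is hypothetical, and older browsers may need json2.js instead of native JSON.parse):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/data.json', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // The response arrives as plain text rather than as an
            // executable <script>, and is parsed as data here.
            var data = JSON.parse(xhr.responseText);
            // ... use data ...
        }
    };
    xhr.send(null);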
I think annakata is right.
The text of the error message also suggests that the problem is the depth of your JSON structure, not its size in KB.
What it means is that when you eval your JSON, the JavaScript engine uses a stack while parsing it. That stack hits its maximum limit because of the depth (the number of nested elements) of your JSON structure.
You might want to check whether a somewhat flatter structure is feasible for your requirements.
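To see that depth, not byte size, is the culprit, consider this contrived stress test (whether and where it fails depends on the engine and available stack):

    // Build a document that is small in bytes but 100,000 levels deep.
    var deep = '';
    for (var i = 0; i < 100000; i++) { deep += '['; }
    for (var j = 0; j < 100000; j++) { deep += ']'; }
    // deep is only ~200KB of text, yet eval(deep) (and recursive JSON
    // parsers) can die with a stack/recursion error, while a flat
    // 200KB array of values parses without trouble.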
As a general rule, I try to keep my AJAX data small. If I have to pass a large amount of data, I will retrieve it with multiple calls. So if I am loading a table, I will have one method that tells me how many records are going to be returned, and another method that returns the records in groups of # (usually 20 for me).
The good part about doing this is that I can load the page as I retrieve data, and the user is not waiting for one large payload.
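A sketch of that count-then-pages pattern with jQuery (the endpoints and the appendRowsToTable helper are hypothetical):

    $.getJSON('/records/count', function (res) {
        var pageSize = 20;
        var pages = Math.ceil(res.count / pageSize);
        function loadPage(p) {
            if (p >= pages) { return; }
            $.getJSON('/records?page=' + p + '&size=' + pageSize, function (rows) {
                appendRowsToTable(rows);   // render this batch immediately
                loadPage(p + 1);           // the table fills in as data arrives
            });
        }
        loadPage(0);
    });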
Also, it would be better to use JSON rather than XML. JSON is usually a smaller payload than XML, and many tests have shown that it is easier for the browser to load.
I haven't encountered any tangible limit, but for the sake of user interactivity you should break large data up into multiple calls. Large tables take forever to transfer through AJAX, especially if the user is running IE. Large data + AJAX + IE = IE crash.