Passing an array from JavaScript to ASP.NET - javascript

I have a simple 2D array in JavaScript. I want to pass this array to an ASP.NET page.
I wanted to know which would be the better option, JSON or XML.
The metric is speed and size of the data. In some cases, the array may be quite large.
Thank you.

The metric is speed and size of the data.
JSON is faster than XML in terms of speed. It's smaller than XML in terms of size.
XML is bloated to allow you to represent and validate structures.
However, there are various BSON formats around where people take JSON and then aggressively hand-optimise the storage format. (BSON is binary JSON.)
A BSON spec I picked up from Google
Bison, a JavaScript parser for an arbitrary BSON format.
Now if you really have bottlenecks with transferring data (which you probably don't), you may want to use WebSockets to send data over TCP rather than HTTP, thus reducing the amount of traffic and data you send.
Of course, you only care about that if you're making, say, X000 requests per second.

JSON should be your best bet. Sending the XML datatype can be a big pain, as you would sometimes have to add new configuration just to allow XML to be sent as form data to the server. Generally it is not a recommended practice due to security concerns.
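As a rough illustration of the JSON route on the client, a minimal sketch might look like the one below; the Default.aspx/ReceiveArray page-method URL is only an example, not something from the question, and the matching ASP.NET method would need to accept the wrapped object.

// A simple 2D array to send to the server.
var grid = [
  [1, 2, 3],
  [4, 5, 6]
];

// POST the array as JSON to a hypothetical ASP.NET page method.
var xhr = new XMLHttpRequest();
xhr.open("POST", "Default.aspx/ReceiveArray", true);
xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8");
xhr.onload = function () {
  if (xhr.status === 200) {
    console.log("Server accepted the array");
  }
};
// Wrapping the array in an object matches how ASP.NET page methods usually expect parameters.
xhr.send(JSON.stringify({ data: grid }));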

Related

Efficiently send a big array of coordinates using Socket.io

I was testing the performance of my node.js server and I noticed that this small piece of code takes too much time:
socket.emit("a",{a1:something,a2:veryBigArray});
The problem is that my array is very big and socket.io must encode it to JSON.
My array is something like:
veryBigArray=[{x:0,y:0},{x:0,y:1},{x:1,y:1},{x:1,y:2}.......];
I am mainly interested in server performance. I want the server to be able to continue with its work as soon as possible.
I generate the array before sending it to the client, so I have no problem completely changing the structure of the data being sent. Maybe the array could somehow be compressed. I have read about ArrayBuffer.
What is the best (fastest for the server) way of sending a big array of coordinates to the client (browser) using Socket.io?
If you must encode the array to JSON then I don't think an ArrayBuffer will help. However, if you know the precise data structure of the array to be sent and can predict it server-side, you can come up with your own efficient encoding/decoding schema.
In your example, your data is simply an array of x and y value pairs where each value is an integer. An extremely simple but possibly fruitful approach would be to strip the data of the predictable (unnecessary) x and y keys and encode it as a simple CSV (comma-separated value) string.
For example:
[{x:0,y:0},{x:0,y:1},{x:1,y:1},{x:1,y:2}]
would encode as the string:
0,0,0,1,1,1,1,2
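A minimal sketch of that CSV idea, assuming every element is an {x, y} pair of numbers (the function names are made up):

// Server side: strip the x/y keys and join the numbers into one CSV string.
function encodeCoords(points) {
  var parts = [];
  for (var i = 0; i < points.length; i++) {
    parts.push(points[i].x, points[i].y);
  }
  return parts.join(",");
}

// Client side: rebuild the {x, y} objects from the CSV string.
function decodeCoords(csv) {
  var nums = csv.split(",").map(Number);
  var points = [];
  for (var i = 0; i < nums.length; i += 2) {
    points.push({ x: nums[i], y: nums[i + 1] });
  }
  return points;
}

// encodeCoords([{x:0,y:0},{x:0,y:1}]) === "0,0,0,1"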
The most efficient approach, however, would probably be to abandon socket.io for your own custom websocket interface that can send binary data directly instead of encoding it as JSON. JSON will inherently be radically inefficient for sending a large dataset compared to sending encoded binary.
Edit: It looks like socket.io can send binary data so I would explore that along with some kind of efficient encoding/decoding scheme tailored to your dataset.
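For example, the pairs could be packed into a flat typed array and the underlying ArrayBuffer emitted directly, which socket.io transmits as binary rather than JSON text; a rough sketch, with the "coords" event name made up:

// Server: pack the {x, y} pairs into a flat Int32Array and emit its buffer.
function sendCoords(socket, veryBigArray) {
  var packed = new Int32Array(veryBigArray.length * 2);
  for (var i = 0; i < veryBigArray.length; i++) {
    packed[2 * i] = veryBigArray[i].x;
    packed[2 * i + 1] = veryBigArray[i].y;
  }
  socket.emit("coords", packed.buffer); // sent as binary, not JSON text
}

// Client: view the received ArrayBuffer as an Int32Array again.
socket.on("coords", function (buffer) {
  var packed = new Int32Array(buffer);
  // packed[0], packed[1] are the first x/y pair, and so on.
});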

What is the most efficient way in JavaScript to parse huge amounts of data from a file

What is the most efficient way in JavaScript to parse huge amounts of data from a file?
Currently I use JSON.parse to deserialize an uncompressed 250 MB file, which is really slow. Is there a simple and fast way to read a lot of data in JavaScript from a file without looping through every character? The data stored in the file is just a few floating-point arrays.
UPDATE:
The file contains a 3D mesh with 6 buffers (vertices, UVs, etc.). The buffers also need to be presented as typed arrays. Streaming is not an option because the file has to be fully loaded before the graphics engine can continue. Maybe a better question is how to transfer huge typed arrays from a file to JavaScript in the most efficient way.
I would recommend a SAX-based parser or a stream parser for this kind of thing in JavaScript.
DOM parsing would load the whole thing into memory, and that is not the way to go for large files like the one you mentioned.
For JavaScript-based SAX parsing (of XML) you might refer to
https://code.google.com/p/jssaxparser/
and
for JSON you might write your own; the following link demonstrates how to write a basic SAX-based parser in JavaScript:
http://ajaxian.com/archives/javascript-sax-based-parser
Have you tried encoding it as binary and transferring it as a blob?
https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Sending_and_Receiving_Binary_Data
http://www.htmlgoodies.com/html5/tutorials/working-with-binary-files-using-the-javascript-filereader-.html#fbid=LLhCrL0KEb6
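A rough sketch of that route: fetch the file as a raw ArrayBuffer and create typed-array views over it. The buffer layout (offsets, counts) is invented here for illustration; a real mesh file would need a small header describing it.

// Fetch the mesh file as raw binary instead of JSON text.
var xhr = new XMLHttpRequest();
xhr.open("GET", "mesh.bin", true);
xhr.responseType = "arraybuffer";
xhr.onload = function () {
  var buffer = xhr.response; // ArrayBuffer, no text parsing involved
  // Example layout only: the first 3000 floats are vertices, the next 2000 are UVs.
  // In a real file these counts and offsets would come from a header.
  var verts = new Float32Array(buffer, 0, 3000);
  var uvs = new Float32Array(buffer, 3000 * 4, 2000);
  // Hand the typed arrays straight to the graphics engine.
};
xhr.send();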
There isn't a really good way of doing that, because the whole file is going to be loaded into memory, and we all know that browsers all have big memory issues with that. Could you not instead add some paging for viewing the contents of that file?
Check if there are any plugins that allow you to read the file as a stream, that will improve this greatly.
UPDATE
http://www.html5rocks.com/en/tutorials/file/dndfiles/
You might want to read about the new HTML5 APIs for reading local files. You will still have the issue of downloading 250 MB of data, though.
I can think of one solution and one hack.
SOLUTION:
Extending the idea of splitting the data into chunks: it boils down to the HTTP protocol. REST rests on the notion that HTTP has enough "language" for most client-server scenarios.
On the client you can set a request header such as Content-Length to establish how much data you need per request (see the sketch after this list).
Then the backend has some options (http://httpstatus.es):
Reply with a 413 if the server is simply unable to get that much data from the db
417 if the server is able to reply, but not within the requested limit (Content-Length)
206 with the provided chunk, letting the client know "there is more where that came from"
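A minimal client-side sketch of that chunking, using the standard Range request header rather than a custom one (the /data URL and the byte ranges are made up):

// Ask the server for one slice of the resource at a time.
function fetchChunk(start, end, onChunk) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/data", true);
  xhr.setRequestHeader("Range", "bytes=" + start + "-" + end);
  xhr.onload = function () {
    if (xhr.status === 206) {
      onChunk(xhr.responseText, true);   // partial content, more to fetch
    } else if (xhr.status === 200) {
      onChunk(xhr.responseText, false);  // server ignored Range, sent it all
    }
  };
  xhr.send();
}

// Usage: pull the first megabyte, then keep going while the server answers 206.
fetchChunk(0, 1024 * 1024 - 1, function (chunk, more) { /* ... */ });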
HACK:
Use a WebSocket to get the binary file, then use the HTML5 File API to load it into memory.
This is likely to fail, though, because it's not the download causing the problem but the parsing of an almost-endless JS object.
You're out of luck in the browser. Not only do you have to download the file, but you'll have to parse the JSON regardless. Parse it on the server, break it into smaller chunks, store that data in the db, and query for what you need.

Json compression for transfer

I was wondering what the current state of JavaScript-based JSON compression is. Are there any libraries currently available that allow compressing JSON, either by replacing long names with single characters or by some other method?
Someone has implemented HPack in JavaScript, which could really improve JSON data set sizes, assuming your data set is homogeneous.
Since your emphasis is on transfer, rather than storage, don't forget to use things like gzip and to minimise your JSON. Those should be the first steps before adding yet more compression overhead.
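Usually that just means letting the web server gzip the response, but if you do want to compress in JavaScript before transfer, a library such as pako can deflate the minified JSON string. This sketch assumes pako is loaded; it is not something from the question:

// Assumes the pako library (https://github.com/nodeca/pako) is available.
var payload = { longPropertyName: [1, 2, 3], anotherLongName: "value" };

// Minify first (JSON.stringify emits no whitespace), then gzip to a Uint8Array.
var json = JSON.stringify(payload);
var compressed = pako.gzip(json);

// ...transfer `compressed` as binary, e.g. as an XHR body...

// On the receiving side, decompress back to the original object.
var restored = JSON.parse(pako.ungzip(compressed, { to: "string" }));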

Parsing "Streaming" JSON

I have a grid in a browser.
I want to send rows of data to the grid via JSON, but the browser should continuously parse the JSON as it receives it and add rows to the grid as they are parsed. Put another way, the rows shouldn't be added to the grid all at once after the entire JSON object is received -- they should be added as they are received.
Is this possible? Particularly using jQuery, Jackson, and Spring 3 MVC?
Does this idea have a name? I only see bits of this idea sparsely documented online.
You can use Oboe.js which was built exactly for this use case.
Oboe.js is an open source Javascript library for loading JSON using streaming, combining the convenience of DOM with the speed and fluidity of SAX.
It can parse any JSON as a stream, is small enough to be a micro-library, doesn’t have dependencies, and doesn’t care which other libraries you need it to speak to.
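A minimal sketch of how that looks; the /grid-rows URL and the rows.* pattern stand in for whatever your endpoint and JSON shape actually are:

// Each matching node fires as soon as it has been parsed from the stream,
// so rows can be appended to the grid before the response has finished.
oboe("/grid-rows")
  .node("rows.*", function (row) {
    addRowToGrid(row);   // your own grid-append function
  })
  .done(function () {
    console.log("All rows received");
  });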
You can't parse incomplete or invalid JSON using the browser's JSON.parse. If you are streaming text, it will invariably try to parse invalid JSON at some point, which will cause it to fail. There are streaming JSON parsers out there; you might be able to find something to suit your needs.
The easiest way in your case would still be to send a complete JSON document for each row.
Lazy.js is able to parse "streaming" JSON (demo).
Check out SignalR.
http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx
March 2017 update:
Websockets allow you to maintain an open connection to the server that you can use to stream data to the table. You could encode individual rows as JSON objects and send them, and each time one is received you can append it to the table. This is perhaps the optimal way to do it, but it requires using websockets, which might not be fully supported by your technology stack. If websockets are not an option, then you can try requesting the data in the smallest chunks the server will allow, but this is costly since every request has an overhead and you would end up doing multiple requests to get the data.
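A rough sketch of that websocket approach, where each message carries one row as a standalone JSON document (the URL and the helper function are placeholders):

var ws = new WebSocket("wss://example.com/grid-rows");

ws.onmessage = function (event) {
  var row = JSON.parse(event.data);   // one complete JSON object per message
  appendRowToTable(row);              // your own DOM/grid append function
};

ws.onclose = function () {
  console.log("Server finished streaming rows");
};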
AFAIK there is no way to start parsing an HTTP response before it's finished, and there is no way to partially parse a JSON string. Also, getting the data is orders of magnitude slower than processing it, so it's not really worth doing.
If you need to parse large amounts of data your best bet is to do the streaming approach.

ajax request browser limit

A more generic question to start: is there a limit to the response size of an AJAX request, if it is a JSON request?
I am passing large amounts of data through a JSON request and running into a 'script stack quota is exhausted' message in FF3. In FF2 the quota was 4 MB, but in FF3 it is 640 KB. I am wondering if this is somehow JSON-specific. Do normal AJAX requests have a response size limit? One that may be imposed by the browser? If a non-JSON request doesn't have these same issues with the script stack quota, how could I categorize the data coming back? XML perhaps... I'm not sure if I would be within the bounds of the W3C spec with my data to do so.
IIRC this was a bug in FF3 last year, but I believe (yes, checked it here) it's fixed. Looking down the comments, though, there's this note:
Note: this test is dependent upon architecture and available memory. On an x86_64 machine with 2G and a 64-bit build, it will fail with InternalError: script stack space quota is exhausted; however, on an x86_64 with 4G and a 64-bit build it will pass.
The comments also say that this is a pure JS problem, which means that although the data format strictly speaking will not matter, very large chunks of JSON might blow the JS stack where XML strings might not. I think you just have to try.
OTOH, it is marked fixed, so there's a question of making sure you're on the latest version of FF too.
I suspect the limits are different depending on whether you are sending or receiving data, so I am going to assume it is about sending data to the client. JSON is just a data type, really. What you are really doing, I suspect, is making a GET request for a JavaScript script, which should be limited to a reasonable size. The wiki for JSON also says to use the XMLHttpRequest method, which might get around your limit, but you would still need a proxy to avoid cross-domain scripting limitations and a more sensible mime-type, like html, xml, binary, etc. If you are putting any images in the JSON, remember that they can be links, as there are no cross-domain issues with those requests.
Double check as well that it is not the number of requests causing you trouble; browsers have limits there too, sometimes as low as 2.
I think annakata is right.
The text of the error message also suggests that the problem is occurring due to the depth of your JSON structure, not its size in KB.
What it means is that when you eval your JSON, the JavaScript engine uses a stack while parsing it. That stack is hitting its maximum limit due to the depth (number of nested elements) of your JSON structure.
You might want to check whether a somewhat flatter structure is feasible for your requirements.
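For instance, a linked-list style structure encoded as nested objects makes the parse depth grow with the number of items, while the same data as a flat array keeps it constant; a contrived illustration of flattening before sending:

// Deeply nested: {"value":1,"next":{"value":2,"next":{"value":3,"next":null}}}
// Flat equivalent: [1, 2, 3]

// Convert the nested form to the flat one before serializing.
function flatten(node) {
  var values = [];
  while (node) {
    values.push(node.value);
    node = node.next;
  }
  return values;
}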
As a general rule I try to keep my AJAX data small. If I have to pass a large amount of data I will retrieve it with multiple calls. So if I am loading a table, I will have one method that will tell me how many records are going to be returned, and another method to return me the records in groups of # (usually 20 for me).
The good part about doing this is that I can load the page as I retrieve data, and the user is not waiting for one large payload.
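A sketch of that pattern with two hypothetical endpoints, /records/count and /records?page=N&size=20:

// Ask how many records there are, then pull them down 20 at a time,
// rendering each page as it arrives so the user never waits on one big payload.
function loadTable() {
  getJson("/records/count", function (total) {
    var pageSize = 20;
    var pages = Math.ceil(total / pageSize);
    for (var page = 0; page < pages; page++) {
      getJson("/records?page=" + page + "&size=" + pageSize, renderRows);
    }
  });
}

// Tiny JSON-over-XHR helper; renderRows would append the rows to the table.
function getJson(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onload = function () { onDone(JSON.parse(xhr.responseText)); };
  xhr.send();
}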
Also, it would be better to use JSON rather than XML. JSON is usually a smaller payload than XML, and many tests have shown that it is easier for the browser to load.
I haven't encountered any tangible limit, but for the sake of user interactivity you should break the large data up into multiple calls. Large tables take forever to transfer through AJAX, especially if the user is running IE. Large data + Ajax + IE = IE crash.
