JSON design best practices [closed] - javascript

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have designed a JSON representation of a mailbox so that I can look up mails easily, for example mailjson[UID].Body.
However, after looking at AngularJS and Ember, templating MVC JS frameworks, it seems that the JSON should be in this format:
[{
    "id": 1,
    "body": "Blah blah blah..."
},
{
    "id": 2,
    "body": "More blah foo blah"
},
{
    "id": 3,
    "body": "Hopefully you understand this example"
}]
And then there is some findAll(id) function that iterates through the JSON to grab the item with the id one wants. So now I'm wondering: does my JSON design have merit? Am I doing it wrong? Why don't people use the dict-lookup design I'm using with my JSON?
I'd be grateful for any other tips on good data structure design.
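To make the trade-off concrete, here is a minimal sketch of both designs; findById stands in for the framework's findAll, and all names are illustrative:

```javascript
// Array-of-objects format, as the MVC frameworks expect.
var mails = [
    { id: 1, body: "Blah blah blah..." },
    { id: 2, body: "More blah foo blah" }
];

// findAll-style lookup: a linear scan through the array.
function findById(list, id) {
    return list.filter(function (m) { return m.id === id; })[0];
}

// Dict-lookup design: one object keyed by id, as in mailjson[UID].
var mailjson = {};
mails.forEach(function (m) { mailjson[m.id] = m; });
```

The array form is what the templating engines iterate over; the dict form gives constant-time lookup but loses ordering guarantees.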

Best practice for storing a big table in JSON is to use an array.
The reason is that when a JSON array is parsed into an in-memory array, there is no speed penalty for building a map. If you need an in-memory index on several fields for quick access, you can build it during or after loading. But if you store the JSON the way you did, you have no choice about loading quickly: the parser will always have to build that huge map of ids, because your structure demands it.
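A minimal sketch of that approach, assuming each row carries an id field: parse the array as-is, then build the index in one pass only if you need it.

```javascript
// Parse the JSON array with no hidden indexing cost.
var rows = JSON.parse('[{"id": 1, "body": "hello"}, {"id": 2, "body": "world"}]');

// Build an id -> row index after loading, for O(1) access.
var index = {};
for (var i = 0; i < rows.length; i++) {
    index[rows[i].id] = rows[i];
}
```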
The structure of your data in memory does not have to be (and should not be) the same as the structure on disk, because there is no way to serialize/deserialize JavaScript's internal map structures. If that were possible, you would serialize and store indexes just like MS SQL Server stores tables and indexes.
But if the framework you are using forces you to have the same structure in memory and on disk, then I support your choice of using the id as the key in one big object, because it is then easier to pass the id in JSON requests to and from the server to act on an email item or update its state in the UI, assuming that both server and browser keep a significant list of email items in memory.


Are there benefits to re-defining JSON data as JavaScript objects in source code? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm loading a JSON file from a web server via XMLHttpRequest in JavaScript. This, or alternatively the JSON.parse method, returns a JSON object (a regular JavaScript object, which I call a JSON object as w3schools does).
Loaded JSON data:
{
    "MyName": "Kiwi",
    "MyNum": "42",
    "MyList": {
        "$type": "Bla",
        "$values": [ ]
    }
}
I can pass around the parsed object and access its properties like a regular JavaScript object. However, I'm wondering if it would make sense to actually declare the object's properties in source code, such as:
// MyObject.js
function MyObject() {
    this.myName = "Kiwi";
    this.myNum = 42;
    this.myList = [];
}
And basically map each property from my parsed JSON object to the JavaScript object declared in source code, like so (plus additional transformations):
var myObj = new MyObject();
myObj.myName = jsonObject.MyName;
myObj.myNum = jsonObject.MyNum;
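One way to centralize that mapping is a factory function. This is only a sketch: the Number() conversion and $values unwrapping reflect the asker's sample data, not any required API.

```javascript
function MyObject() {
    this.myName = "";
    this.myNum = 0;
    this.myList = [];
}

// Build a typed MyObject from the raw parsed JSON in one place.
MyObject.fromJson = function (jsonObject) {
    var obj = new MyObject();
    obj.myName = jsonObject.MyName;
    obj.myNum = Number(jsonObject.MyNum);  // "42" -> 42
    // Unwrap the JSON.NET metadata so callers use obj.myList[0].
    obj.myList = jsonObject.MyList ? jsonObject.MyList.$values : [];
    return obj;
};
```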
I would assume this has benefits such as:
Actual type information such as numbers vs strings
Potentially intellisense/auto-complete features in my IDE
Easier upgrading of data, if the JSON properties ever change
I'm comparing my approach to how JSON is parsed and turned into objects in a language such as C#, using a serializer such as JSON.NET.
Is this also common practice in the JavaScript world or should I stick to just using the JSON objects returned by the JSON.parse method?
Further info:
My special use-case is the handling of JSON data, which includes many meta fields (denoted by names such as "$type" or "$values" to indicate an array). The files were created by JSON.NET and serialize C# objects, which I basically mirror in my JavaScript application. Hence, I might want to re-declare properties more similar to the way the original C# classes were declared. Mainly this would turn calls like myObject.aList.$values[0] into myObject.aList[0].
If you need more than the data types JSON provides, you can roll something yourself to process the items that JSON.parse spits out, or use one of the many libraries for this, like this one.
Most people's use cases are simple enough that they won't need a library supporting 'richer' data storage, and plain JSON is fine.
If you know the data structure ahead of time and need to convert, for example, a JSON date (stored as a string) into a JavaScript date, it's best to just convert it upon loading the JSON. If you have complex needs requiring complex data types that you can't predict ahead of time, use a library like Transit.js.
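For the date case, JSON.parse's reviver callback is a convenient hook to do the conversion at parse time. A sketch, where the ISO-string regex is just an illustrative heuristic and the field names are made up:

```javascript
// Turn ISO-8601 date strings into Date objects while parsing.
var isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/;
var parsed = JSON.parse('{"id": 1, "created": "2015-06-01T12:00:00.000Z"}',
    function (key, value) {
        return (typeof value === "string" && isoDate.test(value))
            ? new Date(value)
            : value;
    });
// parsed.created is now a Date, not a string.
```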

What's a good architecture for react-redux apps that require lots of arbitrary data mapping? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
Let's say you're making an app that displays data about movies, which you get from the server in this format:
{ movies: [], actors: [], directors: [], years: [] }
The different arrays relate to each other via ids. In other words, an object in movies looks like this:
{title: 'The Shining', cast: ['837392010', '363893920', ...], ...}
Each of those ids, in the cast array, is a reference to an actor in the actors array.
You put the data, as is, in the redux store, but different components need to use it in different arbitrary ways. For instance, one component needs to list all the movies Jack Nicholson was in during the 1970s, even though the years array doesn't link directly to actors. It just links to movies, which themselves link to actors. So before the component can render, it needs to run a bunch of data transforms.
Another component lists all the times Martin Scorsese and Robert DiNero have worked together, another lists all the movies with "Fire" in their titles, another lists all the movies Tom Hanks was in between 1990 and 2000...
The product manager plans to add and remove arbitrary displays like this all the time, so the data will be used, in the app, like a queryable database. The specific queries will change quite often.
Let's also say that the transforms must be done on the client. The server's only capability will be to send the data in the normalized form, above.
What's the best way to structure an app like this, so that it's scalable and easy to update? I can think of a few approaches:
Do all the mapping in reducers. The downside is that the reducers will get bloated. They will need to cache the same data in a ton of different forms.
Store the raw, normalized data in the reducers, and use controller components (high-level components) to map it into whatever form their children need. I'm not crazy about this, because it litters the view with business logic. Of course, that assumes you think of high-level components as view.
Map in connect's mapStateToProps functions. Each high-level component would connect to the store, but before it got its props, they'd be transformed, by connect, into the appropriate format.
????
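A sketch of the third option: keep the raw normalized data in the store and do the joins in plain selector functions that mapStateToProps calls. All field names here are assumptions based on the example data above.

```javascript
// Selector: derive "movies featuring this actor" from normalized state.
function selectMoviesByActor(state, actorId) {
    return state.movies.filter(function (movie) {
        return movie.cast.indexOf(actorId) !== -1;
    });
}

// mapStateToProps hands the component only the derived list,
// so the store keeps a single normalized copy of the data.
function mapStateToProps(state, ownProps) {
    return { movies: selectMoviesByActor(state, ownProps.actorId) };
}
```

Because selectors are plain functions, they can be composed and memoized (e.g. with reselect) as the product manager's queries multiply.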

Which is better when sending Javascript array to PHP script with jQuery: Using JSON or just let jQuery convert it to query string? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm using jQuery's $.ajax function with the POST method to send a JavaScript array to a PHP script (AFAIK, the GET method would be analogous):
fruits = [
    {
        color: "yellow",
        taste: "sour"
    },
    {
        color: "blue",
        taste: "semi-sweet"
    }
];
$.ajax({
    url: "php/script.php",
    data: {
        fruits: fruits
    },
    type: "POST"
});
After making the request, the $_POST variable in PHP is populated with:
$_POST['fruits'][0]['color']
$_POST['fruits'][0]['taste']
$_POST['fruits'][1]['color']
$_POST['fruits'][1]['taste']
Alternatively, in $.ajax function call, I could send data as stringified JSON:
data: {
    fruits: JSON.stringify(fruits)
},
In this scenario, the data would be sent as a single string (performance gain?), but I would have to use PHP's json_decode() function on the $_POST['fruits'] argument:
$fruits = json_decode($_POST['fruits']);
After using it, I would be able to access the data in a similar way:
fruits[0]['color']
fruits[0]['taste']
fruits[1]['color']
fruits[1]['taste']
I wonder if these methods are similarly good, or if one of them is better, and why. Thank you in advance for sharing your knowledge :)
Better is a very fluid concept. You have to consider several factors, and I will try to list some of them.
Readability:
The JSON format is highly readable and may map nicely to backend models, which may help you keep a better mental model. The query string, on the other hand, is not far behind readability-wise, but it looks less like an object.
Performance:
The query string content length is 127 bytes; the JSON content length is 138 bytes.
That's an eleven-byte difference, which may be critical on huge systems.
Intuition:
Your data exchange seems well suited to JSON. You send an array of objects, and although it is easy to post it the way you showed, I think JSON is slightly easier to debug and construct.
JSON is more the enterprise/REST (API) way to go. While it's more modern, clean, and dynamic, it might be overkill for you.
If you're doing a simple post with 2 or 3 strings, go right ahead.
One thing to note: posting null values is really only possible with JSON, which makes it the more versatile option; as stated, nested data is also easier with JSON. But simple contact forms shouldn't bother with this unless you're using Angular or something.
It's most common to use JSON on POST.
A few reasons why:
query strings need to be URL-encoded
it's rather difficult to represent complex data (e.g. nested objects or even nested arrays) in query strings
JSON maps to native objects on the client
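A runnable sketch of the stringified variant the asker describes; the $.ajax call itself is left as a comment, since it needs a browser and a server:

```javascript
var fruits = [
    { color: "yellow", taste: "sour" },
    { color: "blue", taste: "semi-sweet" }
];

// Serialize once; PHP receives a single string for json_decode().
var payload = { fruits: JSON.stringify(fruits) };

// In the browser:
// $.ajax({ url: "php/script.php", data: payload, type: "POST" });
```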

Persist complex array in Javascript [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Hello, I am having a hard time understanding what I need to do in order to persist a complex array I have. The array is filled through an AJAX call to a JsonResult and has a couple of strings and two objects, something like this:
{
Name
Age
Surname
Object 1
Object 2
}
This array is filled whenever I load my "main page". Now when I change page to, let's say, page B, I would like to still have access to those values.
I know that it's possible to use JSON.stringify, but I would like to maintain the array without needing to convert it to a string, because I would then need to convert the numbers back to numbers.
Any suggestions are greatly appreciated.
The key here is: "Now when I change page to lets say page B I would like to still have access to those values."
The JavaScript scope (that is, all the variables and functions you want to use) only lives as long as the page does. So if you refresh, the scope is gone (all your variables will disappear)!
So, how to persist your information? As the commenters have said, you've got some options.
Traditionally, cookies are used - there are lots of tutorials on how to do that. They store a key, a value, and an expiration.
The HTML5 Web Storage API has introduced browser storage, which is generally better than cookies but less widely supported (although support is pretty good now in 2015).
You can store it on the server using a server-side language like PHP, Ruby, Java, etc. and pass it back to the page when the page is rendered.
Basically, Javascript cannot store variables by itself if the page is refreshed. You've got to use one of the above options, which are each an interesting learning curve by themselves (outside the scope of this question).
I'd recommend, in order:
starting with session storage if you're just experimenting
cookies if you want to build a resilient solution
server-side stuff if you want to take the red pill.
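On the asker's specific worry about numbers: a JSON round trip through storage brings numbers back as numbers, so no manual re-conversion is needed. A sketch, where the field names mirror the asker's outline and the values are made up; the sessionStorage calls are shown as comments, since they only exist in a browser:

```javascript
var record = { Name: "Ana", Age: 30, Surname: "Silva", Object1: { x: 1 }, Object2: { y: 2 } };

// In the browser, on the main page:
//   sessionStorage.setItem("record", JSON.stringify(record));
// ...and on page B:
//   var restored = JSON.parse(sessionStorage.getItem("record"));

// The round trip preserves number types:
var restored = JSON.parse(JSON.stringify(record));
```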
Incidentally, your notation is not correct for JavaScript: arrays are written as
["foo", "bar", "etc"]
and JavaScript objects (which can be used as associative arrays) look like
{ "key": "value", "ghandi": "india", "mandela": "south africa" }

More bandwidth-efficient data formats than XML & JSON? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
When transferring large amounts of columnar data from web server to browser in the JSON or XML format, column names are repeated unnecessarily for every row. What I have to transfer is basically CSV format - ie. multiple rows of data, each with the same field order.
For example, in XML I'd be transferring:
<row>
    <name>Frank</name>
    <city>New York</city>
</row>
<row>
    <name>Brian</name>
    <city>Jerusalem</city>
</row>
etc..
Same with JSON: field names are repeated unnecessarily, which adds a lot of extra bytes for no reason.
My question is, are there other standard formats, supported by libraries in Javascript and/or .NET, which I can use to more efficiently transfer this kind of "CSV-like" data set?
Unless you're talking about a LOT of data, it probably doesn't matter that much (bandwidth is cheap, or so I hear). However, if you're trying to squeeze as much data as possible into the available bandwidth, you do have options.
You could use whatever format you want (XML, JSON), then compress it. For example, you could use JSZip. Advantage: you don't have to transform your basic data structure.
Also, there's nothing to stop you from using CSV if that's the format that makes the most sense for your application. It's best to use a library such as jquery-csv to handle annoyances like properly quoting strings. Disadvantage: this isn't really compression, just a format without a lot of overhead.
All things told, I would probably go with the first approach.
I guess your JSON format is like below:
[
    {
        "name": "Frank",
        "city": "New York"
    },
    {
        "name": "Brian",
        "city": "Jerusalem"
    }
]
Why not simply change it to:
[
["name", "city"],
["Frank", "New York"],
["Brian", "Jerusalem"]
]
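A sketch of converting between the two shapes, so the compact form can be rebuilt into objects on the client; the function names are made up, and uniform keys across rows are assumed:

```javascript
// Array-of-objects -> [header, ...rows].
function toCompact(objects) {
    var header = Object.keys(objects[0]);
    var out = [header];
    objects.forEach(function (obj) {
        out.push(header.map(function (key) { return obj[key]; }));
    });
    return out;
}

// [header, ...rows] -> array-of-objects.
function toVerbose(compact) {
    var header = compact[0];
    return compact.slice(1).map(function (row) {
        var obj = {};
        header.forEach(function (key, i) { obj[key] = row[i]; });
        return obj;
    });
}
```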
I had a similar problem when transferring large amounts of data via XML between Flash (running on the client, similar to JavaScript) and .NET (running on the server, obviously).
The method I ended up using was:
XML > string without whitespace > zip compress > base64 encode
And then on the other side, do the reverse.
In your case, you could just do:
CSV > string > zip > base64
From memory, I was able to get some really big payloads (approx. 2 MB) down to less than 50 KB.
