I have the following enum in my project:
public enum CameraAccessMethod
{
    Manual = 0,
    Panasonic = 1,
    Axis = 2,
    AirCam = 3
}
I have an object that is serialized either to JSON or to XML depending on the scenario, and one of the object's properties is of type CameraAccessMethod. The problem I have is that when this property is serialized to XML it gives the string representation of the enum values (Manual, Panasonic, Axis, AirCam), but in JSON it is serialized to the numeric values (0, 1, 2, 3). How can I avoid this inconsistency? I want strings in the JSON serialization as well.
Since Web API RC you can get string representations of enums by adding a StringEnumConverter to the existing JsonMediaTypeFormatter's converter collection during Application_Start():
var jsonFormatter = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
var enumConverter = new Newtonsoft.Json.Converters.StringEnumConverter();
jsonFormatter.SerializerSettings.Converters.Add(enumConverter);
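Alternatively, if you only need this for a single property rather than globally, Json.NET also supports opting in per member with an attribute (a minimal sketch; the CameraSettings class name is illustrative):

using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public class CameraSettings
{
    // serialized as "Panasonic" rather than 1
    [JsonConverter(typeof(StringEnumConverter))]
    public CameraAccessMethod AccessMethod { get; set; }
}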
You can accomplish this easily if you switch to using a formatter based on Json.NET (which will ship out of the box with the next drop of ASP.NET Web API). See this SO post for details:
How to tell Json.Net globally to apply the StringEnumConverter to all enums
To use both the JSON formatter's serializer settings and the enum converter with a PartialJsonMediaTypeFormatter, you can use the code below:
var serializerSettings = GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings;
var enumConverter = new Newtonsoft.Json.Converters.StringEnumConverter();
serializerSettings.Converters.Add(enumConverter);
GlobalConfiguration.Configuration.Formatters.Clear();
GlobalConfiguration.Configuration.Formatters.Add(new PartialJsonMediaTypeFormatter()
{
IgnoreCase = true,
SerializerSettings = serializerSettings
});
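With the converter registered, the enum property is emitted by name in JSON as well, matching the XML representation (illustrative output, assuming a property named AccessMethod):

{ "AccessMethod": "Panasonic" }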
I have some numbers in JSON that overflow the Number type, so I want them to be BigInt, but how?
{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}
TL;DR
You may employ the JSON.parse() reviver parameter.
Detailed Solution
To control JSON.parse() behavior that way, you can make use of the second parameter of JSON.parse() (the reviver), a function that pre-processes key-value pairs (and may pass the desired values on to BigInt()).
Yet, values recognized as numbers will already have been coerced by the time the reviver sees them (the credit for pinpointing this issue goes to @YohanesGultom).
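A minimal illustration of that coercion (the exact digits you get back depend on the nearest representable double):

// the literal exceeds Number.MAX_SAFE_INTEGER, so precision is
// already lost before any reviver gets to see the value
const n = JSON.parse('{"n":2323866757078990912}').n;
console.log(Number.isSafeInteger(n)); // false
console.log(String(n));               // not "2323866757078990912"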
To get around this, you may enquote your big numbers (turning them into strings) in the source JSON string, so that their values are preserved upon conversion to BigInt.
As long as you wish to convert only certain numbers to BigInt, you need to pick appropriate criteria (e.g. checking whether the value exceeds Number.MAX_SAFE_INTEGER with Number.isSafeInteger(), as @PeterSeliger has suggested).
Thus, your problem may be solved with something like this:
// source JSON string
const input = `{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}`
// function that implements desired criteria
// to separate *big numbers* from *small* ones
//
// (works for input parameter num of type number/string)
const isBigNumber = num => !Number.isSafeInteger(+num)
// function that enquotes *big numbers* matching
// desired criteria into double quotes inside
// JSON string
//
// (function checking for *big numbers* may be
// passed as a second parameter for flexibility)
const enquoteBigNumber = (jsonString, bigNumChecker) =>
jsonString
.replaceAll(
/([:\s\[,]*)(\d+)([\s,\]]*)/g,
(matchingSubstr, prefix, bigNum, suffix) =>
bigNumChecker(bigNum)
? `${prefix}"${bigNum}"${suffix}`
: matchingSubstr
)
// parser that turns matching *big numbers* in
// source JSON string to bigint
const parseWithBigInt = (jsonString, bigNumChecker) =>
JSON.parse(
enquoteBigNumber(jsonString, bigNumChecker),
(key, value) =>
!isNaN(value) && bigNumChecker(value)
? BigInt(value)
: value
)
// resulting output
const output = parseWithBigInt(input, isBigNumber)
console.log("output.foo[1][0]: \n", output.foo[1][0], `(type: ${typeof output.foo[1][0]})`)
console.log("output.bar[0][0]: \n", output.bar[0][0].toString(), `(type: ${typeof output.bar[0][0]})`)
Note: you may find the RegExp pattern that matches digit strings among JSON values not quite robust, so feel free to come up with your own (mine was the quickest I could come up with off the top of my head for demo purposes).
Note: you may still opt for some library, as suggested by @YohanesGultom, yet adding 10k to your client bundle or 37k to your server-side dependencies (and possibly to a Docker image size) for that sole purpose may not be quite reasonable.
// Creating an ArrayBuffer with a size in bytes
var buffer = new ArrayBuffer(16);
// Creating views
var view1 = new DataView(buffer);
// Attempting to put null in slot 0
view1.setInt8(0, null);
console.log(view1.getInt8(0));
Result:
0
Expected:
null
How do I set null/empty data? Is there a way to check for null data in an ArrayBuffer?
E.g. we have a CSV file with data like this:
0,,1,0
Thank you so much
From the MDN ArrayBuffer docs (emphasis mine):
The ArrayBuffer object is used to represent a generic, fixed-length
raw binary data buffer.
I.e., ArrayBuffers hold binary (Number) values only. For this reason, the DataView API will only let you set float or integer values. null, however, is not a Number; it's one of JS's primitive values.
You can further see this in the ECMAScript specification, where step 4 of the abstract SetValueInBuffer operation reads: "Assert: Type(value) is Number." The spec does not define how to handle non-Number types, however. One could argue that a TypeError should be thrown in this case, but all the implementations I checked (Chrome, Safari, Firefox, Node.js) quietly cast the value to zero... which is what you're seeing. You'll get the same behavior if you pass a String, Date, RegExp, Boolean, or undefined.
(If you pass a BigInt or Symbol, however, you appear to get a TypeError... weird.)
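A short sketch of the quiet coercion, plus one common workaround: reserving a sentinel value of your own to stand in for "missing" (e.g. the empty CSV field). The sentinel choice here is an assumption, not an API feature:

var buf = new ArrayBuffer(4);
var view = new DataView(buf);
// all of these coerce to 0 (via ToNumber, with NaN stored as 0)
view.setInt8(0, null);
view.setInt8(1, undefined);
view.setInt8(2, "not a number");
console.log(view.getInt8(0), view.getInt8(1), view.getInt8(2)); // 0 0 0
// reserve an out-of-band value to represent "missing"
var MISSING = -128; // an arbitrary choice for Int8
view.setInt8(3, MISSING);
console.log(view.getInt8(3) === MISSING); // true -> treat as null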
Context: I have been building an application that creates and uses magnet links. I've been trying to find an efficient way to transfer a JavaScript object in the query string so that on the other side I can deserialize it into an object keeping the same types. By efficient I mean using as few characters as possible / transferring as much data as possible. I've found my application has a max of roughly 1500 characters in the URL.
At first I used the usual querystring npm packages, but these can change types on deserialization and are also very inefficient on deeper objects.
e.g.:
var input = { age: 12, name: 'piet' };
var qs = querystring.encode(input); // 'age=12&name=piet'
var output = querystring.decode(qs); // { age: '12', name: 'piet' }
Then I tried JSON stringifying for query strings, with and without base64. But most of the time this left me with much bigger strings for simple objects:
var input = { age: 12, name:'piet'};
var qs = encodeURIComponent(JSON.stringify(input)); // "%7B%22age%22%3A12%2C%22name%22%3A%22piet%22%7D"
But this leaves me with ridiculously long query strings, because half of the characters get percent-encoded and become three times as long, which almost doubles the overall length.
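To make the size difference concrete, here are the lengths for the example above:

var json = JSON.stringify({ age: 12, name: 'piet' });
json.length;                     // 24
encodeURIComponent(json).length; // 46 - every quote, brace, colon and comma becomes a %XX triplet
btoa(json).length;               // 32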
base64 encoding in this case is a much better solution:
var input = { age: 12, name:'piet'};
var qs = btoa(JSON.stringify(input)); // eyJhZ2UiOjEyLCJuYW1lIjoicGlldCJ9
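One caveat: btoa() output can contain '+', '/' and '=', which are not URL-safe, so a common trick is the base64url variant (toBase64Url is just an illustrative helper name):

var toBase64Url = function (s) {
  return btoa(s).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
};
toBase64Url(JSON.stringify({ age: 12, name: 'piet' }));
// "eyJhZ2UiOjEyLCJuYW1lIjoicGlldCJ9" (unchanged here, since no unsafe chars occur)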
I've been trying to Google for an efficient algorithm but haven't really found a good solution. I've looked into MessagePack binary serialization, but then I would also have to base64-encode the result, which would probably end up with a longer string.
Is there a known, more efficient algorithm for object-to-query-string serialization with static types? Or would I have to create my own?
I've been thinking of a simple query string algorithm that works as follows (a rough sketch follows below):
Query string order is important, for the next point:
Keys starting with . indicate depth: ?obj&.property="test" = { obj: { property: "test" } }
The first character of a value defines its type: b=boolean, s=string, n=number (etc. if needed)
This would lead to a much more efficient query string, I think. But am I not building something that has already been made before?
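A rough sketch of that idea for flat objects (encodeTyped/decodeTyped are hypothetical names, and the depth marker from the list above is omitted for brevity):

// tag each value with a one-letter type prefix: n=number, b=boolean, s=string
var encodeTyped = function (obj) {
  return Object.entries(obj).map(function (entry) {
    var k = entry[0], v = entry[1];
    var tag = typeof v === 'number' ? 'n' : typeof v === 'boolean' ? 'b' : 's';
    return encodeURIComponent(k) + '=' + tag + encodeURIComponent(String(v));
  }).join('&');
};
var decodeTyped = function (qs) {
  return Object.fromEntries(qs.split('&').map(function (pair) {
    var i = pair.indexOf('=');
    var k = decodeURIComponent(pair.slice(0, i));
    var tag = pair[i + 1];
    var raw = decodeURIComponent(pair.slice(i + 2));
    var v = tag === 'n' ? Number(raw) : tag === 'b' ? raw === 'true' : raw;
    return [k, v];
  }));
};
encodeTyped({ age: 12, name: 'piet' }); // "age=n12&name=spiet"
decodeTyped('age=n12&name=spiet');      // { age: 12, name: 'piet' }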
Surely sticking the stringified object into a single URI value should work?
var test = { string: 'string', integer: 8, intarray: [1,2,3] };
encodeURIComponent(JSON.stringify(test))
// "%7B%22string%22%3A%22string%22%2C%22integer%22%3A8%2C%22intarray%22%3A%5B1%2C2%2C3%5D%7D"
JSON.parse(decodeURIComponent("%7B%22string%22%3A%22string%22%2C%22integer%22%3A8%2C%22intarray%22%3A%5B1%2C2%2C3%5D%7D"))
// {string: "string", integer: 8, intarray: [1, 2, 3]}
The object parsed at the end has the same types as the object inputted at the start.
Just stick your object into a single key:
var url = 'http://example.com/query?key=' + encodeURIComponent(JSON.stringify(object));
Right? And on the server you just parse that single value into an object.
I have a huge numpy array of floats, say ~1500*2500px, and I want to
1. convert this array to a list (like javascript) (e.g. [[0.1,0.3,0.2],[0.1,0.3,0.2]])
2. serialize it to a string for a POST request to a server.
I don't know how to do (1). For (2) I took a look at the numpy.array_str(), array2string() and array_repr() functions; they return representations of the array but not the full array.
How should I do this?
I'm not sure why you want this array to "[look?] like a JavaScript array", so I am presuming (as I am at liberty to do in the absence of information to the contrary) that you wish to communicate the array to some unfortunate front-end process: almost four million elements is still a significant amount of data to squirt across network pipes. So, as always, some background to the problem would be helpful (and you can edit your question to provide it).
Assuming you want to serialize the data for transmission or storage, the simplest way to render it as a string comprehensible to JavaScript (I didn't really know what "[look?] like" meant) is using the json standard library. Since this can't natively encode anything but lists and dicts of ints, floats, truth values and strings, you are still faced with the problem of how best to represent the matrix as a list of lists.
Here's a small example, though you have to accept this is a random shot in the dark. First let's create a manageable data set to work with:

import numpy as np

a = np.random.randn(4, 5)
This cannot be directly represented in JSON:
import json
try:
    json.dumps(a)
except Exception as e:
    print("Exception", e)
resulting in the rather verbose (it's probably just calling the object's repr) but comprehensible and true message
Exception array([[ 1.24064541, 0.97989932, -0.8469167 , -0.27318908, 1.21954134],
[-1.30172725, 0.41261504, 1.39895842, 0.75260258, -1.34749298],
[-0.38415007, -0.56925321, -1.59202204, 1.29900292, 1.91357277],
[ 1.06254537, 2.75700739, -0.66371951, 1.36906192, -0.3973517 ]]) is not JSON serializable
If we ask the interpreter to convert the array to a list it does a half-hearted job, converting it into a list of array objects:
list(a)
shows as its result
[array([ 1.24064541, 0.97989932, -0.8469167 , -0.27318908, 1.21954134]),
array([-1.30172725, 0.41261504, 1.39895842, 0.75260258, -1.34749298]),
array([-0.38415007, -0.56925321, -1.59202204, 1.29900292, 1.91357277]),
array([ 1.06254537, 2.75700739, -0.66371951, 1.36906192, -0.3973517 ])]
Using the same function to convert those arrays into lists yields a usable list of lists:
list(list(r) for r in a)
evaluating to
[[1.2406454087805279,
0.97989932000522928,
-0.84691669720415574,
-0.27318907894171163,
1.219541337120247],
[-1.3017272505660062,
0.41261503624079976,
1.3989584188044133,
0.75260257672408482,
-1.3474929807527067],
[-0.38415007296182629,
-0.56925320938196644,
-1.5920220380072485,
1.2990029230603588,
1.9135727724853433],
[1.0625453748520415,
2.7570073901625185,
-0.66371950666590918,
1.3690619178580901,
-0.39735169991907082]]
This is eminently convertible to JSON, which I do here by converting it into a string:
json.dumps(list(list(r) for r in a))
which gives the (string) result
'[[1.2406454087805279, 0.97989932000522928, -0.84691669720415574, -0.27318907894171163, 1.219541337120247], [-1.3017272505660062, 0.41261503624079976, 1.3989584188044133, 0.75260257672408482, -1.3474929807527067], [-0.38415007296182629, -0.56925320938196644, -1.5920220380072485, 1.2990029230603588, 1.9135727724853433], [1.0625453748520415, 2.7570073901625185, -0.66371950666590918, 1.3690619178580901, -0.39735169991907082]]'
You can check that the result is correct by reconstituting the list of lists and comparing it with the array (since one of the arguments is a numpy array, the comparison is done elementwise):
s = json.dumps(list(list(r) for r in a))
lofls = json.loads(s)
lofls == a
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]], dtype=bool)
Did I understand your question correctly?
The following method is intuitive:
np.ndarray -> list -> json_str -> list -> np.array(list_obj)
Code sample:
import json
import numpy as np
np_arr = np.array([[1, 2], [7, 8]])
json_str = json.dumps(np_arr.tolist())
arr = json.loads(json_str)
restore_np = np.array(arr)
# True
print((np_arr == restore_np).all())
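One caveat, in case the dtype matters to you: the JSON round trip restores the values but not necessarily the original dtype, since tolist() yields plain Python scalars:

import json
import numpy as np

np_arr = np.array([[1, 2], [7, 8]], dtype=np.int16)
restored = np.array(json.loads(json.dumps(np_arr.tolist())))
print(restored.dtype)  # platform default int (e.g. int64), not the original int16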
You can convert it to a normal Python list and then to a string:
arr = np.random.rand(4, 4)
final_string = str(arr.tolist())
resulting in
[[0.7998950511604668, 0.3504357174428122, 0.4516363276829708, 0.42090556177992977], [0.5151195486975273, 0.7101183117731774, 0.9530575343271824, 0.39869760958795464], [0.20318293100519536, 0.17244659329654555, 0.3530236209359401, 0.2081303162461341], [0.9186758779272243, 0.9300730012004015, 0.14121513893149895, 0.39315493832613735]]