C# decimal to C++ float and JavaScript number

I have a TCP server written in C#. I have to write two clients (C++ and JavaScript). I can deserialize the decimal (16 bytes, 128 bits) in the C# client, but I can't deserialize it in the other languages.
The decimals are not too big, so I can use float or double.
When I serialize the decimal:
MemoryStream combinedMessage = new MemoryStream();
decimal d = 2135102.06m;
using (BinaryWriter writer = new BinaryWriter(combinedMessage, encoding))
{
    writer.Write(d);
}
byte[] message = combinedMessage.ToArray();
Serialized as:
62 232 185 12 0 0 0 0 0 0 0 0 0 0 2 0
How can I deserialize the decimal from the byte[] in C++ and JavaScript?

The first 12 bytes are a little-endian 96-bit integer, bytes 13 and 14 are unused (for now), byte 15 contains the scale (the power of 10 to divide by), and byte 16 contains the sign bit in the MSB (the other bits are unused). The main difficulty lies in accurate conversion: even if the decimal is "not too big", converting it to a float or Number can be done in ways that lose more or less accuracy.
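For the example bytes above: the first four bytes 62 232 185 12, read little-endian, are 0x0CB9E83E = 213510206 (the remaining bytes of the 96-bit integer are zero), the scale byte is 2, and the sign byte is 0, so the value is 213510206 / 10^2 = 2135102.06.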
The following routine isn't necessarily the most accurate way to convert decimals, nor the fastest, but if you are not overly concerned with either accuracy or speed, it'll get the job done, and it has the benefit of being easy to translate to almost any C-like language. Here it is in JavaScript:
var b = [ 62, 232, 185, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0 ];
var d = 0.0;
for (var i = 11; i >= 0; --i) {
    var k = b[i];
    for (var j = 0; j != 8; ++j) {
        d *= 2;
        d += (k & 0x80) >> 7;
        k <<= 1;
    }
}
var scale = b[14];
d /= Math.pow(10, scale);
if (b[15] >= 0x80) d = -d;
This is almost valid C# already; all you need to change is Math.Pow and byte[] b = { 62 ... }. For C (and by extension C++) the changes aren't much more complicated:
#include <math.h>

unsigned char b[] = { 62, 232, 185, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0 };
double d = 0.0;
for (int i = 11; i >= 0; --i) {
    unsigned char k = b[i];
    for (int j = 0; j != 8; ++j) {
        d *= 2;
        d += (k & 0x80) >> 7;
        k <<= 1;
    }
}
int scale = b[14];
d /= pow(10, scale);
if (b[15] >= 0x80) d = -d;
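If your JavaScript environment has BigInt and typed arrays (an assumption; older engines don't), a more direct decode of the same layout is also possible. This is a sketch that keeps full precision until the final Number() conversion:
const bytes = new Uint8Array([62, 232, 185, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0]);
const view = new DataView(bytes.buffer);
// Assemble the little-endian 96-bit integer from three 32-bit words.
const lo  = BigInt(view.getUint32(0, true));
const mid = BigInt(view.getUint32(4, true));
const hi  = BigInt(view.getUint32(8, true));
let d = Number(lo | (mid << 32n) | (hi << 64n)); // exact only below 2^53
d /= Math.pow(10, bytes[14]);  // apply the scale
if (bytes[15] >= 0x80) d = -d; // apply the sign
console.log(d);                // 2135102.06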

Related

Using bitwise operators with large numbers in javascript [duplicate]

I am writing a JavaScript version of this Microsoft string decoding algorithm, and it's failing on large numbers. This seems to be because of sizing (int/long) issues. If I step through the code in C#, I see that the JS implementation fails on this line:
n |= (b & 31) << k;
This happens with the following values (where the C# result is 240518168576):
(39 & 31) << 35
If I play around with these values in C#, I can replicate the JS issue if b is an int. And if I set b to be a long, it works correctly.
So then I checked the max size of a JS number and compared it to the C# long result:
240518168576 < Number.MAX_SAFE_INTEGER // true
So I can see that there is some kind of number-size issue happening, but I do not know how to force JS to treat this number as a long.
Full JS code:
private getPointsFromEncodedString(encodedLine: string): number[][] {
    const EncodingString = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    var points: number[][] = [];
    if (!encodedLine) {
        return points;
    }
    var index = 0;
    var xsum = 0;
    var ysum = 0;
    while (index < encodedLine.length) {
        var n = 0;
        var k = 0;
        debugger;
        while (true) {
            if (index >= encodedLine.length) {
                return points;
            }
            var b = EncodingString.indexOf(encodedLine[index++]);
            if (b == -1) {
                return points;
            }
            n |= (b & 31) << k;
            k += 5;
            if (b < 32) {
                break;
            }
        }
        var diagonal = ((Math.sqrt(8 * n + 5) - 1) / 2);
        n -= diagonal * (diagonal + 1) / 2;
        var ny = n;
        var nx = diagonal - ny;
        nx = (nx >> 1) ^ -(nx & 1);
        ny = (ny >> 1) ^ -(ny & 1);
        xsum += nx;
        ysum += ny;
        points.push([ysum * 0.000001, xsum * 0.000001]);
    }
    console.log(points);
    return points;
}
Expected input/output:
Encoded string
qkoo7v4q-lmB0471BiuuNmo30B
Decoded points:
35.89431, -110.72522
35.89393, -110.72578
35.89374, -110.72606
35.89337, -110.72662
Bitwise operators treat their operands as a sequence of 32 bits (zeroes and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
(39 & 31) << 35 tries to shift by 35 bits when there are only 32.
Bitwise Operators
To solve this problem you could use BigInt to perform those operations and then cast the result back down to a Number:
Number((39n & 31n) << 35n)
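For reference, a quick console check (a 32-bit shift count is taken mod 32, which is also why the C# int version misbehaves the same way):
console.log((39 & 31) << 35);            // 56, i.e. 7 << 3, since 35 mod 32 = 3
console.log(Number((39n & 31n) << 35n)); // 240518168576, the intended result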
You can try this:
function getPointsFromEncodedString(encodedLine) {
    const EncodingString = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    var points = [];
    if (!encodedLine) {
        return points;
    }
    var index = 0;
    var xsum = 0;
    var ysum = 0;
    while (index < encodedLine.length) {
        var n = 0n;
        var k = 0n;
        while (true) {
            if (index >= encodedLine.length) {
                return points;
            }
            var b = EncodingString.indexOf(encodedLine[index++]);
            if (b === -1) {
                return points;
            }
            // BigInt and Number operands cannot be mixed, so convert b explicitly
            n |= (BigInt(b) & 31n) << k;
            k += 5n;
            if (b < 32) {
                break;
            }
        }
        // the decoded value is small enough for a double, so switch back
        // to Number for the floating-point math (truncating the diagonal
        // as the C# original does with its integer cast)
        var m = Number(n);
        var diagonal = Math.floor((Math.sqrt(8 * m + 5) - 1) / 2);
        m -= diagonal * (diagonal + 1) / 2;
        var ny = m;
        var nx = diagonal - ny;
        nx = (nx >> 1) ^ -(nx & 1);
        ny = (ny >> 1) ^ -(ny & 1);
        xsum += nx;
        ysum += ny;
        points.push([ysum * 0.000001, xsum * 0.000001]);
    }
    console.log(points);
    return points;
}
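Calling the corrected function on the encoded string from the question should log points matching the expected output above:
getPointsFromEncodedString("qkoo7v4q-lmB0471BiuuNmo30B");
// expected: [[35.89431, -110.72522], [35.89393, -110.72578],
//            [35.89374, -110.72606], [35.89337, -110.72662]]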

How to build CRC32 table for Ogg?

From this answer I adapted the code below:
function _makeCRCTable() {
    const CRCTable = new Uint32Array(256);
    for (let i = 256; i--;) {
        let char = i;
        for (let j = 8; j--;) {
            char = char & 1 ? 3988292384 ^ char >>> 1 : char >>> 1;
        }
        CRCTable[i] = char;
    }
    return CRCTable;
}
This code generates a table as here, but for Ogg I need a different table, as here.
From Ogg documentation:
32 bit CRC value (direct algorithm, initial val and final XOR = 0, generator polynomial=0x04c11db7)
parseInt('04c11db7', 16)
returns 79764919. I tried this polynomial, but the resulting table is not correct.
I am new to the CRC field; as I found, there are a few variations of the CRC32 algorithm.
I'm not sure of JavaScript precedence, but the XOR needs to occur after the shift:
char = char & 1 ? 3988292384 ^ (char >>> 1) : char >>> 1;
However, the first table you show seems correct, as table[128] = table[0x80] = 3988292384 = 0xEDB88320, which is 0x104c11db7 bit-reversed, then shifted right one bit.
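Equivalently, 0xEDB88320 is the 32-bit reversal of 0x04C11DB7 (a quick check; reverse32 is a helper written here for illustration):
function reverse32(x) {
    let r = 0;
    for (let i = 0; i < 32; i++) {
        r = (r << 1) | (x & 1); // shift the lowest bit of x into r
        x >>>= 1;
    }
    return r >>> 0;             // force an unsigned result
}
console.log(reverse32(0x04C11DB7).toString(16)); // "edb88320"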
The second table you have is for a left-shifting CRC, where table[1] = 0x04c11db7. In this case the inner loop would include something like this:
let char = i << 24;
for (let j = 8; j--;) {
    char = char & 0x80000000 ? 0x04c11db7 ^ char << 1 : char << 1;
}
Example C code for comparison; it generates the CRC for the patterns {0x01}, {0x01,0x00}, {0x01,0x00,0x00}, and {0x01,0x00,0x00,0x00}:
#include <stdio.h>

typedef unsigned char uint8_t;
typedef unsigned int uint32_t;

uint32_t crctbl[256];

void gentbl(void)
{
    uint32_t crc;
    uint32_t b;
    uint32_t c;
    uint32_t i;
    for (c = 0; c < 0x100; c++) {
        crc = c << 24;
        for (i = 0; i < 8; i++) {
            b = crc >> 31;
            crc <<= 1;
            crc ^= (0 - b) & 0x04c11db7;
        }
        crctbl[c] = crc;
    }
}

uint32_t crc32(uint8_t * bfr, size_t size)
{
    uint32_t crc = 0;
    while (size--)
        crc = (crc << 8) ^ crctbl[(crc >> 24) ^ *bfr++];
    return crc;
}

int main(int argc, char** argv)
{
    uint32_t crc;
    uint8_t bfr[4] = {0x01, 0x00, 0x00, 0x00};
    gentbl();
    crc = crc32(bfr, 1); /* 0x04c11db7 */
    printf("%08x\n", crc);
    crc = crc32(bfr, 2); /* 0xd219c1dc */
    printf("%08x\n", crc);
    crc = crc32(bfr, 3); /* 0x01d8ac87 */
    printf("%08x\n", crc);
    crc = crc32(bfr, 4); /* 0xdc6d9ab7 */
    printf("%08x\n", crc);
    return 0;
}
For JS:
function _makeCRC32Table() {
    const polynomial = 79764919; // 0x04c11db7
    const mask = 2147483648;     // 0x80000000
    const CRCTable = new Uint32Array(256);
    for (let i = 256; i--;) {
        let char = i << 24;
        for (let j = 8; j--;) {
            char = char & mask ? polynomial ^ char << 1 : char << 1;
        }
        CRCTable[i] = char;
    }
    return CRCTable;
}
How to use this table:
[1, 0].reduce((crc, byte) => crc << 8 >>> 0 ^ CRCTable[crc >>> 24 ^ byte], 0) >>> 0
Here we appended >>> 0, which converts the value to an unsigned 32-bit number. This is needed because there is no unsigned int in JS; in fact, JavaScript doesn't have integers at all, only double-precision floating-point numbers, and the bitwise operators yield signed 32-bit results.
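For instance, wrapped in a small helper (a sketch; the crc32 name and the test byte mirror the C example above):
const CRCTable = _makeCRC32Table();
function crc32(bytes) {
    return bytes.reduce(
        (crc, byte) => ((crc << 8) >>> 0) ^ CRCTable[(crc >>> 24) ^ byte],
        0) >>> 0;
}
console.log(crc32([0x01]).toString(16)); // "4c11db7", as in the C example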
Note that for Ogg you must write the generated CRC in reverse byte order.

Kadane's algorithm explained

Could someone take me through what is happening here in Kadane's algorithm? I wanted to check my understanding. Here's how I see it:
You loop through the array, and each time you set the ans variable to the largest running sum seen so far; when that running sum goes negative, ans resets to zero.
At the same time, the sum variable is overwritten on each pass through the loop with the max of the previously seen sums and the largest ans so far. Once the loop finishes executing, you have the largest sum seen, i.e. the answer!
var sumArray = function(array) {
    var ans = 0;
    var sum = 0;
    // loop through the array
    for (var i = 0; i < array.length; i++) {
        // this is to make sure that the sum is not negative
        ans = Math.max(0, ans + array[i]);
        // set the sum to be overwritten if something greater appears
        sum = Math.max(sum, ans);
    }
    return sum;
};
Consider tracing the values:
var maximumSubArray = function(array) {
    var ans = 0;
    var sum = 0;
    console.log(ans, sum);
    for (var i = 0; i < array.length; i++) {
        ans = Math.max(0, ans + array[i]);
        sum = Math.max(sum, ans);
        console.log(ans, sum, array[i]);
    }
    console.log(ans, sum);
    return sum;
};

maximumSubArray([-2, 1, -3, 4, -1, 2, 1, -5, 4]);
Prints:
0 0
0 0 -2
1 1 1
0 1 -3
4 4 4
3 4 -1
5 5 2
6 6 1
1 6 -5
5 6 4
5 6
The first column is ans, which is the sum of the current subarray. The second is sum, representing the sum of the greatest seen so far. The third is the element that was just visited. You can see that the contiguous subarray with the largest sum is 4, −1, 2, 1, with sum 6.
The example is from Wikipedia.
The following is a translation of the code given in Wikipedia under the paragraph: "A variation of the problem that does not allow zero-length subarrays to be returned, in the case that the entire array consists of negative numbers, can be solved with the following code:"
[EDIT: Small bug fixed in the code below]
var maximumSubArray = function(array) {
    var ans = array[0];
    var sum = array[0];
    console.log(ans, sum);
    for (var i = 1; i < array.length; i++) {
        ans = Math.max(array[i], ans + array[i]);
        sum = Math.max(sum, ans);
        console.log(ans, sum, array[i]);
    }
    console.log(ans, sum);
    return sum;
};
See that:
> maximumSubArray([-10, -11, -12])
-10 -10
-10 -10 -11
-10 -10 -12
-10 -10
-10
The last number is the expected result. The others are as in the previous example.
This will take care of both situations: a mixed array and an all-negative array.
var maximumSubArray = function(arr) {
    var max_cur = arr[0], max_global = arr[0];
    for (var i = 1; i < arr.length; i++) {
        max_cur = Math.max(arr[i], max_cur + arr[i]);
        max_global = Math.max(max_cur, max_global);
    }
    return max_global;
};

console.log(maximumSubArray([-2, 1, -3, 4, -1, 2, 1, -5, 4])); // 6
console.log(maximumSubArray([-10, -11, -12]));                 // -10
Look at this link; it gives a clear explanation of Kadane's algorithm.
Basically, you have to look for all positive contiguous segments of the array while keeping track of the maximum-sum contiguous segment seen so far. Whenever you find a new positive contiguous segment, check whether the current sum is greater than the maximum so far and update it accordingly.
The following code handles the case when all the numbers are negative.
int maxSubArray(int a[], int size)
{
    int max_so_far = a[0], i;
    int curr_max = a[0];
    for (i = 1; i < size; i++)
    {
        curr_max = max(a[i], curr_max + a[i]);
        max_so_far = max(max_so_far, curr_max);
    }
    return max_so_far;
}
I have made an enhancement to Kadane's algorithm so it handles an all-negative array as well.
int maximumSubSum(int[] array) {
    int currMax = 0;
    int maxSum = 0;
    // to handle an all-negative array: track its largest element
    int max = array[0];
    boolean flag = true;
    for (int i = 0; i < array.length; i++) {
        // while only negative numbers have been seen, keep the maximum of them
        if (array[i] < 0)
            max = Math.max(max, array[i]);
        else
            flag = false;
        currMax = Math.max(0, currMax + array[i]);
        maxSum = Math.max(maxSum, currMax);
    }
    return flag ? max : maxSum;
}
Test cases:
-30 -20 -10 → -10
-10 -20 -30 → -10
-2 -3 4 -1 -2 1 5 -3 → 7
import java.io.*;
import java.util.*;

class Main
{
    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();    // size
        int a[] = new int[n];    // array of size n
        int i;
        for (i = 0; i < n; i++)
        {
            a[i] = sc.nextInt(); // array input
        }
        System.out.println("Largest Sum Contiguous Subarray using Kadane's Algorithm: " + Sum(a));
    }

    static int Sum(int a[])
    {
        int max = Integer.MIN_VALUE, max_ending = 0;
        for (int i = 0; i < a.length; i++)
        {
            max_ending = max_ending + a[i];
            if (max < max_ending)
                max = max_ending; // updating value of max
            if (max_ending < 0)
                max_ending = 0;
        }
        return max;
    }
}
I would prefer a more functional way in JavaScript:
const maximumSubArray = function(array) {
    return array.reduce(([acc, ans], x) => {
        ans = Math.max(0, ans + x);
        return [Math.max(acc, ans), ans];
    }, [array[0], array[0]])[0];
};

console.log(maximumSubArray([-2, 1, -3, 4, -1, 2, 1, -5, 4])); // 6
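Note that, like the zero-initialized versions above, this variant floors the running sum at 0, so for an all-negative array it returns 0 rather than the largest element:
console.log(maximumSubArray([-10, -11, -12])); // 0, not -10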

JavaScript binary operators

I am working on a WebGL project and looking over code from a class example. In one of the loops, this code was given:
var c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0));
The variables i and j go up to a certain value in a for loop. What does this statement mean? Does this make sure that the variable c is in hexadecimal form?
var texSize = 64;
var image1 = new Array();
for (var i = 0; i < texSize; i++)
    image1[i] = new Array();
for (var i = 0; i < texSize; i++)
    for (var j = 0; j < texSize; j++)
        image1[i][j] = new Float32Array(4);
for (var i = 0; i < texSize; i++)
    for (var j = 0; j < texSize; j++) {
        var c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0));
        image1[i][j] = [c, c, c, 1];
    }
SMchrohan's answer is correct. The code is basically making a checkerboard texture.
The ?? & 0x8 means that the expression will be true when bit 3 (counting bits 0, 1, 2, 3 from the least significant end) is set. Bit 3 is set for every other run of 8 values (for 0-7 it's false, for 8-15 it's true, for 16-23 it's false, etc.).
Then the code takes the opposite of that with == 0.
It does this for both i and j.
The ^ means exclusive-or, which is true when the two operands differ (true, false or false, true) and false when they are the same (false, false or true, true). Because ^ is a bitwise operator, both values are first converted to integers, so false becomes 0 and true becomes 1. The two int values then have their bits exclusive-ored, so
0 ^ 0 = 0
1 ^ 0 = 1
0 ^ 1 = 1
1 ^ 1 = 0
That means each entry in image1 is either [0, 0, 0, 1] or [1, 1, 1, 1].
Here's some code to plot it:
var texSize = 64;
var image1 = new Array();
for (var i = 0; i < texSize; i++)
    image1[i] = new Array();
for (var i = 0; i < texSize; i++)
    for (var j = 0; j < texSize; j++)
        image1[i][j] = new Float32Array(4);
for (var i = 0; i < texSize; i++)
    for (var j = 0; j < texSize; j++) {
        var c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0));
        image1[i][j] = [c, c, c, 1];
    }

// let's plot it
var ctx = document.createElement("canvas").getContext("2d");
document.body.appendChild(ctx.canvas);
ctx.canvas.width = texSize;
ctx.canvas.height = texSize;
for (var i = 0; i < texSize; i++)
    for (var j = 0; j < texSize; j++) {
        var c = image1[i][j][0];
        ctx.fillStyle = c ? "red" : "yellow";
        ctx.fillRect(i, j, 1, 1);
    }
Note that the code doesn't appear to make much sense. It says texSize, so it seems to be making a texture, but it's making one Float32Array per pixel on the line that says
image1[i][j] = new Float32Array(4);
and then it's replacing each of those individual Float32Arrays with a native JavaScript array on this line:
image1[i][j] = [c, c, c, 1];
This makes the Float32Array line useless.
On top of that, I have no idea what an array of arrays of 1-pixel Float32Arrays is good for. You can't upload it like that to WebGL.
Normally I'd make one Uint8Array for the entire texture, like this:
var texSize = 64;
var pixels = new Uint8Array(texSize * texSize * 4);
for (var i = 0; i < texSize; i++) {
    for (var j = 0; j < texSize; j++) {
        var c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0));
        var p = c ? 255 : 0;
        var offset = (i * texSize + j) * 4;
        pixels[offset + 0] = p;   // red
        pixels[offset + 1] = p;   // green
        pixels[offset + 2] = p;   // blue
        pixels[offset + 3] = 255; // alpha
    }
}
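A flat array like that can then be uploaded in a single call (a sketch, assuming gl is a WebGLRenderingContext obtained elsewhere):
// Assumes gl is an existing WebGL rendering context.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, texSize, texSize, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);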
Or I'd use the 2D canvas API to make the texture. Without more context, though, I don't know what the final purpose of the code is.
& is bitwise and.
^ is bitwise xor.
0x8 is the hex expression of the integer 8.
c will be 1 if either i or j, BUT NOT BOTH, has a 1 in its 4th bit; the test is whether a bitwise AND with 0x8 (binary 1000) returns 0.
To walk through this a little more:
i & 0x8 will return either 0 (if the value of i has a 0 in bit 4) or 8 (if it has a 1 in that position).
(i & 0x8) == 0 will be either true or false.
(((i & 0x8) == 0) ^ ((j & 0x8) == 0)) will be 1 if exactly one of ((i & 0x8) == 0) and ((j & 0x8) == 0) is true, and 0 if both are false or both are true.
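A few quick console checks of the pieces (values picked here just for illustration):
console.log(5 & 0x8);      // 0 (bit 3 of 5 is clear)
console.log(12 & 0x8);     // 8 (bit 3 of 12 is set)
console.log(true ^ true);  // 0
console.log(true ^ false); // 1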

How to divide a number into integer pieces that are each a multiple of n?

Had a hard time coming up with a concise title for this. I'm sure there are terms for what I want to accomplish, and there is no doubt a common algorithm that does what I'm after; I just don't know about them yet.
I need to break a number up into n pieces that are each a multiple of 50. The number itself is a multiple of 50. Here is an example:
Divide 5,000 by 3 and end up with three numbers that are each multiples of 50:
1,650
1,700
1,650
I also would like the numbers distributed so that they flip back and forth; here is an example with more numbers to illustrate this:
Divide 5,000 by 7 and end up with 7 numbers that are each multiples of 50:
700
750
700
750
700
700
700
Note that in the above example I'm not worried that the extra 50 is not centered in the series; that is, I don't need to have something like this:
700
700
750 <--- note the '50s' are centered
700
750 <--- note the '50s' are centered
700
700
Hopefully I've asked this clearly enough that you understand what I want to accomplish.
Update: Here is the function I'll be using.
var number = 5000;
var n = 7;
var multiple = 50;
var values = getIntDividedIntoMultiple(number, n, multiple);

function getIntDividedIntoMultiple(dividend, divisor, multiple)
{
    var values = [];
    while (dividend > 0 && divisor > 0)
    {
        var a = Math.round(dividend / divisor / multiple) * multiple;
        dividend -= a;
        divisor--;
        values.push(a);
    }
    return values;
}
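A quick check of this function (expected output worked out by stepping through the rounding):
console.log(getIntDividedIntoMultiple(5000, 7, 50));
// [700, 700, 700, 750, 700, 750, 700]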
var number = 5000;
var n = 7;
var values = [];
while (number > 0 && n > 0) {
    var a = Math.floor(number / n / 50) * 50;
    number -= a;
    n--;
    values.push(a);
} // 700 700 700 700 700 750 750
Edit
You can alternate Math.floor and Math.ceil on successive iterations to obtain the desired result:
var i = 0;
while (number > 0 && n > 0) {
    var a;
    if (i % 2 == 0)
        a = Math.floor(number / n / 50) * 50;
    else
        a = Math.ceil(number / n / 50) * 50;
    number -= a;
    n--;
    i++;
    values.push(a);
} // 700 750 700 750 700 700 700
// i - an integer multiple of k
// k - an integer
// n - a valid array length
// Returns an array of length n containing integer multiples of k
// such that the elements sum to i; the array is sorted, contains
// the minimum number of unique elements necessary, and the elements
// chosen are the closest together that satisfy the sum condition.
function f(i, k, n) {
    var minNumber = (((i / k) / n) | 0) * k;
    var maxNumber = minNumber + k;
    var numMax = (i - (minNumber * n)) / k;
    var nums = [];
    for (var j = 0; j < n - numMax; ++j) {
        nums[j] = minNumber;
    }
    for (var j = n - numMax; j < n; ++j) {
        nums[j] = maxNumber;
    }
    return nums;
}
So your second example would be
f(5000, 50, 7)
which yields
[700,700,700,700,700,750,750]
Let a be your starting number, n the required multiple, and k the number of parts you want to divide into.
Suppose that b = a/n.
Now you want to divide b into k close integer parts:
Take k numbers, each equal to b/k (integer division).
Add 1 to the first b%k numbers.
Multiply each number by n.
Example:
a = 5000, n = 50, k = 7.
b = 100.
Starting series: {14, 14, 14, 14, 14, 14, 14}.
Add 1 to the first 2 integers: {15, 15, 14, 14, 14, 14, 14}.
Multiply by 50: {750, 750, 700, 700, 700, 700, 700}.
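A direct translation of this recipe into JavaScript (a sketch; splitEven is a name invented here for illustration):
function splitEven(a, n, k) {
    var b = a / n;                // how many units of size n to distribute
    var base = Math.floor(b / k); // every part gets at least this many units
    var extra = b % k;            // the first b % k parts get one more unit
    var parts = [];
    for (var i = 0; i < k; i++) {
        parts.push((base + (i < extra ? 1 : 0)) * n);
    }
    return parts;
}

console.log(splitEven(5000, 50, 7)); // [750, 750, 700, 700, 700, 700, 700]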
Your problem is the same as dividing a number X into N integer pieces that are all within 1 of each other (just multiply everything by 50 after you've found the result). Doing this is easy: set all N numbers to Floor(X/N), then add 1 to X mod N of them.
I see your problem as basically trying to divide a sum of money into near-equal bundles of bills of a certain denomination.
For example, dividing 10,000 dollars into 7 near-equal bundles of 50-dollar bills.
function getBundles(sum, denomination, count, shuffle)
{
    var p = Math.floor(sum / denomination);
    var q = Math.floor(p / count);
    var r = p - q * count;
    console.log(r + " of " + ((q + 1) * denomination)
        + " and " + (count - r) + " of " + (q * denomination));
    var b = new Array(count);
    for (var i = 0; i < count; i++) {
        b[i] = (r > 0 && (!shuffle || Math.random() < .5 || count - i == r)
            ? (--r, q + 1) : q)
            * denomination;
    }
    return b;
}
// Divide 10,000 dollars into 7 near-equal bundles of 50-dollar bills
var bundles = getBundles(10000, 50, 7, true);
console.log("bundles: " + bundles);
Output:
4 of 1450 and 3 of 1400
bundles: 1400,1450,1450,1400,1450,1400,1450
If the last argument shuffle is true, it distributes the extra amount randomly between the bundles.
Here's my take:
public static void main(String[] args) {
    System.out.println(toList(divide(50, 5000, 3)));
    System.out.println(toList(divide(50, 5000, 7)));
    System.out.println(toList(divide(33, 6600, 7)));
}

private static ArrayList<Integer> toList(int[] args) {
    ArrayList<Integer> list = new ArrayList<Integer>(args.length);
    for (int i : args)
        list.add(i);
    return list;
}

public static int[] divide(int N, int multiplyOfN, int partsCount) {
    if (N <= 0 || multiplyOfN <= N || multiplyOfN % N != 0)
        throw new IllegalArgumentException("Invalid args");
    int factor = multiplyOfN / N;
    if (partsCount > factor)
        throw new IllegalArgumentException("Invalid args");
    int parts[] = new int[partsCount];
    int remainingAdjustments = factor % partsCount;
    int base = ((multiplyOfN / partsCount) / N) * N;
    for (int i = 0; i < partsCount; i++) {
        // prefer adjusting odd positions, but fall back to the remaining
        // positions when more adjustments are left than odd slots
        int slotsLeft = partsCount - i;
        boolean adjust = remainingAdjustments > 0
                && (i % 2 == 1 || remainingAdjustments >= slotsLeft);
        if (adjust)
            remainingAdjustments--;
        parts[i] = adjust ? base + N : base;
    }
    return parts;
}
My algorithm provides an even distribution of the remainder across the parts:
function splitValue(value, parts, multiplicity)
{
    var result = [];
    var currentSum = 0;
    for (var i = 0; i < parts; i++)
    {
        result[i] = Math.round(value * (i + 1) / parts / multiplicity) * multiplicity - currentSum;
        currentSum += result[i];
    }
    return result;
}
For value = 5000, parts = 7, multiplicity = 50 it returns
[ 700, 750, 700, 700, 700, 750, 700 ]
