Convert Float32Array to Int16Array - javascript

I'm looking to convert a Float32Array into an Int16Array.
Here's what I have (I'm not providing the data).
var data = ...; /*new Float32Array();*/
var dataAsInt16Array = new Int16Array(data.length);
for(var i=0; i<data.length; i++){
dataAsInt16Array[i] = parseInt(data[i]*32767,10);
}
I'm not convinced that I'm doing it correctly and looking for some direction.

You can do it directly from the ArrayBuffer
var dataAsInt16Array = new Int16Array(data.buffer);
var f32 = new Float32Array(4);
f32[0] = 0.1, f32[1] = 0.2, f32[2] = 0.3, f32[3] = 0.4;
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]
var i16 = new Int16Array(f32.buffer);
// [-13107, 15820, -13107, 15948, -26214, 16025, -13107, 16076]
// and back again
new Float32Array(i16.buffer);
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]

If you're after converting the raw underlying data you can use the approach Paul S. describes in his answer.
But be aware that you will not get the same numbers, because a Float32Array stores the 32-bit IEEE 754 representation of each value. When a new view such as Int16 is laid over the same buffer, you are looking at that binary representation, not the original numbers.
If you are after the number you will have to convert manually, just modify your code to:
var data = ...; /*new Float32Array();*/
var len = data.length, i = 0;
var dataAsInt16Array = new Int16Array(len);
while(i < len)
dataAsInt16Array[i] = convert(data[i++]);
function convert(n) {
var v = n < 0 ? n * 32768 : n * 32767; // convert in range [-32768, 32767]
return Math.max(-32768, Math.min(32767, v)); // clamp to the int16 range
}
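A quick usage sketch for the convert() helper above, with made-up sample values (out-of-range inputs get clamped):
var samples = new Float32Array([0, 0.5, 1, -1, 1.5]);
var out = new Int16Array(samples.length);
for (var i = 0; i < samples.length; i++) out[i] = convert(samples[i]);
// out -> Int16Array [0, 16383, 32767, -32768, 32767]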

var floatbuffer = audioProcEvent.inputBuffer.getChannelData(0);
var int16Buffer = new Int16Array(floatbuffer.length);
for (var i = 0, len = floatbuffer.length; i < len; i++) {
if (floatbuffer[i] < 0) {
int16Buffer[i] = 0x8000 * floatbuffer[i];
} else {
int16Buffer[i] = 0x7FFF * floatbuffer[i];
}
}

ECMAScript 2015 and onwards has TypedArray.from which converts any typed array (and indeed, any iterable) to the specified typed array format.
So converting a Float32Array to an Int16Array is now as easy as:
const floatArray = new Float32Array()
const intArray = Int16Array.from(floatArray)
...albeit with truncation.
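For example (illustrative values), fractional values simply truncate toward zero, so audio-style data in the [-1, 1] range mostly becomes 0 unless you scale it first:
Int16Array.from(Float32Array.of(0.9, -0.9, 3.7)); // Int16Array [0, 0, 3]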

Combining the answers from robjtede and StuS, here is one for conversion and scaling of a Float32Array to an Int16Array. The scaling maps the range 1 to -1 in the Float32Array to 32767 to -32768 in the Int16Array:
myF32Array=Float32Array.from([1,0.5,0.75,-0.5,-1])
myI16Array=Int16Array.from(myF32Array.map(x => (x>0 ? x*0x7FFF : x*0x8000)))
myNewF32Array=Float32Array.from(Float32Array.from(myI16Array).map(x=>x/0x8000))
console.log(myF32Array)
console.log(myI16Array)
console.log(myNewF32Array)
//output
> Float32Array [1, 0.5, 0.75, -0.5, -1]
> Int16Array [32767, 16383, 24575, -16384, -32768]
> Float32Array [0.999969482421875, 0.499969482421875, 0.749969482421875, -0.5, -1]

It seems that you are trying not only to convert the data format, but also to process the original data and store it in a different format.
The direct way of converting Float32Array to Int16Array is as simple as
var a = new Int16Array(myFloat32Array);
For processing the data you can use the approach that you provided in the question, though there is no real need to call parseInt: assigning a value to an Int16Array element truncates it to an integer anyway.
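To illustrate the difference between the two constructions discussed in this thread (sample values assumed):
var f = new Float32Array([1.7, -2.3, 0.5]);
console.log(new Int16Array(f));        // converts each value: Int16Array [1, -2, 0]
console.log(new Int16Array(f.buffer)); // reinterprets the same 12 bytes as 6 int16 values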

Related

How to encode integer to Uint8Array and back to integer in JavaScript?

I need to add compression to my project and I decided to use the LZJB algorithm, which is fast and the code is small. I found this library https://github.com/copy/jslzjb-k
But the API is not very nice, because to decompress the file you need the input buffer length (Uint8Array is not dynamic, so you need to allocate the output up front). So I want to save the length of the input buffer as the first few bytes of the Uint8Array, so I can extract that value and create the output Uint8Array based on it.
I want the function that returns a Uint8Array from an integer to be generic, and maybe save the length of the bytes into the first byte so you know how much data you need to extract to read the integer. I guess I need to extract those bytes and use some bit shifting to get the original number, but I'm not exactly sure how to do this.
So how can I write a generic function that converts an integer into Uint8Array that can be embedded into a bigger array and then extract that number?
Here are working functions (based on Converting javascript Integer to byte array and back)
function numberToBytes(number) {
// for a fixed number of bytes, use a constant 8 or 4 instead
const len = Math.max(1, Math.ceil(Math.log2(number + 1) / 8));
const byteArray = new Uint8Array(len);
for (let index = 0; index < byteArray.length; index++) {
const byte = number & 0xff;
byteArray[index] = byte;
number = (number - byte) / 256;
}
return byteArray;
}
function bytesToNumber(byteArray) {
let result = 0;
for (let i = byteArray.length - 1; i >= 0; i--) {
result = (result * 256) + byteArray[i];
}
return result;
}
By using const len = Math.max(1, Math.ceil(Math.log2(number + 1) / 8)); the array has only the bytes needed. If you want a fixed size you can use a constant 8 or 4.
In my case, I just saved the length of the bytes in the first byte.
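A minimal sketch of that idea, reusing the two functions above (the helper names are mine, not from the library):
function withLengthPrefix(number) {
  const bytes = numberToBytes(number);
  const out = new Uint8Array(bytes.length + 1);
  out[0] = bytes.length;        // first byte stores how many bytes follow
  out.set(bytes, 1);
  return out;
}
function readLengthPrefixed(arr, offset = 0) {
  const len = arr[offset];      // read the byte count back
  return bytesToNumber(arr.subarray(offset + 1, offset + 1 + len));
}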
General answer
These functions allow any integer (it uses BigInts internally, but can accept Number arguments) to be encoded into, and decoded from, any part of a Uint8Array. It is somewhat overkill, but I wanted to learn how to work with arbitrary-sized integers in JS.
// n can be a bigint or a number
// bs is an optional Uint8Array of sufficient size
// if unspecified, a large-enough Uint8Array will be allocated
// start (optional) is the offset
// where the length-prefixed number will be written
// returns the resulting Uint8Array
function writePrefixedNum(n, bs, start) {
start = start || 0;
let len = start+2; // start, length, and 1 byte min
for (let i=0x100n; i<n; i<<=8n, len ++) /* increment length */;
if (bs === undefined) {
bs = new Uint8Array(len);
} else if (bs.length < len) {
throw `byte array too small; ${bs.length} < ${len}`;
}
let r = BigInt(n);
for (let pos = start+1; pos < len; pos++) {
bs[pos] = Number(r & 0xffn);
r >>= 8n;
}
bs[start] = len-start-1; // write byte-count to start byte
return bs;
}
// bs must be a Uint8Array from where the number will be read
// start (optional, defaults to 0)
// is where the length-prefixed number can be found
// returns a bigint, which can be coerced to int using Number()
function readPrefixedNum(bs, start) {
start = start || 0;
let size = bs[start]; // read byte-count from start byte
let n = 0n;
if (bs.length < start+size) {
throw `byte array too small; ${bs.length} < ${start+size}`;
}
for (let pos = start+size; pos >= start+1; pos --) {
n <<= 8n;
n |= BigInt(bs[pos])
}
return n;
}
function test(n) {
const array = undefined;
const offset = 2;
let bs = writePrefixedNum(n, undefined, offset);
console.log(bs);
let result = readPrefixedNum(bs, offset);
console.log(n, result, "correct?", n == result)
}
test(0)
test(0x1020304050607080n)
test(0x0807060504030201n)
Simple 4-byte answer
This answer encodes 4-byte integers to and from Uint8Arrays.
function intToArray(i) {
return Uint8Array.of(
(i&0xff000000)>>24,
(i&0x00ff0000)>>16,
(i&0x0000ff00)>> 8,
(i&0x000000ff)>> 0);
}
function arrayToInt(bs, start) {
start = start || 0;
const bytes = bs.subarray(start, start+4);
let n = 0;
for (const byte of bytes.values()) {
n = (n<<8)|byte;
}
return n;
}
for (let v of [123, 123<<8, 123<<16, 123<<24]) {
let a = intToArray(v);
let r = arrayToInt(a, 0);
console.log(v, a, r);
}
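One caveat worth noting (my addition, not part of the original answer): because (n<<8)|byte works on signed 32-bit values, arrayToInt returns a negative number when the first byte is 0x80 or higher; apply >>> 0 to the result if you want the unsigned value:
arrayToInt(Uint8Array.of(0xff, 0, 0, 0)) >>> 0; // 4278190080 instead of -16777216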
Posting this one-liner in case it is useful to anyone who is looking to work with numbers below 2^53. It needs no constants or values other than the input to be defined. Note that the shift step uses integer division rather than >>, because JavaScript's bitwise operators truncate their operands to 32 bits:
export const encodeUvarint = (n: number): Uint8Array => n >= 0x80
? Uint8Array.from([(n & 0x7f) | 0x80, ...encodeUvarint(Math.floor(n / 0x80))])
: Uint8Array.from([n & 0xff]);
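For completeness, a possible matching decoder (a sketch of my own, not part of the original answer):
const decodeUvarint = (bytes) => {
  let result = 0;
  let scale = 1;
  for (const b of bytes) {
    result += (b & 0x7f) * scale; // multiply instead of << to stay correct above 2^31
    if ((b & 0x80) === 0) break;  // the last byte has the continuation bit clear
    scale *= 0x80;
  }
  return result;
};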

Javascript - parse string to long

I have a working script in Python doing string-to-integer conversion based on a specified radix, using long(..., 16):
modulus=public_key["n"]
modulusDecoded = long(public_key["n"], 16)
which prints:
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
and
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
respectively.
This looks like a hex-to-decimal conversion.
I tried to get the same result in JS, but parseInt() and parseFloat() produce something completely different. On top of that, JavaScript seems not to like the letters in the input string and sometimes returns NaN.
Could anyone please provide a function / guidance on how to get the same functionality as in the Python script?
Numbers in JavaScript are floating point, so they always lose precision after a certain digit. To have unlimited numbers one could instead use an array of digits from 0 to 9, which has an unlimited range. To do so based on the hex string input, I do a hex-to-int array conversion, then I use the double dabble algorithm to convert the array to BCD. That can be printed easily:
const hexToArray = arr => arr.split("").map(n => parseInt(n,16));
const doubleDabble = arr => {
var l = arr.length;
for( var b = l * 4; b--;){
//add && leftshift
const overflow = arr.reduceRight((carry,n,i) => {
//apply the >4 +3, then leftshift
var shifted = ((i < (arr.length - l ) && n>4)?n+3:n ) << 1;
//just take the right four bits and add the eventual carry value
arr[i] = (shifted & 0b1111) | carry;
//carry on
return shifted > 0b1111;
}, 0);
// we've exceeded the current array, lets extend it:
if(overflow) arr.unshift(overflow);
}
return arr.slice(0,-l);
};
const arr = hexToArray("8079d7");
const result = doubleDabble(arr);
console.log(result.join(""));
Using the built-in parseInt API, you can get up to 100 digits of output on Firefox and 20 digits on Chrome:
a = parseInt('8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873', 16)
a.toPrecision(110)
> Uncaught RangeError: toPrecision() argument must be between 1 and 21
# Chrome
a.toPrecision(20)
"9.0218878289834615508e+307"
# Firefox
a.toPrecision(100)
"9.021887828983461550807409292694387726882781812072572899692574101215517323445643340153182035092932819e+307"
From the ECMAScript Spec,
Let p be ? ToInteger(precision).
...
If p < 1 or p > 100, throw a RangeError exception.
As described in this answer, JavaScript numbers cannot represent integers larger than 9.007199254740991e+15 without loss of precision.
Working with larger integers in JavaScript requires a BigInt library or other special-purpose code, and large integers will then usually be represented as strings or arrays.
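As a side note, engines with native BigInt support (ES2020 and later, so newer than most of these answers) can do the hex-to-decimal conversion directly; here hexString stands for the modulus shown above, without a 0x prefix:
const decimal = BigInt('0x' + hexString).toString(10);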
Re-using code from this answer helps to convert the hexadecimal number representation
8079d7ae567dd2c02dadd1068843136314fa3893fa1fb1ab331682c6a85cad62b208d66c9974bbbb15d52676fd9907efb158c284e96f5c7a4914fd927b7326c40efa14922c68402d05ff53b0e4ccda90bbee5e6c473613e836e2c79da1072e366d0d50933327e77651b6984ddbac1fdecf1fd8fa17e0f0646af662a8065bd873
to its decimal representation
90218878289834622370514047239437874345637539049004160177768047103383444023879266805615186962965710608753937825108429415800005684101842952518531920633990402573136677611127418094912644368840442620417414685225340199872975797295511475162170060618806831021437109054760851445152320452665575790602072479287289305203
as demonstrated in the following snippet:
function parseBigInt(bigint, base) {
//convert bigint string to array of digit values
for (var values = [], i = 0; i < bigint.length; i++) {
values[i] = parseInt(bigint.charAt(i), base);
}
return values;
}
function formatBigInt(values, base) {
//convert array of digit values to bigint string
for (var bigint = '', i = 0; i < values.length; i++) {
bigint += values[i].toString(base);
}
return bigint;
}
function convertBase(bigint, inputBase, outputBase) {
//takes a bigint string and converts to different base
var inputValues = parseBigInt(bigint, inputBase),
outputValues = [], //output array, little-endian/lsd order
remainder,
len = inputValues.length,
pos = 0,
i;
while (pos < len) { //while digits left in input array
remainder = 0; //set remainder to 0
for (i = pos; i < len; i++) {
//long integer division of input values divided by output base
//remainder is added to output array
remainder = inputValues[i] + remainder * inputBase;
inputValues[i] = Math.floor(remainder / outputBase);
remainder -= inputValues[i] * outputBase;
if (inputValues[i] == 0 && i == pos) {
pos++;
}
}
outputValues.push(remainder);
}
outputValues.reverse(); //transform to big-endian/msd order
return formatBigInt(outputValues, outputBase);
}
var largeNumber =
'8079d7ae567dd2c02dadd1068843136314fa389'+
'3fa1fb1ab331682c6a85cad62b208d66c9974bb'+
'bb15d52676fd9907efb158c284e96f5c7a4914f'+
'd927b7326c40efa14922c68402d05ff53b0e4cc'+
'da90bbee5e6c473613e836e2c79da1072e366d0'+
'd50933327e77651b6984ddbac1fdecf1fd8fa17'+
'e0f0646af662a8065bd873';
//convert largeNumber from base 16 to base 10
var largeIntDecimal = convertBase(largeNumber, 16, 10);
//show decimal result in console:
console.log(largeIntDecimal);
//check that it matches the expected output:
console.log('Matches expected:',
largeIntDecimal === '90218878289834622370514047239437874345637539049'+
'0041601777680471033834440238792668056151869629657106087539378251084294158000056'+
'8410184295251853192063399040257313667761112741809491264436884044262041741468522'+
'5340199872975797295511475162170060618806831021437109054760851445152320452665575'+
'790602072479287289305203'
);
//check that conversion and back-conversion results in the original number
console.log('Converts back:',
convertBase(convertBase(largeNumber, 16, 10), 10, 16) === largeNumber
);

Using Atomics and Float32Array in JavaScript

The Atomics.store/load methods (and others? didn't look) do not support Float32Array.
I read that this is to be consistent with the fact that it also doesn't support Float64Array for compatibility reasons (some computers don't support it).
Aside from the fact that I think this is stupid, does this also mean I must cast every float I want to use into an unsigned int?
Not only will this result in ugly code, it will also make it slower.
E.g.:
let a = new Float32Array(1); // Want the result here
Atomics.store(a, 0, 0.5); // Oops, can't use Float32Array
let b = new Float32Array(1); // Want the result here
let uint = new Uint32Array(1);
let float = new Float32Array(uint.buffer);
float[0] = 0.5;
Atomics.store(b, 0, uint[0]);
As you discovered, the Atomics methods don't support floating point values as arguments:
Atomics.store(typedArray, index, value)
typedArray
A shared integer typed array. One of Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array,
or Uint32Array.
You can read the IEEE 754 representation as an integer from the underlying buffer, as you do in the example code you posted:
var buffer = new ArrayBuffer(4); // common buffer
var float32 = new Float32Array(buffer); // floating point
var uint32 = new Uint32Array(buffer); // IEEE754 representation
float32[0] = 0.5;
console.log("0x" + uint32[0].toString(16));
uint32[0] = 0x3f000000; /// IEEE754 32-bit representation of 0.5
console.log(float32[0]);
Or you can use fixed-point numbers if the accuracy isn't important. The accuracy is of course determined by the magnitude of the scale factor.
Scale up when storing:
Atomics.store(a, 0, Math.round(0.5 * 100)); // 0.5 -> 50 (max two decimals with 100)
read back and scale down:
value = Atomics.load(a, 0) * 0.01; // 50 -> 0.5
The other answer didn't help me much and it took a while for me to figure out a solution, but here's how I solved the same issue:
var data = new SharedArrayBuffer(LEN * 8);
var data_float = new Float32Array(data);
var data_int = new Uint32Array(data);
data_float[0] = 2.3; //some pre-existing data
var tmp = new ArrayBuffer(8);
var tmp_float = new Float32Array(tmp);
var tmp_int = new Uint32Array(tmp);
tmp_int[0] = Atomics.load(data_int, 0);
tmp_float[0] += 1.1; //some math
Atomics.store(data_int, 0, tmp_int[0]);
console.log(data_float[0]);
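For reference, here is a small sketch along the same lines (the helper names and sizes are mine, not from either answer): wrapping the view juggling in a pair of functions so calling code can store and load floats atomically:
const shared = new SharedArrayBuffer(4 * 16);     // room for 16 float slots (example size)
const sharedInts = new Int32Array(shared);        // integer view for Atomics
const scratch = new DataView(new ArrayBuffer(4)); // per-agent scratch, not shared

function atomicStoreFloat32(index, value) {
  scratch.setFloat32(0, value);
  Atomics.store(sharedInts, index, scratch.getInt32(0));
}
function atomicLoadFloat32(index) {
  scratch.setInt32(0, Atomics.load(sharedInts, index));
  return scratch.getFloat32(0);
}

atomicStoreFloat32(0, 0.5);
console.log(atomicLoadFloat32(0)); // 0.5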

C# equivalent of a javascript TypedArray created from an existing array's .buffer?

I am trying to convert a JavaScript function to a C# script. One of the things the JavaScript version does is create a Uint32Array from an existing Float32Array's .buffer.
Does anyone know what the equivalent of this would be in C#? I am NOT talking about what Float32Array and Uint32Array are in C#; I am talking about the way that the JavaScript typed array gets initialized using the buffer from the dst variable (see code)... https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray It's important because other arrays get initialized later on after this function using (for example) var dstUint32 = new Uint32Array(dst.buffer)...
This is the code (src is an existing Float32Array - see a bit of the initial values for this below)...
compileClassifier = function(src, width, scale, dst) {
width += 1;
if (!dst) dst = new Float32Array(src.length);
var dstUint32 = new Uint32Array(dst.buffer);
dstUint32[0] = src[0];
dstUint32[1] = src[1];
var dstIndex = 1;
for (var srcIndex = 1, iEnd = src.length - 1; srcIndex < iEnd; ) {
dst[++dstIndex] = src[++srcIndex];
var numComplexClassifiers = dstUint32[++dstIndex] = src[++srcIndex];
for (var j = 0, jEnd = numComplexClassifiers; j < jEnd; ++j) {
var tilted = dst[++dstIndex] = src[++srcIndex];
var numFeaturesTimes3 = dstUint32[++dstIndex] = src[++srcIndex] * 3;
if (tilted) {
for (var kEnd = dstIndex + numFeaturesTimes3; dstIndex < kEnd; ) {
dstUint32[++dstIndex] = src[++srcIndex] + src[++srcIndex] * width;
dstUint32[++dstIndex] = src[++srcIndex] * (width + 1) + ((src[++srcIndex] * (width - 1)) << 16);
dst[++dstIndex] = src[++srcIndex];
}
} else {
for (var kEnd = dstIndex + numFeaturesTimes3; dstIndex < kEnd; ) {
dstUint32[++dstIndex] = src[++srcIndex] + src[++srcIndex] * width;
dstUint32[++dstIndex] = src[++srcIndex] + ((src[++srcIndex] * width) << 16);
dst[++dstIndex] = src[++srcIndex];
}
}
var inverseClassifierThreshold = 1 / src[++srcIndex];
for (var k = 0; k < numFeaturesTimes3; ) {
dst[dstIndex - k] *= inverseClassifierThreshold;
k += 3;
}
if (inverseClassifierThreshold < 0) {
dst[dstIndex + 2] = src[++srcIndex];
dst[dstIndex + 1] = src[++srcIndex];
dstIndex += 2;
} else {
dst[++dstIndex] = src[++srcIndex];
dst[++dstIndex] = src[++srcIndex];
}
}
}
dst = dst.subarray(0, dstIndex + 1);
return dst;
}
The src variable in there is created from this (this is a very cut-down version; the full version is thousands of numbers long):
var classifier = [20,20,0.8226894140243530,3,0,2,3,7,14,4,-1.,3,9,14,2,2.,4.0141958743333817e-003,0.0337941907346249,0.8378106951713562,0,2,1,2,18,4,-1.,7,2,6,4,3.,0.0151513395830989,0.1514132022857666,0.7488812208175659,0,2,1,7,15,9,-1.,1,10,15,3,3.,4.2109931819140911e-003,0.0900492817163467,0.6374819874763489,6.9566087722778320,16,0,2,5,6,2,6,-1.,5,9,2,3,2.,1.6227109590545297e-003,0.0693085864186287];
src = new Float32Array(classifier);
This is what gets logged in the console for dstUint32 and dst respectively with those numbers set as src (the function doesn't work with just those numbers since, as I say, the actual array is far, far longer). It's interesting that almost none of the numbers match the original array...
[20, 20, 1062378438, 3, 0, 6, 1641, 61341710, 3279494571, 2109, 30670862, 1140399531, 1024093127, 1062632131, 0, 6, 469, 61341714, 3263430756, 475, 61341702, 1128661142, 1041959952, 1061140142, 0, 6, 1639, 138018831, 3278731586, 2341, 46006287, 1144134386, 1035496386, 1059271173, 1088330890, 16, 0, 6, 1409, 92012546, 3290042412, 2111, 46006274, 1150947372, 1032712617, 4294967295, 4294967295, 0, 4294967295, 4294967295, 4294967295, 0, 4294967295, 4294967295, 4294967295, 0, 4294967295, 4294967295, 4294967295, 0, 4294967295, 4294967295, 4294967295, 0, 4294967295]
[2.802596928649634e-44, 2.802596928649634e-44, 0.822689414024353, 4.203895392974451e-45, 0, 8.407790785948902e-45, 2.2995307799570248e-42, 9.874165102541455e-37, -249.1158905029297, 2.955338461261039e-42, 7.787657921469055e-38, 498.2317810058594, 0.03379419073462486, 0.8378106951713562, 0, 8.407790785948902e-45, 6.572089797683392e-43, 9.874168689865524e-37, -66.00076293945312, 6.656167705542881e-43, 9.874157927893318e-37, 198.00228881835938, 0.1514132022857666, 0.7488812208175659, 0, 8.407790785948902e-45, 2.2967281830283752e-42, 5.597240788537616e-34, -237.47366333007812, 3.280439704984397e-42, 2.7918024463192472e-37, 712.4210205078125, 0.09004928171634674, 0.6374819874763489, 6.956608772277832, 2.2420775429197073e-44, 0, 8.407790785948902e-45, 1.9744295362336673e-42, 1.1848984491218286e-35, -616.252685546875, 2.958141058189689e-42, 2.7917995316184414e-37, 1232.50537109375, 0.06930858641862869, NaN, NaN, 0, NaN, NaN, NaN, 0, NaN, NaN, NaN, 0, NaN, NaN, NaN, 0, NaN, NaN, NaN, 0, NaN]
So you can see here that the code
dstUint32[0] = src[0];
dstUint32[1] = src[1];
appears to have set BOTH dstUint32[0]/dstUint32[1] AND dst[0]/dst[1] to something, just that in dst[0]/dst[1] the value is getting logged as 2.802596928649634e-44 rather than 20?
The equivalent of a js UInt32Array is UInt32[].
The equivalent of a js Float32Array is float[].
The equivalent of an ArrayBuffer is byte[].
JavaScript typed arrays are views onto the array buffer. Thus the element in the UInt32Array is not cast to a float, rather the contents of the physical storage - the bits - are reinterpreted as a float.
To do this in C# you need to do one of a couple of things.
If you care about sharing the arraybuffer
If you care about sharing the arraybuffer, i.e. you need a change in one array to show up in the other array immediately, then you need to implement the whole thing. C# doesn't offer this out of the box. In order to achieve it you will need to use the InteropServices.Marshal facilities. These will allow you to get access to the underlying memory.
If you don't care about sharing the arraybuffer
If you just want to convert in one direction, then maybe later convert back again, but you don't need changes to be visible immediately, or to use a shared ArrayBuffer, you have an easier job.
Create a MemoryStream,
write the Float32 elements from the array
read the UInt32 elements
Something like this:
static byte[] ReinterpretAsByteArray(UInt32[] a)
{
using (MemoryStream s = new MemoryStream())
{
using (BinaryWriter w = new BinaryWriter(s, Encoding.Unicode, true))
{
for (int i = 0; i < a.Length; i++)
{
w.Write(a[i]);
}
}
return s.ToArray();
}
}
static float[] ReinterpretAsFloatArray(byte[] b) {
using (MemoryStream s = new MemoryStream(b, false)) {
using (BinaryReader r = new BinaryReader(s, Encoding.Unicode, true))
{
float[] f = new float[b.Length / 4]; // 4 = sizeof float
for (int i = 0; i < f.Length; i++)
{
f[i] = r.ReadSingle();
}
return f;
}
}
}
You will need to write a similar function to convert a byte array to a Uint32 array, and to convert a float array to a byte array, but that is a matter of changing two lines.

JavaScript calculate hashcode from real number and integer number

Hi there, I need a function to calculate a unique integer number from a number (a double-precision real number) and an integer.
To explain: I am developing a GIS application in JavaScript and I am working with complex vector objects like polygons (arrays of point objects with two coordinates per ring) and lines (arrays of points). I need a fast algorithm to recognize that an element has been changed, and it must be really fast because my vector objects are collections of thousands of points. In C# I calculate the hash code from the coordinates using the bitwise XOR operation.
But JavaScript converts all operands in bitwise operations to integers, and I need to convert the double-precision value to an integer the C# way (from its binary representation) before applying the bitwise operation. In Reflector I see that C# calculates the hash code for a double like this, and I need this function in JavaScript, as fast as possible:
public override unsafe int GetHashCode() //from System.Double
{
double num = this;
if (num == 0.0)
{
return 0;
}
long num2 = *((long*) &num);
return (((int) num2) ^ ((int) (num2 >> 32)));
}
Example:
var rotation = function (n) {
n = (n >> 1) | ((n & 0x001) << 31);
return n;
}
var x: number = 1;
var y: number = 5;
var hash = x ^ rotation(y); // result is -2147483645
var x1: number = 1.1;
var y1: number = 5;
var hash1 = x1 ^ rotation(y1); // result is -2147483645
The example result is not correct: hash == hash1.
Example 2: Using toString gives the correct result, but calculating the hash from a string is too complicated and I think it is not fast enough.
var rotation = function (n) {
n = (n >> 1) | ((n & 0x001) << 31);
return n;
}
var GetHashCodeString = function(str: string): number {
var hash = 0, i, l, ch;
if (str.length == 0) return hash;
for (i = 0, l = str.length; i < l; i++) {
ch = str.charCodeAt(i);
hash = ((hash << 5) - hash) + ch;
hash |= 0; // Convert to 32bit integer
}
return hash;
}
var x: number = 1;
var y: number = 5;
var hash = GetHashCodeString(x.toString()) ^ rotation(GetHashCodeString(y.toString()));
//result is -2147483605
var x1: number = 1.1;
var y1: number = 5;
var hash1 = GetHashCodeString(x1.toString()) ^ rotation(GetHashCodeString(y1.toString()));
//result is -2147435090
The Example 2 result is correct: hash != hash1.
Is there some faster way than converting the number to a string and then calculating a hash from each character? My objects are very large and it would take a lot of time and operations that way...
I have tried to do it using TypedArrays but have not been successful yet.
Thanks very much for your help
Hi there, I tried using TypedArrays to calculate the hash code from a number and the result is interesting: in IE the performance is 4x better, in Chrome 2x, and in Firefox this approach is about equal to the string version...
var GetHashCodeNumber = function (n: number): number {
//create 8 byte array buffer number in js is 64bit
var arr = new ArrayBuffer(8);
//create view to array buffer
var dv = new DataView(arr);
//set number to buffer as 64 bit float
dv.setFloat64(0, n);
//now get first 32 bit from array and convert it to integer
// from offset 0
var c = dv.getInt32(0);
//now get next 32 bit from array and convert it to integer
//from offset 4
var d = dv.getInt32(4);
//XOR first end second integer numbers
return c ^ d;
}
I think this can be useful for someone
EDIT: using one buffer and a DataView is faster!
Here is a faster way to do this in JavaScript.
const kBuf = new ArrayBuffer(8);
const kBufAsF64 = new Float64Array(kBuf);
const kBufAsI32 = new Int32Array(kBuf);
function hashNumber(n) {
// Remove this `if` if you want 0 and -0 to hash to different values.
if (~~n === n) {
return ~~n;
}
kBufAsF64[0] = n;
return kBufAsI32[0] ^ kBufAsI32[1];
}
It's 250x faster than the DataView approach: see benchmark.
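As a usage sketch (my own illustration, not part of the answer), hashNumber() above can be combined with a simple multiply-and-add mix to hash a whole array of points, which is roughly what the GIS case in the question needs:
function hashPoints(points) {
  let h = 17;
  for (const p of points) {
    h = (Math.imul(h, 31) + hashNumber(p.x)) | 0;
    h = (Math.imul(h, 31) + hashNumber(p.y)) | 0;
  }
  return h;
}

console.log(hashPoints([{ x: 1.1, y: 5 }, { x: 2.2, y: 6 }]));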
I looked up some hashing libraries to see how they did it: xxhashjs, jshashes, etc.
Most seem to take a string or an ArrayBuffer, and also depend on UINT32-like functionality. This is equivalent to you needing a binary representation of the double (from your C# example). Notably, I did not find any solution that handled more unusual types, other than in another (unanswered) question.
His solution uses a method proposed here, which converts it to various typed arrays. This is most likely what you want, and the fastest accurate solution (I think).
I highly recommend that you structure your code to traverse objects/arrays as desired, and also benchmark the solution to see how comparable it is to your existing methods (the non-working one and the string one).
