Difference between Float32Array and Int32Array - javascript

What's the difference between this:
var buffer = new ArrayBuffer(4);
var view = new Float32Array(buffer);
view[0] = 1;
and this
var buffer = new ArrayBuffer(4);
var view = new Int32Array(buffer);
view[0] = 1;
I'm wondering about the difference between Float32Array and Int32Array. Do they translate 1 differently into binary representation?

Yes, they do. Viewed as a single 32-bit value (byte order aside), an element of an Int32Array set to 1 has 31 zero bits followed by a single one bit.
The corresponding element of a Float32Array has the bit pattern 00111111100000000000000000000000 (the first bit is the sign, the following 8 the exponent, and the final 23 the significand).
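To see this concretely, you can overlay a Uint32Array view on the same 4-byte buffer and print its bits (a small sketch, not from the original answers; the variable names are just for illustration):
// Inspect the raw bits of the same buffer under integer and float views.
var buf = new ArrayBuffer(4);
var asFloat = new Float32Array(buf);
var asInt = new Int32Array(buf);
var asUint = new Uint32Array(buf);
asInt[0] = 1;
console.log(asUint[0].toString(2).padStart(32, '0'));
// "00000000000000000000000000000001"
asFloat[0] = 1;
console.log(asUint[0].toString(2).padStart(32, '0'));
// "00111111100000000000000000000000"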

A Float32Array represents the values as 32-bit floating-point numbers (numbers that can have a fractional part), while an Int32Array represents them as 32-bit signed integers.
This example demonstrates the difference:
const floatArray = new Float32Array(1);
const intArray = new Int32Array(1);
floatArray[0] = 1.5;
intArray[0] = 1.5;
console.log(floatArray[0]); // = 1.5
console.log(intArray[0]); // = 1

Related

Generate a 256 bit random number

I need to generate a 256-bit random unsigned number as a decimal string (with nearly zero probability of collision with anything generated earlier).
How to do this in JavaScript (in a browser)?
You can use the newer JavaScript features (crypto.getRandomValues and BigInt) if you are not too constrained by browser support:
function rnd256() {
  const bytes = new Uint8Array(32);
  // load cryptographically random bytes into array
  window.crypto.getRandomValues(bytes);
  // convert byte array to hexadecimal representation
  const bytesHex = bytes.reduce((o, v) => o + ('00' + v.toString(16)).slice(-2), '');
  // convert hexadecimal value to a decimal string
  return BigInt('0x' + bytesHex).toString(10);
}
console.log( rnd256() );
This code uses a loop to build a 256-character string of random binary digits, then converts it to a BigInt. You can then convert it to a string, or do whatever else you please.
var temp = '0b';
for (let i = 0; i < 256; i++) {
  temp += Math.round(Math.random());
}
const randomNum = BigInt(temp);
console.log(randomNum.toString());
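As a quick sanity check (a small sketch, not part of the original answer), the result always fits in 256 bits:
// The BigInt built from 256 binary digits is always below 2^256.
console.log(randomNum < 2n ** 256n);              // true
console.log(randomNum.toString(2).length <= 256); // true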

Using Atomics and Float32Array in JavaScript

The Atomics.store/load methods (and others? didn't look) do not support Float32Array.
I read that this is to be consistent with the fact that it also doesn't support Float64Array for compatibility reasons (some computers don't support it).
Aside from the fact that I think this is stupid, does this also mean I must cast every float I want to use into an unsigned int?
Not only will this result in ugly code, it will also make it slower.
E.g.:
let a = new Float32Array(1); // Want the result here
Atomics.store(a, 0, 0.5); // Oops, can't use Float32Array
let b = new Float32Array(1); // Want the result here
let uint = new Uint32Array(1);
let float = new Float32Array(uint.buffer);
float[0] = 0.5;
Atomics.store(b, 0, uint[0]);
As you discovered, the Atomics methods don't support floating-point typed arrays as arguments:
Atomics.store(typedArray, index, value)
typedArray
A shared integer typed array. One of Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array,
or Uint32Array.
You can read the IEEE 754 representation as an integer from the underlying buffer, as you do in the example code you posted:
var buffer = new ArrayBuffer(4); // common buffer
var float32 = new Float32Array(buffer); // floating point
var uint32 = new Uint32Array(buffer); // IEEE754 representation
float32[0] = 0.5;
console.log("0x" + uint32[0].toString(16));
uint32[0] = 0x3f000000; // IEEE754 32-bit representation of 0.5
console.log(float32[0]);
or you can use fixed-point numbers if full floating-point accuracy isn't important. The precision is of course determined by the scale factor you choose.
Scale up when storing:
Atomics.store(a, 0, Math.round(0.5 * 100)); // 0.5 -> 50 (max two decimals with 100)
Read back and scale down:
value = Atomics.load(a, 0) * 0.01; // 50 -> 0.5
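Putting the two lines together, here is a minimal self-contained sketch (the shared Int32Array and the scale factor of 100 are assumptions for illustration):
// Store floats in a shared Int32Array as fixed-point values (scaled by 100).
const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const a = new Int32Array(sab);
Atomics.store(a, 0, Math.round(0.5 * 100)); // 0.5 stored as 50
const value = Atomics.load(a, 0) * 0.01;    // read back: 50 -> 0.5
console.log(value); // 0.5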
The other answer didn't help me much and it took a while to figure out a solution, but here's how I solved the same issue:
var data = new SharedArrayBuffer(LEN * 8); // LEN = number of slots, defined elsewhere
var data_float = new Float32Array(data);
var data_int = new Uint32Array(data);
data_float[0] = 2.3; //some pre-existing data
var tmp = new ArrayBuffer(8);
var tmp_float = new Float32Array(tmp);
var tmp_int = new Uint32Array(tmp);
tmp_int[0] = Atomics.load(data_int, 0);
tmp_float[0] += 1.1; //some math
Atomics.store(data_int, 0, tmp_int[0]);
console.log(data_float[0]);

JavaScript calculate hashcode from real number and integer number

Hi there, I need a function to calculate a hash (an integer) from a real (double-precision) number and an integer.
To explain: I am developing a GIS application in JavaScript and I am working with complex vector objects such as polygons (arrays of point objects, each with two coordinates, forming a ring) and lines (arrays of points). I need a fast algorithm to recognize that an element has changed, and it must be really fast because my vector objects are collections of thousands of points. In C# I calculate a hash code from the coordinates using a bitwise XOR.
But JavaScript converts all operands of bitwise operations to 32-bit integers, whereas I need to reinterpret the double-precision value's binary representation as an integer (the way C# does, bit for bit) before applying the bitwise operation. In Reflector I can see that C# calculates the hash code for a double like this, and I need this function in JavaScript, as fast as possible:
public override unsafe int GetHashCode() // from System.Double
{
    double num = this;
    if (num == 0.0)
    {
        return 0;
    }
    long num2 = *((long*) &num);
    return (((int) num2) ^ ((int) (num2 >> 32)));
}
Example:
var rotation = function (n) {
    n = (n >> 1) | ((n & 0x001) << 31);
    return n;
}
var x: number = 1;
var y: number = 5;
var hash = x ^ rotation(y); // result is -2147483645
var x1: number = 1.1;
var y1: number = 5;
var hash1 = x1 ^ rotation(y1); // result is -2147483645
The example result is not correct: hash == hash1.
Example 2: Using toString gives a correct result, but calculating the hash from a string is too complicated and I think it is not fast enough.
var rotation = function (n) {
    n = (n >> 1) | ((n & 0x001) << 31);
    return n;
}
var GetHashCodeString = function(str: string): number {
    var hash = 0, i, l, ch;
    if (str.length == 0) return hash;
    for (i = 0, l = str.length; i < l; i++) {
        ch = str.charCodeAt(i);
        hash = ((hash << 5) - hash) + ch;
        hash |= 0; // Convert to 32bit integer
    }
    return hash;
}
var x: number = 1;
var y: number = 5;
var hash = GetHashCodeString(x.toString()) ^ rotation(GetHashCodeString(y.toString()));
//result is -2147483605
var x1: number = 1.1;
var y1: number = 5;
var hash1 = GetHashCodeString(x1.toString()) ^ rotation(GetHashCodeString(y1.toString()));
//result is -2147435090
Example 2's result is correct: hash != hash1.
Is there some faster way than converting the number to a string and then calculating the hash from each character? My objects are very large, so this approach takes a lot of time and operations.
I have tried to do it using TypedArrays but so far I have not been successful.
Thanks very much for your help
Hi there, I tried using TypedArrays to calculate the hash code from a number and the result is interesting. In IE the performance is 4x better, in Chrome 2x better, and in Firefox this approach is about equal to the string version...
var GetHashCodeNumber = function (n: number): number {
    // create an 8-byte array buffer; a number in JS is a 64-bit float
    var arr = new ArrayBuffer(8);
    // create a view onto the array buffer
    var dv = new DataView(arr);
    // write the number into the buffer as a 64-bit float
    dv.setFloat64(0, n);
    // read the first 32 bits back as an integer (offset 0)
    var c = dv.getInt32(0);
    // read the next 32 bits back as an integer (offset 4)
    var d = dv.getInt32(4);
    // XOR the first and second integers
    return c ^ d;
}
I think this can be useful for someone
EDIT: reusing one buffer and DataView is faster!
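The edit doesn't show the reused-buffer variant, so here is a minimal sketch of what it presumably looks like (assumption: one module-level buffer and DataView shared across calls; the name GetHashCodeNumberFast is just for illustration):
// Allocate the buffer and view once, then reuse them on every call.
var hashBuf = new ArrayBuffer(8);
var hashView = new DataView(hashBuf);
var GetHashCodeNumberFast = function (n) {
    hashView.setFloat64(0, n);
    return hashView.getInt32(0) ^ hashView.getInt32(4);
}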
Here is a faster way to do this in JavaScript.
const kBuf = new ArrayBuffer(8);
const kBufAsF64 = new Float64Array(kBuf);
const kBufAsI32 = new Int32Array(kBuf);
function hashNumber(n) {
// Remove this `if` if you want 0 and -0 to hash to different values.
if (~~n === n) {
return ~~n;
}
kBufAsF64[0] = n;
return kBufAsI32[0] ^ kBufAsI32[1];
}
It's 250x faster than the DataView approach: see benchmark.
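As a usage sketch (combining it with the rotation helper from the question; this snippet is an illustration, not part of the original answer):
// Hypothetical usage: hash an (x, y) coordinate pair as in the question.
const rotation = n => (n >> 1) | ((n & 0x001) << 31);
console.log(hashNumber(1) ^ rotation(hashNumber(5)));
console.log(hashNumber(1.1) ^ rotation(hashNumber(5))); // differs from the line above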
I looked up some hashing libraries to see how they did it: xxhashjs, jshashes, etc.
Most seem to take a string or an ArrayBuffer, and depend on Uint32-like functionality; that corresponds to your need for a binary representation of the double (as in your C# example). Notably, I did not find any solution that handled more exotic types, other than in another (unanswered) question.
That solution uses a method proposed here, which converts the number into various typed arrays. This is most likely what you want, and (I think) the fastest accurate solution.
I highly recommend that you structure your code to traverse objects/arrays as desired, and also benchmark the solution to see how comparable it is to your existing methods (the non-working one and the string one).

Convert Float32Array to Int16Array

I'm looking to convert a Float32Array into an Int16Array.
Here's what I have (I'm not providing the data).
var data = ...; /*new Float32Array();*/
var dataAsInt16Array = new Int16Array(data.length);
for (var i = 0; i < data.length; i++) {
    dataAsInt16Array[i] = parseInt(data[i] * 32767, 10);
}
I'm not convinced that I'm doing it correctly and looking for some direction.
You can do it directly from the ArrayBuffer
var dataAsInt16Array = new Int16Array(data.buffer);
var f32 = new Float32Array(4);
f32[0] = 0.1, f32[1] = 0.2, f32[2] = 0.3, f32[3] = 0.4;
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]
var i16 = new Int16Array(f32.buffer);
// [-13107, 15820, -13107, 15948, -26214, 16025, -13107, 16076]
// and back again
new Float32Array(i16.buffer);
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]
If you're after converting the raw underlying data you can use the approach Paul S. is describing in his answer.
But be aware that you will not get the same numbers, because in the Float32 case you are dealing with the 32-bit IEEE 754 representation of each number. When a new view such as Int16 is laid over it, you are looking at the binary representation of that data, not the original numbers.
If you are after the number you will have to convert manually, just modify your code to:
var data = ...; /*new Float32Array();*/
var len = data.length, i = 0;
var dataAsInt16Array = new Int16Array(len);
while (i < len)
    dataAsInt16Array[i] = convert(data[i++]);
function convert(n) {
    var v = n < 0 ? n * 32768 : n * 32767;       // convert into range [-32768, 32767]
    return Math.max(-32768, Math.min(32767, v)); // clamp
}
var floatbuffer = audioProcEvent.inputBuffer.getChannelData(0);
var int16Buffer = new Int16Array(floatbuffer.length);
for (var i = 0, len = floatbuffer.length; i < len; i++) {
    if (floatbuffer[i] < 0) {
        int16Buffer[i] = 0x8000 * floatbuffer[i];
    } else {
        int16Buffer[i] = 0x7FFF * floatbuffer[i];
    }
}
ECMAScript 2015 and onwards has TypedArray.from which converts any typed array (and indeed, any iterable) to the specified typed array format.
So converting a Float32Array to an Int16Array is now as easy as:
const floatArray = new Float32Array()
const intArray = Int16Array.from(floatArray)
...albeit with truncation.
Combining the answers from robjtede and StuS, here is one for conversion and scaling of a Float32Array to an Int16Array. The scaling maps the range 1 to -1 in the Float32Array onto 32767 to -32768 in the Int16Array:
myF32Array=Float32Array.from([1,0.5,0.75,-0.5,-1])
myI16Array=Int16Array.from(myF32Array.map(x => (x>0 ? x*0x7FFF : x*0x8000)))
myNewF32Array=Float32Array.from(Float32Array.from(myI16Array).map(x=>x/0x8000))
console.log(myF32Array)
console.log(myI16Array)
console.log(myNewF32Array)
//output
> Float32Array [1, 0.5, 0.75, -0.5, -1]
> Int16Array [32767, 16383, 24575, -16384, -32768]
> Float32Array [0.999969482421875, 0.499969482421875, 0.749969482421875, -0.5, -1]
It seems that you are trying not only to convert the data format, but also to process the original data and store it in a different format.
The direct way of converting Float32Array to Int16Array is as simple as
var a = new Int16Array(myFloat32Array);
For processing the data you can use the approach that you provided in the question, though there is no need to call parseInt: assigning a value to an Int16Array element truncates it to an integer anyway.
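For example (a small illustration, not from the original answer):
// Assigning a non-integer to an Int16Array element truncates it automatically.
var t = new Int16Array(1);
t[0] = 0.9 * 32767; // 29490.3
console.log(t[0]);  // 29490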

Converting large integer to 8 byte array in JavaScript

I'm trying to convert a large number into an 8 byte array in javascript.
Here is an IMEI that I am passing in: 45035997012373300
var bytes = new Array(8);
for (var k = 0; k < 8; k++) {
    bytes[k] = value & 255;
    value = value / 256;
}
This ends up giving the byte array: 48,47,7,44,0,0,160,0. Converted back to a long, the value is 45035997012373296, which is 4 less than the correct value.
Any idea why this is and how I can fix it to serialize into the correct bytes?
Since you are converting from decimal to bytes, dividing by 256 is an operation that is pretty easily simulated by splitting up a number in a string into parts. There are two mathematical rules that we can take advantage of.
The right-most n digits of a decimal number can determine divisibility by 2^n.
10^n will always be divisible by 2^n.
Thus we can take the number, split off the right-most 8 digits to find the remainder (i.e., & 255), divide that right part by 256, and divide the left part of the number by 256 separately. The remainder from the left part can be shifted back into the right part (the right-most 8 digits) using the identity n*10^8 \ 256 = (q*256 + r)*10^8 \ 256 = q*10^8 + r*(10^8 \ 256) = q*10^8 + r*5^8, where \ is integer division and q and r are the quotient and remainder of n \ 256. This yields the following method to do integer division by 256 for strings of up to 23 digits in length (15 digits of normal JS precision + 8 extra yielded by this method):
function divide256(n)
{
    if (n.length <= 8)
    {
        return (Math.floor(parseInt(n) / 256)).toString();
    }
    else
    {
        var top = n.substring(0, n.length - 8);
        var bottom = n.substring(n.length - 8);
        var topVal = Math.floor(parseInt(top) / 256);
        var bottomVal = Math.floor(parseInt(bottom) / 256);
        var rem = (100000000 / 256) * (parseInt(top) % 256);
        bottomVal += rem;
        topVal += Math.floor(bottomVal / 100000000); // shift back possible carry
        bottomVal %= 100000000;
        if (topVal == 0) return bottomVal.toString();
        // pad the lower half to 8 digits so leading zeros aren't lost
        else return topVal.toString() + ('0000000' + bottomVal).slice(-8);
    }
}
Technically this could be implemented to divide an integer of any arbitrary size by 256, simply by recursively breaking the number into 8-digit parts and handling the division of each part separately using the same method.
Here is a working implementation that calculates the correct byte array for your example number (45035997012373300): http://jsfiddle.net/kkX2U/.
[52, 47, 7, 44, 0, 0, 160, 0]
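The full implementation is behind the jsfiddle link; for reference, here is a minimal sketch of the loop it presumably uses (repeatedly take the low byte, then divide the string by 256 with divide256 above; the modulo step only needs the last 8 digits because 256 divides 10^8 evenly):
// Sketch (the function name is just for illustration): decimal string -> byte array.
function bytesFromDecimalString(n, count) {
    var bytes = [];
    for (var k = 0; k < count; k++) {
        var low8 = n.length > 8 ? n.substring(n.length - 8) : n;
        bytes.push(parseInt(low8, 10) % 256); // n mod 256
        n = divide256(n);                     // n \ 256, as a string
    }
    return bytes; // least-significant byte first
}
console.log(bytesFromDecimalString("45035997012373300", 8));
// [52, 47, 7, 44, 0, 0, 160, 0]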
Your value and the largest JavaScript integer compared:
45035997012373300 // Yours
9007199254740992 // 2^53, the limit of JavaScript's exactly representable integers
JavaScript cannot represent your original value exactly as an integer; that's why your script breaking it down gives you an inexact representation.
Related:
var diff = 45035997012373300 - 45035997012373298;
// 0 (not 2)
Edit: If you can express your number as a hexadecimal string:
function bytesFromHex(str, pad){
    if (str.length % 2) str = "0" + str;
    var bytes = str.match(/../g).map(function(s){
        return parseInt(s, 16);
    });
    if (pad) for (var i = bytes.length; i < pad; ++i) bytes.unshift(0);
    return bytes;
}
var imei = "a000002c072f34";
var bytes = bytesFromHex(imei,8);
// [0,160,0,0,44,7,47,52]
If you need the bytes ordered from least-to-most significant, throw a .reverse() on the result.
Store the IMEI as a hex string (if you can), then parse the string in that manner; this way you can keep the precision when you build the array. I will be back with a PoC when I get home on my regular computer, if this question has not been answered.
something like:
function parseHexString(str){
    var array = [];
    for (var i = 0, j = 0; i < str.length; i += 2, j++){
        array[j] = parseInt("0x" + str.substr(i, 2));
    }
    return array;
}
or close to that whatever...
