Single character signing scheme (minimal security) - javascript

Note: I originally posted this to Information Security, but I'm starting to think it might be more relevant here as it's really about determining what I should do with a request rather than securing information.
Situation
System A:
I have a system A that serves requests to users. This server does something, and then redirects the user to system B. During that redirect, server A can give the user a 32-character alphanumeric string of information to pass along to system B. 31 characters of that information are needed, but one can be used as a checksum. This string can more or less be thought of as a request ID.
System B:
When system B receives a request from the user, it can verify that the request (and the ID-like string) are valid by parsing the 31-character string, querying a database, and talking to system A. This system can verify with absolute certainty that the request is valid and has not been tampered with, but it's very computationally expensive.
Attackers:
It is likely that this system will see many attempts to spoof the ID. This is filtered by later checks so I'm not worried about a single character perfectly signing the ID, but I do want to avoid spending any more resources on handling these requests than is needed.
What I Need
I am looking for a checksum/signing scheme that can, with a single character, give me a good idea of whether the request should continue to the verification process or if it should be immediately discarded as invalid. If a message is discarded, I need to be 100% sure that it isn't valid, but it's okay if I keep messages that are invalid. I believe an ideal solution would mean 1/62 invalid requests are kept (attacker has to guess the check character), but as a minimal solution discarding half of all invalid requests would be sufficient.
What I've Tried
I have looked at using the Luhn algorithm (same one that's used for credit cards), but I would like to be able to use a key to generate the character to make it more difficult for an attacker to spoof the checksum.
As a first attempt at creating a signing scheme, I am bitwise XOR-ing the 31-byte ID with a 31-byte key, summing the resulting bytes, converting the sum to decimal and adding its digits together until the result is less than 62, then mapping it to a character in the set [a-zA-Z0-9] (pseudocode below). The problem is that, although I'm fairly sure this won't discard any valid requests, I'm not sure how to determine how often it will let through invalid IDs, or whether the key can be recovered from the final value.
Set alphabet to (byte[]) "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
Set keystring to "aLaklj14sdLK87kxcvalskj7asfrq01";
Create empty byte[] key;
FOR each letter in keystring
    Push (index of letter in alphabet) to key;
Create empty byte[] step1;
FOR each (r, k) in (request, key)
    Push r XOR k to step1;
Set step2 to 0;
FOR each b in step1
    Add (int) b to step2;
WHILE step2 >= 62
    Copy step2 to step3;
    Set step2 to 0;
    Convert step3 to String;
    Split step3 between characters;
    FOR each digit in step3
        Add (int) digit to step2;
END WHILE
RETURN alphabet[step2]
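For reference, here is a minimal JavaScript sketch of the pseudocode above (it assumes the request ID's raw character codes are XOR-ed with the key's alphabet indices; the key and alphabet are the placeholder values from the question):
const ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
const KEY = "aLaklj14sdLK87kxcvalskj7asfrq01";

function checkChar(requestId) {
  // XOR each request character code with the alphabet index of the key character
  let sum = 0;
  for (let i = 0; i < 31; i++) {
    sum += requestId.charCodeAt(i) ^ ALPHABET.indexOf(KEY[i]);
  }
  // repeatedly add the decimal digits together until the value is below 62
  while (sum >= 62) {
    sum = String(sum).split("").reduce((acc, d) => acc + Number(d), 0);
  }
  return ALPHABET[sum];
}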
Stated Formally
A deterministic hash function where, given a private key and an input 31 bytes long, yields an output in the set {x | x ∈ ℕ, x < 62}, where guessing the output would be more efficient than calculating the private key. (Bonus points for variable-length input)
This will eventually be implemented in NodeJS/JavaScript but isn't really language-dependent.
Disclaimer: I apologize if this question is too vague and theoretical. Please comment for clarification if it's needed. There are, obviously, ways I could work around this problem, but in this case, I am looking for as direct a solution as possible.

If you want a "deterministic hash function" with a private key, then I believe you can just use sha256 (or any other hash function in your crypto library) with the key appended to the input:
sha256(input+key).toString('hex');
Afterwards, take the last few bits of the hash value, convert it from hex string to integer, divide the integer by 62, get the remainder, and determine the character based on the remainder.
This won't give you a perfectly uniform 1/62 distribution for each character (the hex values are uniformly distributed, but the remainders after dividing by 62 are not exactly uniform), but it should be very close.
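As an illustration, a minimal Node.js sketch of this approach (the alphabet and helper name are just for the example):
const crypto = require('crypto');
const ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

function checkChar(input, key) {
  const hex = crypto.createHash('sha256').update(input + key).digest('hex');
  // take the last 8 hex digits (32 bits), reduce modulo 62, map to a character
  return ALPHABET[parseInt(hex.slice(-8), 16) % 62];
}

// System A appends checkChar(id, key) to the 31-character ID;
// system B recomputes it and discards the request on a mismatch.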

One approach would be to create a Blob URL when the user visits the initial document. The Blob URL is unique to the document that created it, so the user can pass it to server "B" as a request identifier. When the user makes the request to "B", revoke the Blob URL.
Each call to URL.createObjectURL() produces a unique Blob URL, so the user creates the unique identifier, and its lifetime is the lifetime of the document that created it, or until the Blob URL is revoked. There is minimal opportunity for the Blob URL to be copied from the visitor's browser by anyone other than the user who created it, unless other issues exist on that user's computer.
const requestA = async () => {
  const blob = new Blob();
  const blobURL = URL.createObjectURL(blob);
  const A = await fetch("/path/to/server/A", {
    method: "POST",
    body: JSON.stringify({ id: blobURL })
  });
  const responseA = await A.text();
  // do stuff with response
  return [blobURL, responseA];
}
Server "A" communicates created Blob URL to server "B"
const requestB = async (blobURL) => {
  // reuse the Blob URL created in requestA as the request identifier
  const B = await fetch("/path/to/server/B", {
    method: "POST",
    body: JSON.stringify({ id: blobURL })
  });
  const responseB = await B.text();
  return responseB;
}
requestA()
  .then(([blobURL, responseA]) => {
    // do stuff with `responseA`
    console.log(responseA);
    // return `requestB` with `blobURL` as parameter
    return requestB(blobURL);
  })
  .then(responseB => console.log(responseB)) // do stuff with `responseB`
  .catch(err => console.error(err));

Related

CSV string to array when there is \n in body [duplicate]

I'm trying to convert a CSV string into an array of arrays (or an array of objects). The issue is that there are a bunch of \n characters in the body of the incoming request, which are causing the rows to split and mess up all the code. I'm trying to handle this even with \n in the body.
The string looks like this; all the message strings in the incoming request start with \" and finish with \".
"id,urn,title,body,risk,s.0.id,s.1.id,s.2.id,a.0.id,a.1.id,a.2.id,a.3.id
302,25,\"Secure Data\",\"Banking can save a lot of time but it’s not without risks. Scammers treat your bank account as a golden target –
it can be a quick and untraceable way to get money from you\n\n**TOP TIPS**\n\n**Always read your banks rules.** These tips don’t replace your banks rules - \
in fact we fully support them. If you don’t follow their rules, you may not get your money back if you are defrauded \n\n**Saving passwords or allowing auto-complete.**
Saving passwords in your browser is great for remembering them but if a hacker is able to access your computer, they will also have access to your passwords.
When on your banking site the password box we recommend you don’t enable the auto-complete function – a hacked device means they are able to gain access using this method \n\n**Use a
PIN number on your device.** It’s really important to lock your device when you’re not using it.\",,2,20,52,1,2,3,4"
I have shortened it since there is a lot of content, but the string that comes in is basically the above. The big string that is messing my code up starts at "Banking can save" and finishes at "not using it". I have several other records with the same kind of body, always enclosed in \" body \", and I have been trying to write a function that separates the content of this CSV string into an array of arrays or an array of objects.
This is what I attempted:
function csv_To_Array(str, delimiter = ",") {
  const header_cols = str.slice(0, str.indexOf("\n")).split(delimiter);
  const row_data = str.slice(str.indexOf("\n") + 1).split("\n");
  const arr = row_data.map(function (row) {
    const values = row.split(delimiter);
    const el = header_cols.reduce(function (object, header, index) {
      object[header] = values[index];
      return object;
    }, {});
    return el;
  });
  // return the array
  return arr;
}
I have thought about using a regex too, where I would split on a comma or a \n, but if there is a \" it should not split until it finds the next \":
array.split(/,/\n(?!\d)/))
Try this:
csvData.replace(/(\r\n|\n|\r)/gm, "");
Once you've used that to replace the new lines, or removed them, this code will help you get started with understanding how to build an array from the new CSV string:
const splitTheArrayAndLogIt = () => {
  const everySingleCharacter = csvData.split(""); // <-- this is a new array
  console.log(everySingleCharacter);
  const splitAtCommas = csvData.split(",");
  console.log(splitAtCommas);
}

How to make JSON.parse() to treat all the Numbers as BigInt?

I have some numbers in JSON which overflow the Number type, so I want them to be bigint, but how?
{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}
TLDR;
You may employ the JSON.parse() reviver parameter.
Detailed Solution
To control JSON.parse() behavior that way, you can make use of the second parameter of JSON.parse (reviver) - the function that pre-processes key-value pairs (and may potentially pass desired values to BigInt()).
Yet, the values recognized as numbers will still be coerced (the credit for pinpointing this issue goes to #YohanesGultom).
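A quick illustration of that coercion (using one of the values from the question):
// The reviver only sees the value after JSON.parse has already turned it
// into a double, so the trailing digits are rounded away before BigInt runs:
const naive = JSON.parse('{"n": 2323866757078990912}', (key, value) =>
  typeof value === "number" ? BigInt(value) : value
);
console.log(naive.n); // a bigint, but it no longer matches the original digits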
To get around this, you may enquote your big numbers (to turn them into strings) in your source JSON string, so that their values are preserved upon converting to bigint.
Since you wish to convert only certain numbers to bigint, you need to pick appropriate criteria (e.g. check whether the value exceeds Number.MAX_SAFE_INTEGER with Number.isSafeInteger(), as #PeterSeliger has suggested).
Thus, your problem may be solved with something, like this:
// source JSON string
const input = `{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}`
// function that implements desired criteria
// to separate *big numbers* from *small* ones
//
// (works for input parameter num of type number/string)
const isBigNumber = num => !Number.isSafeInteger(+num)
// function that enquotes *big numbers* matching
// desired criteria into double quotes inside
// JSON string
//
// (function checking for *big numbers* may be
// passed as a second parameter for flexibility)
const enquoteBigNumber = (jsonString, bigNumChecker) =>
  jsonString
    .replaceAll(
      /([:\s\[,]*)(\d+)([\s,\]]*)/g,
      (matchingSubstr, prefix, bigNum, suffix) =>
        bigNumChecker(bigNum)
          ? `${prefix}"${bigNum}"${suffix}`
          : matchingSubstr
    )
// parser that turns matching *big numbers* in
// source JSON string to bigint
const parseWithBigInt = (jsonString, bigNumChecker) =>
  JSON.parse(
    enquoteBigNumber(jsonString, bigNumChecker),
    (key, value) =>
      !isNaN(value) && bigNumChecker(value)
        ? BigInt(value)
        : value
  )
// resulting output
const output = parseWithBigInt(input, isBigNumber)
console.log("output.foo[1][0]: \n", output.foo[1][0], `(type: ${typeof output.foo[1][0]})`)
console.log("output.bar[0][0]: \n", output.bar[0][0].toString(), `(type: ${typeof output.bar[0][0]})`)
Note: you may find the RegExp pattern that matches strings of digits among JSON values not quite robust, so feel free to come up with your own (mine was just the quickest I could put together for demo purposes).
Note: you may still opt in for some library, as it was suggested by #YohanesGultom, yet adding 10k to your client bundle or 37k to your server-side dependencies (possibly, to docker image size) for that sole purpose may not be quite reasonable.

Why does the same HSM key verify to a different ethereum address every time?

I am interfacing with an HSM which is generating and signing with the ethereum standard (secp256k1). I am interfacing with the HSM using a package called Graphene. I pull out the public key using its "pointEC" attribute:
0xc87c1d67c1909ebf8b54c9ce3d8e0f0cde41561c8115481321e45b364a8f3334b6e826363d8e895110fc9ca2d75e84cc7c56b8e9fbcd70c726cb44f5506848fa
Which I can use to generate the address: 0x21d20b04719f25d2ba0c68e851bb64fa570a9465
But when I try to use the key to sign a personal message from a dApp, the signature always recovers to a different address. For example, the nonce/message:
wAMqcOCD2KKz2n0Dfbu1nRYbeLw_qbLxrW1gpTBwkq has the signature:
0x2413f8d2ab4df2f3d87560493f21f0dfd570dc61136c53c236731bf56a9ce02cb23692e6a5cec96c62359f6eb4080d80328a567d14387f487f3c50d9ce61503b1c
But it recovers a valid address of 0xFC0561D848b0cDE5877068D94a4d803A0a933785
This is all presumably with the same private/public key. Granted, I merely appended the "1c" recovery value, but even when I attempt other values I have no luck. Here are a couple more examples:
Nonce: WRH_ApTkfN7yFAEpbGwU9BiE2M6eKTZMklPYK50djnx
Sig: 0x70242adabfe27c12e54abced8de87b45f511a194609eb27b215b153594b5697b7fb5e7279285663f80c82c2a2f2920916f76fd845cdecb45ace19f76b0622ac41c
Address: 0x1A086eD40FF90E75764260E2Eb42fab4Db519E53
Nonce: TZV6qhplddJgcKaN7qtpcIhudFhiQ
Sig: 0x3607beb3d58ff35ca1059f3ea44f41e79e76d8ffe35a4f716e78030f0fe2ca1da51f138c31d4ec4b9fc3546c4de1185736a4c4c7030a8b1965e30cb0af6ba2ee1c
Address: 0xa61A518cf73163Fd92461942c26C67c203bda379
My code to sign the message:
let alg: graphene.MechanismType;
alg = graphene.MechanismEnum.ECDSA;
const session = get_session();
let key: graphene.Key | null = null;
//#region Find signing key
const objects = session.find({ label: GEN_KEY_LABEL });
for (let i = 0; i < objects.length; i++) {
  const obj = objects.items(i);
  if ((obj.class === graphene.ObjectClass.PRIVATE_KEY ||
       obj.class === graphene.ObjectClass.SECRET_KEY) &&
      obj.handle.toString('hex') == params.handle
  ) {
    key = obj.toType<graphene.Key>();
    break;
  }
}
if (!key) {
  throw new Error("Cannot find signing key");
}
var sign = session.createSign(alg, key);
if (!params.data) {
  console.log("No data found. Signing 'test' string");
  params.data = 'test';
}
sign.update(Buffer.from(params.data.toString().trim()));
var signature = sign.final();
console.log(signature.toString('hex'));
Keep in mind, it fails with even just 1 key present.
The address is just calculated over the public key, while the signature is generated using ECDSA. An ECDSA signature consists of a value r derived from a random nonce and a value s that is specific to that nonce (and, of course, the private key). More information here (Wikipedia on ECDSA).
You don't see this because they are simply encoded as statically sized unsigned big-integer values and then concatenated together to be called "the signature" (hence the size of the signature being twice the key size, 64 bytes instead of 32 bytes). Verification will parse the signature and use the separate values again. With Ethereum and Bitcoin an additional recovery byte may be added to the signature so that it is possible to retrieve the public key and then recalculate the address. This also alters the signature generation, so you're not talking plain ECDSA anymore.
There is also the X9.62 signature format, which still consists of two separate integers, but encoded using ASN.1 / DER encoding. Those signatures only look partially random because of the overhead required to separate / encode the two integers.
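For illustration only, splitting the raw 64-byte signature returned by the HSM and adding the recovery byte might look like this (the recovery value shown is just one of the two candidates):
// sign.final() returns r (32 bytes) and s (32 bytes) concatenated
const r = signature.slice(0, 32);
const s = signature.slice(32, 64);
// Ethereum adds a recovery byte v (27 or 28, i.e. 0x1b or 0x1c) so the
// public key, and hence the address, can be recovered from the signature
const ethSignature = Buffer.concat([r, s, Buffer.from([27])]);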
Turns out I was misusing Buffer.from: you have to specify the format of the incoming data.
E.g. Buffer.from("04021a", "hex")
Since it was the final 'input' and calculation, it took me forever to realize that the data was being incorrectly transformed at that point. I thought I had checked and rechecked the data in every step multiple times, but missed the most in-your-face part.
Also, I learned that to create a proper signature and prevent transaction malleability, you have to keep re-signing until the value of 's' ends up being less than:
(0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141)/2
Then, when putting 'r' and 's' through an address-recovery function, it should try to recover the address with v=27 or v=28 (0x1b or 0x1c); at this point it's basically trial and error. Most of the time, it'll recover the correct address with v=27.
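Sketched out, the trial-and-error recovery looks roughly like this (recoverAddress is a hypothetical stand-in for whatever recovery helper your Ethereum library provides):
for (const v of [27, 28]) {
  // recoverAddress is hypothetical - substitute your library's recovery function
  if (recoverAddress(msgHash, r, s, v).toLowerCase() === expectedAddress.toLowerCase()) {
    console.log('recovery id is', v);
    break;
  }
}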

Converting large number to string in Javascript/Node

I have seen other related questions, but they did not solve my problem, or may be I somehow missed the exactly same resolved queries.
Here is the problem: the service that I call returns a JSON response with some keys having large numbers as values, and I want to pass them on to my view and display them. The issue is that they are getting rounded off, which I don't want. The response actually arrives in a buffer, which I'm currently handling like this:
JSON.parse(res.body.toString()) // res.body is a Buffer
and sending that to the view. How can I retain the whole number in the form of a string and send it to the view so that exactly the same value is available to the UI?
I thought maybe a replacer would help, but it does not work either.
const replacer = (key, value) => {
  if (typeof value === 'number') {
    return JSON.stringify(value);
  }
  return value;
};
//78787878977987787897897897123456786747398
const obj = {
  name: 'John',
  income: 78787878977987787897897897123456786747398,
  car: null
};
var buf = Buffer.from(JSON.stringify(obj));
console.log(buf.toString());
// console.log(JSON.stringify(buf.toString()))
// console.log('func res: ', replacer('key', 78787878977987787897897897123456786747398))
// console.log(obj.income.toString())
console.log(JSON.stringify(obj, replacer));
You can recommend a trusted external library or, better, suggest a solution directly in code.
Edit:
The outcome in short is: Convert the response to String before returning from the server. Once it gets into JS (Buffer in my case), the conversion already occurred meaning that from the application side, nothing can be done to retrieve it.
Please let me know if there's a real solution to this without modifying server response.
Unfortunately, the number is higher than Number.MAX_SAFE_INTEGER, so if it ever gets parsed as a number, even if it's converted back to a string later (such as with the reviver function, the second parameter to JSON.parse), it won't be reliable. But luckily, since you have a JSON string, you can replace the numeric values with string values before parsing it with JSON.parse. For example:
const resBody = '{"foo":"bar", "objs":[{"name":"John", "income": 78787878977987787897897897123456786747398}]}';
const resBodyReplaced = resBody.replace(/: *(\d+)/g, ':"$1"');
console.log(JSON.parse(resBodyReplaced).objs[0].income);

optimize search through large js string array?

If I have a large JavaScript string array that has over 10,000 elements,
how do I quickly search through it?
Right now I have a JavaScript string array that stores job descriptions,
and I'm allowing the user to dynamically filter the returned list as they type into an input box.
So say I have a string array like so:
var descArr = ["flipping burgers", "pumping gas", "delivering mail"];
and the user wants to search for: "p"
How would I be able to search a string array that has 10000+ descriptions in it quickly?
Obviously I can't sort the description array since they're descriptions, so binary search is out. And since the user can search by "p" or "pi" or any combination of letters, this partial search means that I can't use associative arrays (i.e. searchDescArray["pumping gas"] )
to speed up the search.
Any ideas anyone?
Regular expression engines in current browsers are extremely fast, so how about doing it that way? Instead of an array, pass one giant string and separate the entries with an identifier.
Example:
String "flipping burgers""pumping gas""delivering mail"
Regex: "([^"]*ping[^"]*)"
With the switch /g for global you get all the matches. Make sure the user does not search for your string separator.
You can even add an id into the string with something like:
String "11 flipping burgers""12 pumping gas""13 delivering mail"
Regex: "(\d+) ([^"]*ping[^"]*)"
Example: http://jsfiddle.net/RnabN/4/ (30000 strings, limit results to 100)
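A minimal sketch of this idea in plain JavaScript (data and query are illustrative; the query is assumed to contain no regex metacharacters):
const haystack = '"11 flipping burgers""12 pumping gas""13 delivering mail"';
const query = 'p'; // what the user typed
const re = new RegExp('"(\\d+) ([^"]*' + query + '[^"]*)"', 'g');
let match;
while ((match = re.exec(haystack)) !== null) {
  console.log(match[1], match[2]); // id and matching description
}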
There's no way to speed up an initial array lookup without making some changes. You can speed up consecutive lookups by caching results and mapping them to patterns dynamically.
1.) Adjust your data format. This makes initial lookups somewhat speedier. Basically, you precache.
var data = {
  a : ['Ant farm', 'Ant massage parlor'],
  b : ['Bat farm', 'Bat massage parlor']
  // etc
}
2.) Setup cache mechanics.
var searchFor = function(str, list, caseSensitive, reduce){
  str = str.replace(/(?:^\s*|\s*$)/g, ''); // trim whitespace
  var found = [];
  // no 'g' flag: test() with a global regex keeps lastIndex between calls
  var reg = new RegExp('^\\s?' + str, caseSensitive ? '' : 'i');
  var i = list.length;
  while(i--){
    if(reg.test(list[i])){
      found.push(list[i]);
      // when reduce is true, matched items are removed from the original list
      reduce && list.splice(i, 1);
    }
  }
  return found;
}
var lookUp = function(str, caseSensitive){
  str = str.replace(/(?:^\s*|\s*$)/g, ''); // trim whitespace
  if(data[str]) return data[str]; // already cached
  var firstChar = caseSensitive ? str[0] : str[0].toLowerCase();
  var list = data[firstChar];
  if(!list) return (data[str] = []);
  // we cache on data since it's already a caching object.
  return (data[str] = searchFor(str, list, caseSensitive));
}
3.) Use the following script to create a precache object. I suggest you run this once and use JSON.stringify to create a static cache object. (or do this on the backend)
// we need the searchFor function from above, this might take a while
var preCache = function(arr){
  var chars = "abcdefghijklmnopqrstuvwxyz".split('');
  var cache = {};
  var i = chars.length;
  while(i--){
    // reduce is true, so matched entries are removed from the original list
    cache[chars[i]] = searchFor(chars[i], arr, false, true);
  }
  return cache;
}
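For example (assuming descArr is the 10,000-item description array from the question), the cache could be built and queried like this:
// build the per-letter buckets once, then look up as the user types
var data = preCache(descArr.slice()); // slice() so the original array survives
console.log(lookUp('pum', false));    // -> descriptions starting with "pum"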
Probably a bit more code than you expected, but optimisation and performance don't come for free.
This may not be an answer for you, as I'm making some assumptions about your setup, but if you have server side code and a database, you'd be far better off making an AJAX call back to get the cut down list of results, and using a database to do the filtering (as they're very good at this sort of thing).
As well as the database benefit, you'd also benefit from not outputting this much data (10000 variables) to a web based front end - if you only return those you require, then you'll save a fair bit of bandwidth.
I can't reproduce the problem. I created a naive implementation, and most browsers do the search across 10,000 15-char strings in a single-digit number of milliseconds. I can't test in IE6, but I wouldn't believe it to be more than 100 times slower than the fastest browsers, which would still be virtually instant.
Try it yourself: http://ebusiness.hopto.org/test/stacktest8.htm (Note that the creation time is not relevant to the issue, that is just there to get some data to work on.)
One thing you could do wrong is trying to render all results, that would be quite a huge job when the user has only entered a single letter, or a common letter combination.
I suggest trying a ready made JS function, for example the autocomplete from jQuery. It's fast and it has many options to configure.
Check out the jQuery autocomplete demo
Using a Set for large datasets (1M+) is around 3500 times faster than Array .includes()
You must use a Set if you want speed.
I just wrote a node script that needs to look up a string in a 1.3M array.
Using Array's .includes for 10K lookups:
39.27 seconds
Using Set .has for 10K lookups:
0.01084 seconds
Use a Set.
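To illustrate, a rough micro-benchmark sketch (absolute numbers will vary by machine):
// build a large array and an equivalent Set, then time 10K lookups in each
const arr = Array.from({ length: 1_300_000 }, (_, i) => `item-${i}`);
const set = new Set(arr);

console.time('Array.includes x 10k');
for (let i = 0; i < 10_000; i++) arr.includes(`item-${i * 100}`);
console.timeEnd('Array.includes x 10k');

console.time('Set.has x 10k');
for (let i = 0; i < 10_000; i++) set.has(`item-${i * 100}`);
console.timeEnd('Set.has x 10k');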
