RSA Encrypt String longer than key - javascript

Hey guys,
I'm encrypting strings with JS and a public key. I use the JSBN code, but the problem is in creating a BigInt.
RSA.js does:
// Copyright (c) 2005 Tom Wu
var m = pkcs1pad2(text,(this.n.bitLength()+7)>>3); // getting the Bigint
if(m == null) return null; // nullcheck
var c = this.doPublic(m); // encrypting the BigInt
The problem is in "pkcs1pad2". The function checks whether the text is longer than the bit length of the key allows: if so, it returns null; otherwise it creates the BigInt.
// Copyright (c) 2005 Tom Wu
if(n < s.length + 11) { // TODO: fix for utf-8
  alert("Message too long for RSA");
  return null;
}
var ba = new Array();
var i = s.length - 1;
while(i >= 0 && n > 0) {
  var c = s.charCodeAt(i--);
  if(c < 128) { // encode using utf-8
    ba[--n] = c;
  } else if((c > 127) && (c < 2048)) {
    ba[--n] = (c & 63) | 128;
    ba[--n] = (c >> 6) | 192;
  } else {
    ba[--n] = (c & 63) | 128;
    ba[--n] = ((c >> 6) & 63) | 128;
    ba[--n] = (c >> 12) | 224;
  }
}
ba[--n] = 0;
var rng = new SecureRandom();
var x = new Array();
while(n > 2) { // random non-zero pad
  x[0] = 0;
  while(x[0] == 0) rng.nextBytes(x);
  ba[--n] = x[0];
}
ba[--n] = 2;
ba[--n] = 0;
return new BigInteger(ba);
I can't figure out what the author means by "// TODO: fix for utf-8". Can anyone explain this and/or give a working answer?

This code implements PKCS#1 v1.5 padding, as defined in the PKCS#1 standard. The input to PKCS#1 v1.5 padding for encryption must be at least 11 bytes smaller than the size of the modulus:
M : message to be encrypted, an octet string of length mLen,
where mLen <= k - 11
If the message is larger than that (for example, at most 245 bytes for a 2048-bit modulus, where k = 256), RSA encryption cannot continue. Usually that is not a problem: you encrypt the message with a symmetric cipher using a random key and then encrypt that random key using RSA. This is called a hybrid cryptosystem.
The reason the comment is there is that JavaScript - like many scripting languages - has trouble distinguishing between bytes and text. If s is text, then s.length returns the number of characters, not bytes. This is deliberate in weakly typed languages, but it doesn't make it any easier to implement and use cryptographic algorithms in JavaScript.
If you use a multi-byte encoding such as UTF-8, then characters that encode to two or more bytes increase the total plaintext size in bytes, so the length calculation may fail at that point. Either that, or you will lose the characters that cannot be encoded in a single byte - from a quick look at the code, that is what will likely happen once you go beyond the ASCII range (7 bits, values 0..127).
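To make that concrete, here is a small sketch (my own, not part of JSBN) that counts UTF-8 bytes using the same branches as the encoding loop above. The character count and the byte count diverge as soon as non-ASCII characters appear, which is exactly what the n < s.length + 11 check does not account for:
// Sketch: count UTF-8 bytes the same way pkcs1pad2 encodes them
// (surrogate pairs above U+FFFF are not handled, just like in the original).
function utf8ByteLength(s) {
  var bytes = 0;
  for (var i = 0; i < s.length; i++) {
    var c = s.charCodeAt(i);
    if (c < 128) bytes += 1;
    else if (c < 2048) bytes += 2;
    else bytes += 3;
  }
  return bytes;
}
var s = "héllo wörld";
console.log(s.length, utf8ByteLength(s)); // 11 characters, but 13 UTF-8 bytes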


Reverse Bits JavaScript

The problem is standard, but the solution in JavaScript takes a lot more effort to code.
I have a solution, but my answer comes out as half of the desired value.
Problem Description
Reverse the bits of a 32-bit unsigned integer A.
Problem Constraints
0 <= A <= 2^32
Input Format
The first and only argument of input contains an integer A.
Output Format
Return a single unsigned integer denoting the reversed bits of A.
Example Input
Input 1:
0
Input 2:
3
Example Output
Output 1:
0
Output 2:
3221225472
My solution
function modulo(a, b) {
  return a - Math.floor(a/b)*b;
}

function ToUint32(x) {
  return modulo(parseInt(x), Math.pow(2, 32));
}

function revereBits(A) {
  A = A.toString(2);
  while (A.length < 31) {
    A = "0" + A;
  }
  var reverse = 0;
  var NO_OF_BITS = A.length;
  for (var i = NO_OF_BITS; i >= 1; i--) {
    var temp = (parseInt(A, 2) & (1 << i - 1));
    if (temp) {
      reverse |= 1 << (NO_OF_BITS - i);
    }
  }
  if( reverse << 1 < 0 ) reverse = ToUint32(reverse << 1);
  return reverse;
}
Now, in the line
if( reverse << 1 < 0 ) reverse = ToUint32(reverse << 1);
you can see that I have to double the answer. I cannot, however, work out why this is required.
I took the approach from https://www.geeksforgeeks.org/write-an-efficient-c-program-to-reverse-bits-of-a-number/
I had to make a few adjustments to it, for example running the loop from 31 down to 1 rather than 0 to 31; the latter gives negative values in the first left-shift operation already at i = 0.
Can someone please help fix this solution and point out the problem with it?
UPDATE - This problem is about bit manipulation, so please don't answer or comment with anything based on JavaScript's built-in string functions. Cheers!
You should be able to do it using just bitwise operators, and a typed array to solve the sign issue:
Update
Slightly changing the approach of the rev function after @bryc's comment. Since keeping multiple functions around for "history" purposes makes the answer difficult to read, I'm putting the latest code first.
However, I'm keeping the comments about the different steps – the rest can be found in the edit history.
function rev(x) {
  x = ((x >> 1) & 0x55555555) | ((x & 0x55555555) << 1);
  x = ((x >> 2) & 0x33333333) | ((x & 0x33333333) << 2);
  x = ((x >> 4) & 0x0F0F0F0F) | ((x & 0x0F0F0F0F) << 4);
  x = ((x >> 8) & 0x00FF00FF) | ((x & 0x00FF00FF) << 8);
  x = (x >>> 16) | (x << 16);
  return x >>> 0;
}
This is the same code you would write in other languages to reverse bits. As @harold pointed out in the comments, the zero-fill right shift returns an unsigned value (it's the only bitwise operator that does), so we can omit the typed array from the earlier version and simply apply >>> 0 before returning.
In fact, >>> 0 is commonly used to simulate ToUint32 in JS polyfills; e.g.:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/every
// 2. Let lenValue be the result of calling the Get internal method
// of O with the argument "length".
// 3. Let len be ToUint32(lenValue).
var len = O.length >>> 0;
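For example, with the rev function above and the sample inputs from the question:
console.log(rev(0)); // 0
console.log(rev(3)); // 3221225472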
function reverseBits(integer, bitLength) {
  if (bitLength > 32) {
    throw Error(
      "Bit manipulation is limited to <= 32 bit numbers in JavaScript."
    );
  }
  let result = 0;
  for (let i = 0; i < bitLength; i++) {
    result |= ((integer >> i) & 1) << (bitLength - 1 - i);
  }
  return result >>> 0; // >>> 0 makes it unsigned even if bit 32 (the sign bit) was set
}
Try this:
function reverseBits(num) {
  let reversed = num.toString(2);
  const padding = "0";
  reversed = padding.repeat(32 - reversed.length) + reversed;
  return parseInt(reversed.split('').reverse().join(''), 2);
}
console.log(reverseBits(0)); // 0
console.log(reverseBits(3)); // 3221225472

Is there a way to redefine the standard ASCII character set that JavaScript's charCodeAt and fromCharCode use, from within a function?

For encoding, JavaScript pulls from the standard ASCII table to map characters. I found the function below that correctly encodes to Ascii85/Base85, but I want to encode to the Z85 variant because it contains the set of symbols that I require. My understanding is that the encoding itself works exactly the same, except that Z85 maps the values to a different set of symbols, in a different order, than the standard Ascii85 mapping. So the character set is the only difference:
Ascii85 uses the 85 characters 33 through 117 (reference):
"!\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstu
Z85 uses a custom set of 85 characters (reference):
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#
My question is, is there any way to redefine the character set that charCodeAt and fromCharCode refer to in this function so that it would then encode in Z85?
// By Steve Hanov. Released to the public domain.
function encodeAscii85(input) {
  // The Adobe standard "<~" prefix is intentionally omitted.
  var output = "";
  var chr1, chr2, chr3, chr4, chr, enc1, enc2, enc3, enc4, enc5;
  var i = 0;
  while (i < input.length) {
    // Access past the end of the string is intentional.
    chr1 = input.charCodeAt(i++);
    chr2 = input.charCodeAt(i++);
    chr3 = input.charCodeAt(i++);
    chr4 = input.charCodeAt(i++);
    chr = ((chr1 << 24) | (chr2 << 16) | (chr3 << 8) | chr4) >>> 0;
    enc1 = (chr / (85 * 85 * 85 * 85) | 0) % 85 + 33;
    enc2 = (chr / (85 * 85 * 85) | 0) % 85 + 33;
    enc3 = (chr / (85 * 85) | 0) % 85 + 33;
    enc4 = (chr / 85 | 0) % 85 + 33;
    enc5 = chr % 85 + 33;
    output += String.fromCharCode(enc1) +
              String.fromCharCode(enc2);
    if (!isNaN(chr2)) {
      output += String.fromCharCode(enc3);
      if (!isNaN(chr3)) {
        output += String.fromCharCode(enc4);
        if (!isNaN(chr4)) {
          output += String.fromCharCode(enc5);
        }
      }
    }
  }
  // The Adobe standard "~>" suffix is also omitted.
  // output += "~>";
  return output;
}
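For reference, with the output variable initialised as above, the function reproduces the result expected in the question:
console.log(encodeAscii85("Hello world!")); // "87cURD]j7BEbo80"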
Extra notes:
Alternatively, I thought I could use something like the following function, but the problem is that it doesn't properly encode Ascii85 in the first place. If it were correct, Hello world! would encode to 87cURD]j7BEbo80, but this function encodes it to RZ!iCB=*gD0D5_+ (reference).
I don't understand the algorithm well enough to know what is wrong with the mapping here. Ideally, if it were encoding correctly, I should be able to update this function to use the Z85 character set:
// Adapted from: Ascii85 JavaScript implementation, 2012.10.16 Jim Herrero
// Original: https://jsfiddle.net/nderscore/bbKS4/
var Ascii85 = {
  // Ascii85 mapping
  _alphabet: "!\"#$%&'()*+,-./0123456789:;<=>?@"+
             "ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`"+
             "abcdefghijklmnopqrstu"+
             "y"+ // short form 4 spaces (optional)
             "z", // short form 4 nulls (optional)
  // functions
  encode: function(input) {
    var alphabet = Ascii85._alphabet,
        useShort = alphabet.length > 85,
        output = "", buffer, val, i, j, l;
    for (i = 0, l = input.length; i < l;) {
      buffer = [0,0,0,0];
      for (j = 0; j < 4; j++)
        if(input[i])
          buffer[j] = input.charCodeAt(i++);
      for (val = buffer[3], j = 2; j >= 0; j--)
        val = val*256+buffer[j];
      if (useShort && !val)
        output += alphabet[86];
      else if (useShort && val == 0x20202020)
        output += alphabet[85];
      else {
        for (j = 0; j < 5; j++) {
          output += alphabet[val%85];
          val = Math.floor(val/85);
        }
      }
    }
    return output;
  }
};
Character codes are character codes. You can't change the behavior of String.fromCharCode() or String.charCodeAt().
However, you can store your custom character set in an array and use array indexing and Array.indexOf() to look up entries.
Updating this function to work with Z85 will be tricky, though, because String.fromCharCode() and String.charCodeAt() are used in two different contexts -- they're sometimes used to access the unencoded string (which doesn't need to change), and sometimes for the encoded string (which does). You will need to take care to not confuse the two.
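As a rough sketch (mine, not a tested encoder) of what that looks like: keep the custom alphabet in a string, index into it when producing encoded characters, and use indexOf when decoding, instead of the fromCharCode/charCodeAt pair used for the encoded string above.
// Assumed Z85 alphabet from the question; treat this as a sketch, not a finished encoder.
var Z85_ALPHABET =
  "0123456789abcdefghijklmnopqrstuvwxyz" +
  "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
  ".-:+=^!/*?&<>()[]{}@%$#";
// Map a base-85 digit (0..84) to its Z85 character for encoding...
function z85Char(digit) {
  return Z85_ALPHABET.charAt(digit);
}
// ...and back again for decoding.
function z85Digit(ch) {
  return Z85_ALPHABET.indexOf(ch);
}
// In encodeAscii85 above, that would mean appending z85Char(chr % 85)
// instead of String.fromCharCode(chr % 85 + 33), and likewise for enc1..enc5.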

Using NTLM authentication on the NativeScript iOS platform

I am building an app which authenticates the user against a SharePoint site that uses NTLM authentication. I found ntlm.js, which has been patched for NativeScript, here: https://github.com/hdeshev/nativescript-ntlm-demo.
I have managed to get it working on the Android platform, but it fails on iOS with a 401 error. As far as I can tell, the difference happens in this segment:
Ntlm.setCredentials = function(domain, username, password) {
  var magic = 'KGS!@#$%'; // Create LM password hash.
  var lmPassword = password.toUpperCase().substr(0, 14);
  while (lmPassword.length < 14) lmPassword += '\0';
  var key1 = Ntlm.createKey(lmPassword);
  var key2 = Ntlm.createKey(lmPassword.substr(7));
  var lmHashedPassword = des(key1, magic, 1, 0) + des(key2, magic, 1, 0);

  var ntPassword = ''; // Create NT password hash.
  for (var i = 0; i < password.length; i++)
    ntPassword += password.charAt(i) + '\0';
  var ntHashedPassword = str_md4(ntPassword);

  Ntlm.domain = domain;
  Ntlm.username = username;
  Ntlm.lmHashedPassword = lmHashedPassword;
  Ntlm.ntHashedPassword = ntHashedPassword;
};
When I log the result of 'lmHashedPassword' after it has gone through the des() function, on iOS it is simply 'A', whereas on Android it is a longer string. Something in the des function must be cutting it off, but I cannot see what.
Here is the des function:
function des (key, message, encrypt, mode, iv, padding) {
//declaring this locally speeds things up a bit
var spfunction1 = new Array (0x1010400,0,0x10000,0x1010404,0x1010004,0x10404,0x4,0x10000,0x400,0x1010400,0x1010404,0x400,0x1000404,0x1010004,0x1000000,0x4,0x404,0x1000400,0x1000400,0x10400,0x10400,0x1010000,0x1010000,0x1000404,0x10004,0x1000004,0x1000004,0x10004,0,0x404,0x10404,0x1000000,0x10000,0x1010404,0x4,0x1010000,0x1010400,0x1000000,0x1000000,0x400,0x1010004,0x10000,0x10400,0x1000004,0x400,0x4,0x1000404,0x10404,0x1010404,0x10004,0x1010000,0x1000404,0x1000004,0x404,0x10404,0x1010400,0x404,0x1000400,0x1000400,0,0x10004,0x10400,0,0x1010004);
var spfunction2 = new Array (-0x7fef7fe0,-0x7fff8000,0x8000,0x108020,0x100000,0x20,-0x7fefffe0,-0x7fff7fe0,-0x7fffffe0,-0x7fef7fe0,-0x7fef8000,-0x80000000,-0x7fff8000,0x100000,0x20,-0x7fefffe0,0x108000,0x100020,-0x7fff7fe0,0,-0x80000000,0x8000,0x108020,-0x7ff00000,0x100020,-0x7fffffe0,0,0x108000,0x8020,-0x7fef8000,-0x7ff00000,0x8020,0,0x108020,-0x7fefffe0,0x100000,-0x7fff7fe0,-0x7ff00000,-0x7fef8000,0x8000,-0x7ff00000,-0x7fff8000,0x20,-0x7fef7fe0,0x108020,0x20,0x8000,-0x80000000,0x8020,-0x7fef8000,0x100000,-0x7fffffe0,0x100020,-0x7fff7fe0,-0x7fffffe0,0x100020,0x108000,0,-0x7fff8000,0x8020,-0x80000000,-0x7fefffe0,-0x7fef7fe0,0x108000);
var spfunction3 = new Array (0x208,0x8020200,0,0x8020008,0x8000200,0,0x20208,0x8000200,0x20008,0x8000008,0x8000008,0x20000,0x8020208,0x20008,0x8020000,0x208,0x8000000,0x8,0x8020200,0x200,0x20200,0x8020000,0x8020008,0x20208,0x8000208,0x20200,0x20000,0x8000208,0x8,0x8020208,0x200,0x8000000,0x8020200,0x8000000,0x20008,0x208,0x20000,0x8020200,0x8000200,0,0x200,0x20008,0x8020208,0x8000200,0x8000008,0x200,0,0x8020008,0x8000208,0x20000,0x8000000,0x8020208,0x8,0x20208,0x20200,0x8000008,0x8020000,0x8000208,0x208,0x8020000,0x20208,0x8,0x8020008,0x20200);
var spfunction4 = new Array (0x802001,0x2081,0x2081,0x80,0x802080,0x800081,0x800001,0x2001,0,0x802000,0x802000,0x802081,0x81,0,0x800080,0x800001,0x1,0x2000,0x800000,0x802001,0x80,0x800000,0x2001,0x2080,0x800081,0x1,0x2080,0x800080,0x2000,0x802080,0x802081,0x81,0x800080,0x800001,0x802000,0x802081,0x81,0,0,0x802000,0x2080,0x800080,0x800081,0x1,0x802001,0x2081,0x2081,0x80,0x802081,0x81,0x1,0x2000,0x800001,0x2001,0x802080,0x800081,0x2001,0x2080,0x800000,0x802001,0x80,0x800000,0x2000,0x802080);
var spfunction5 = new Array (0x100,0x2080100,0x2080000,0x42000100,0x80000,0x100,0x40000000,0x2080000,0x40080100,0x80000,0x2000100,0x40080100,0x42000100,0x42080000,0x80100,0x40000000,0x2000000,0x40080000,0x40080000,0,0x40000100,0x42080100,0x42080100,0x2000100,0x42080000,0x40000100,0,0x42000000,0x2080100,0x2000000,0x42000000,0x80100,0x80000,0x42000100,0x100,0x2000000,0x40000000,0x2080000,0x42000100,0x40080100,0x2000100,0x40000000,0x42080000,0x2080100,0x40080100,0x100,0x2000000,0x42080000,0x42080100,0x80100,0x42000000,0x42080100,0x2080000,0,0x40080000,0x42000000,0x80100,0x2000100,0x40000100,0x80000,0,0x40080000,0x2080100,0x40000100);
var spfunction6 = new Array (0x20000010,0x20400000,0x4000,0x20404010,0x20400000,0x10,0x20404010,0x400000,0x20004000,0x404010,0x400000,0x20000010,0x400010,0x20004000,0x20000000,0x4010,0,0x400010,0x20004010,0x4000,0x404000,0x20004010,0x10,0x20400010,0x20400010,0,0x404010,0x20404000,0x4010,0x404000,0x20404000,0x20000000,0x20004000,0x10,0x20400010,0x404000,0x20404010,0x400000,0x4010,0x20000010,0x400000,0x20004000,0x20000000,0x4010,0x20000010,0x20404010,0x404000,0x20400000,0x404010,0x20404000,0,0x20400010,0x10,0x4000,0x20400000,0x404010,0x4000,0x400010,0x20004010,0,0x20404000,0x20000000,0x400010,0x20004010);
var spfunction7 = new Array (0x200000,0x4200002,0x4000802,0,0x800,0x4000802,0x200802,0x4200800,0x4200802,0x200000,0,0x4000002,0x2,0x4000000,0x4200002,0x802,0x4000800,0x200802,0x200002,0x4000800,0x4000002,0x4200000,0x4200800,0x200002,0x4200000,0x800,0x802,0x4200802,0x200800,0x2,0x4000000,0x200800,0x4000000,0x200800,0x200000,0x4000802,0x4000802,0x4200002,0x4200002,0x2,0x200002,0x4000000,0x4000800,0x200000,0x4200800,0x802,0x200802,0x4200800,0x802,0x4000002,0x4200802,0x4200000,0x200800,0,0x2,0x4200802,0,0x200802,0x4200000,0x800,0x4000002,0x4000800,0x800,0x200002);
var spfunction8 = new Array (0x10001040,0x1000,0x40000,0x10041040,0x10000000,0x10001040,0x40,0x10000000,0x40040,0x10040000,0x10041040,0x41000,0x10041000,0x41040,0x1000,0x40,0x10040000,0x10000040,0x10001000,0x1040,0x41000,0x40040,0x10040040,0x10041000,0x1040,0,0,0x10040040,0x10000040,0x10001000,0x41040,0x40000,0x41040,0x40000,0x10041000,0x1000,0x40,0x10040040,0x1000,0x41040,0x10001000,0x40,0x10000040,0x10040000,0x10040040,0x10000000,0x40000,0x10001040,0,0x10041040,0x40040,0x10000040,0x10040000,0x10001000,0x10001040,0,0x10041040,0x41000,0x41000,0x1040,0x1040,0x40040,0x10000000,0x10041000);
//create the 16 or 48 subkeys we will need
var keys = des_createKeys (key);
var m=0, i, j, temp, temp2, right1, right2, left, right, looping;
var cbcleft, cbcleft2, cbcright, cbcright2
var endloop, loopinc;
var len = message.length;
var chunk = 0;
//set up the loops for single and triple des
var iterations = keys.length == 32 ? 3 : 9; //single or triple des
if (iterations == 3) {looping = encrypt ? new Array (0, 32, 2) : new Array (30, -2, -2);}
else {looping = encrypt ? new Array (0, 32, 2, 62, 30, -2, 64, 96, 2) : new Array (94, 62, -2, 32, 64, 2, 30, -2, -2);}
//pad the message depending on the padding parameter
if (padding == 2) message += " "; //pad the message with spaces
else if (padding == 1) {temp = 8-(len%8); message += String.fromCharCode (temp,temp,temp,temp,temp,temp,temp,temp); if (temp==8) len+=8;} //PKCS7 padding
else if (!padding) message += "\0\0\0\0\0\0\0\0"; //pad the message out with null bytes
//store the result here
result = "";
tempresult = "";
if (mode == 1) { //CBC mode
cbcleft = (iv.charCodeAt(m++) << 24) | (iv.charCodeAt(m++) << 16) | (iv.charCodeAt(m++) << 8) | iv.charCodeAt(m++);
cbcright = (iv.charCodeAt(m++) << 24) | (iv.charCodeAt(m++) << 16) | (iv.charCodeAt(m++) << 8) | iv.charCodeAt(m++);
m=0;
}
//loop through each 64 bit chunk of the message
while (m < len) {
left = (message.charCodeAt(m++) << 24) | (message.charCodeAt(m++) << 16) | (message.charCodeAt(m++) << 8) | message.charCodeAt(m++);
right = (message.charCodeAt(m++) << 24) | (message.charCodeAt(m++) << 16) | (message.charCodeAt(m++) << 8) | message.charCodeAt(m++);
//for Cipher Block Chaining mode, xor the message with the previous result
if (mode == 1) {if (encrypt) {left ^= cbcleft; right ^= cbcright;} else {cbcleft2 = cbcleft; cbcright2 = cbcright; cbcleft = left; cbcright = right;}}
//first each 64 but chunk of the message must be permuted according to IP
temp = ((left >>> 4) ^ right) & 0x0f0f0f0f; right ^= temp; left ^= (temp << 4);
temp = ((left >>> 16) ^ right) & 0x0000ffff; right ^= temp; left ^= (temp << 16);
temp = ((right >>> 2) ^ left) & 0x33333333; left ^= temp; right ^= (temp << 2);
temp = ((right >>> 8) ^ left) & 0x00ff00ff; left ^= temp; right ^= (temp << 8);
temp = ((left >>> 1) ^ right) & 0x55555555; right ^= temp; left ^= (temp << 1);
left = ((left << 1) | (left >>> 31));
right = ((right << 1) | (right >>> 31));
//do this either 1 or 3 times for each chunk of the message
for (j=0; j<iterations; j+=3) {
endloop = looping[j+1];
loopinc = looping[j+2];
//now go through and perform the encryption or decryption
for (i=looping[j]; i!=endloop; i+=loopinc) { //for efficiency
right1 = right ^ keys[i];
right2 = ((right >>> 4) | (right << 28)) ^ keys[i+1];
//the result is attained by passing these bytes through the S selection functions
temp = left;
left = right;
right = temp ^ (spfunction2[(right1 >>> 24) & 0x3f] | spfunction4[(right1 >>> 16) & 0x3f]
| spfunction6[(right1 >>> 8) & 0x3f] | spfunction8[right1 & 0x3f]
| spfunction1[(right2 >>> 24) & 0x3f] | spfunction3[(right2 >>> 16) & 0x3f]
| spfunction5[(right2 >>> 8) & 0x3f] | spfunction7[right2 & 0x3f]);
}
temp = left; left = right; right = temp; //unreverse left and right
} //for either 1 or 3 iterations
//move then each one bit to the right
left = ((left >>> 1) | (left << 31));
right = ((right >>> 1) | (right << 31));
//now perform IP-1, which is IP in the opposite direction
temp = ((left >>> 1) ^ right) & 0x55555555; right ^= temp; left ^= (temp << 1);
temp = ((right >>> 8) ^ left) & 0x00ff00ff; left ^= temp; right ^= (temp << 8);
temp = ((right >>> 2) ^ left) & 0x33333333; left ^= temp; right ^= (temp << 2);
temp = ((left >>> 16) ^ right) & 0x0000ffff; right ^= temp; left ^= (temp << 16);
temp = ((left >>> 4) ^ right) & 0x0f0f0f0f; right ^= temp; left ^= (temp << 4);
//for Cipher Block Chaining mode, xor the message with the previous result
if (mode == 1) {if (encrypt) {cbcleft = left; cbcright = right;} else {left ^= cbcleft2; right ^= cbcright2;}}
tempresult += String.fromCharCode ((left>>>24), ((left>>>16) & 0xff), ((left>>>8) & 0xff), (left & 0xff), (right>>>24), ((right>>>16) & 0xff), ((right>>>8) & 0xff), (right & 0xff));
chunk += 8;
if (chunk == 512) {result += tempresult; tempresult = ""; chunk = 0;}
} //for every 8 characters, or 64 bits in the message
//return the result as an array
return result + tempresult;
} //end of des
In case it may be relevant, I have changed the way the request is made too. When the user clicks login, the following promise is called:
Ntlm.login('url')
.then(() => {
console.log('Success');
appSettings.setString('token', 'abc123');
this.router.navigate(['/ilt']);
})
.catch(error => {
console.log('Failed');
appSettings.remove('token');
alert('Failed! ' + error );
})
I created a new login function in the ntlm.js file:
Ntlm.login = function(url) {
  return new Promise((resolve, reject) => {
    if (!Ntlm.domain || !Ntlm.username || !Ntlm.lmHashedPassword || !Ntlm.ntHashedPassword) {
      Ntlm.error('No NTLM credentials specified. Use Ntlm.setCredentials(...) before making calls.');
    }
    var hostname = Ntlm.getLocation(url).hostname;
    var msg1 = Ntlm.createMessage1(hostname);

    var request = new XMLHttpRequest();
    request.onload = function() {
      var response = request.getResponseHeader('WWW-Authenticate');
      var challenge = Ntlm.getChallenge(response);
      var msg3 = Ntlm.createMessage3(challenge, hostname);

      request.open('GET', url, false);
      var authorization = 'NTLM ' + msg3.toBase64();
      request.setRequestHeader('Authorization', authorization);
      request.onload = function() {
        if (request.readyState == 4 && request.status == 200) {
          resolve(request.status);
        }
        else if (request.readyState == 4 && request.status != 200) {
          reject(request.status);
        }
      };
      request.send(null);
    };
    request.open('GET', url, false);
    request.setRequestHeader('Authorization', 'NTLM ' + msg1.toBase64());
    request.send(null);
  })
};
This is all working fine on the Android version; I just can't understand why it isn't on iOS. Very frustrating! If anyone can make sense of this, I would be eternally grateful. I realise it is a lot of code and quite a niche area!
Many thanks,
UPDATE
I think there may be a difference in the way console.log behaves on Android and iOS, which could explain some of the missing characters. I created a new test account (testuser / testing) and logged at various points to try to establish what was happening in the NTLM process step by step. Here are the logs for Android:
NTLM WALKTHROUGH ON ANDROID
Step 1: Creates a cryptographic hash of the user's password:
lmHashedPassword = -UE}{}*ªÓ´5µî
ntHashedPassword = |SÏ¥ê}�;�� ûQ£õ
Step 2: Sends first request to the server, with the following Authorisation header:
NTLM TlRMTVNTUAABAAAAA7IAAAUABQBEAAAAJAAkACAAAABHQVRFV0FZLlNUUEFVTFNDQVRIT0xJQ0NPTExFR0UuQ08uVUtBRE1JTg==
Step 3: Server sends a challenge back to client:
¡2#�³Q%Ï
Step 4: Client encrypts this challenge with the hash of the user's password and sends it back to the server (the response).
The Authorization header is: NTLM TlRMTVNTUAADAAAAGAAYAKQAAAAYABgAvAAAAAoACgBAAAAAEgASAEoAAABIAEgAXAAAAAAAAADUAAAAAYIAAEEARABNAEkATgB0AGUAcwB0AHMAdABhAGYAZgBHAEEAVABFAFcAQQBZAC4AUwBUAFAAQQBVAEwAUwBDAEEAVABIAE8ATABJAEMAQwBPAEwATABFAEcARQAuAEMATwAuAFUASwBsEslcvTQhhY3+RgKtqufBzFrmufFKNkAHXJRcA6ThOAU105+NJBGnsn2ri6Ziuv8=
Step 5: Now the server has sent the username, challenge and response to the Domain Controller.
The DC compares them and returns a status of: 200
Here are the logs for iOS:
NTLM WALKTHROUGH ON IOS
Step 1: Creates a cryptographic hash of the user's password:
lmHashedPassword = -UE}{}*ªÓ´5µî
ntHashedPassword = |SÏ¥ê}�;�� ûQ£õ
Step 2: Sends the first request to the server, with the following Authorisation header:
NTLM TlRMTVNTUAABAAAAA7IAAAUABQBEAAAAJAAkACAAAABHQVRFV0FZLlNUUEFVTFNDQVRIT0xJQ0NPTExFR0UuQ08uVUtBRE1JTg==
Step 3: Server sends a challenge back to client:
q�v¹,
Step 4: Client encrypts this challenge with the hash of the user's password and sends it back to the server (the response).
The Authorization header is: NTLM TlRMTVNTUAADAAAAGAAYAKQAAAAYABgAvAAAAAoACgBAAAAAEgASAEoAAABIAEgAXAAAAAAAAADUAAAAAYIAAEEARABNAEkATgB0AGUAcwB0AHMAdABhAGYAZgBHAEEAVABFAFcAQQBZAC4AUwBUAFAAQQBVAEwAUwBDAEEAVABIAE8ATABJAEMAQwBPAEwATABFAEcARQAuAEMATwAuAFUASwAP9HN5WjPCs9hMRrmttnYHieFrThwyUAWanKWtVdzOqDOJ2isUdQeV0ISmv9TT0ek=
Step 5: Now the server has sent the username, challenge and response to the Domain Controller.
The DC compares them and returns a status of: 401
It seems the credentials are worked out the same, and the challenge returned from the server is random each time. But on iOS the challenge seems to be missing characters - possibly because of the type of characters involved. The client then encrypts the challenge with the hashed passwords and sends it back to the server. I imagine it is this part which is not correct on iOS.
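One way to take each platform's console.log out of the equation would be to dump the raw character codes as hex on both platforms and compare those instead of the rendered strings. A small debugging sketch (my own helper, not part of ntlm.js):
// Debugging only: log each char code of a binary string as two hex digits.
function toHex(binaryString) {
  var out = [];
  for (var i = 0; i < binaryString.length; i++) {
    var h = binaryString.charCodeAt(i).toString(16);
    out.push(h.length < 2 ? '0' + h : h);
  }
  return out.join(' ');
}
// e.g. inside Ntlm.login, assuming `challenge` is the value returned by Ntlm.getChallenge:
console.log('challenge bytes: ' + toHex(challenge));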

Javascript: unicode character to BYTE based hex escape sequence (NOT surrogates)

In JavaScript I am trying to turn Unicode characters into byte-based hex escape sequences that are compatible with C:
ie. 😄
becomes: \xF0\x9F\x98\x84 (correct)
NOT javascript surrogates, not \uD83D\uDE04 (wrong)
I cannot figure out the mathematical relationship between the four bytes C wants and the two surrogates JavaScript uses. I suspect the algorithm is far more complex than my feeble attempts.
Thanks for any tips.
encodeURIComponent does this work:
var input = "\uD83D\uDE04";
var result = encodeURIComponent(input).replace(/%/g, "\\x"); // \xF0\x9F\x98\x84
Update: Actually, C strings can contain digits and letters without escaping, but if you really need to escape them:
function escape(s, escapeEverything) {
  if (escapeEverything) {
    s = s.replace(/[\x10-\x7f]/g, function (s) {
      return "-x" + s.charCodeAt(0).toString(16).toUpperCase();
    });
  }
  s = encodeURIComponent(s).replace(/%/g, "\\x");
  if (escapeEverything) {
    s = s.replace(/\-/g, "\\");
  }
  return s;
}
Found a solution here: http://jonisalonen.com/2012/from-utf-16-to-utf-8-in-javascript/
I would have never figured out THAT math, wow.
somewhat minified
function UTF8seq(s) {
var i,c,u=[];
for (i=0; i < s.length; i++) {
c = s.charCodeAt(i);
if (c < 0x80) { u.push(c); }
else if (c < 0x800) { u.push(0xc0 | (c >> 6), 0x80 | (c & 0x3f)); }
else if (c < 0xd800 || c >= 0xe000) { u.push(0xe0 | (c >> 12), 0x80 | ((c>>6) & 0x3f), 0x80 | (c & 0x3f)); }
else { i++; c = 0x10000 + (((c & 0x3ff)<<10) | (s.charCodeAt(i) & 0x3ff));
u.push(0xf0 | (c >>18), 0x80 | ((c>>12) & 0x3f), 0x80 | ((c>>6) & 0x3f), 0x80 | (c & 0x3f)); }
}
for (i=0; i < u.length; i++) { u[i]=u[i].toString(16); }
return '\\x'+u.join('\\x');
}
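Running it on the example from the question gives the C-style sequence (lower-case hex, since toString(16) emits lower case):
console.log(UTF8seq("😄")); // \xf0\x9f\x98\x84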
Your C code expects a UTF-8 string (the symbol is represented as 4 bytes). The JS representation you see is UTF-16, however (the symbol is represented as 2 uint16s, a surrogate pair).
You will first need to get the (Unicode) code point for your symbol (from the UTF-16 JS string), then build the UTF-8 representation for it from that.
Since ES6 you can use the codePointAt method for the first part, which I would recommend using as a shim even if not supported. I guess you don't want to decode surrogate pairs yourself :-)
For the rest, I don't think there's a library method, but you can write it yourself according to the spec:
function hex(x) {
x = x.toString(16);
return (x.length > 2 ? "\\u0000" : "\\x00").slice(0,-x.length)+x.toUpperCase();
}
var c = "😄";
console.log(c.length, hex(c.charCodeAt(0))+hex(c.charCodeAt(1))); // 2, "\uD83D\uDE04"
var cp = c.codePointAt(0);
var bytes = new Uint8Array(4);
bytes[3] = 0x80 | cp & 0x3F;
bytes[2] = 0x80 | (cp >>>= 6) & 0x3F;
bytes[1] = 0x80 | (cp >>>= 6) & 0x3F;
bytes[0] = 0xF0 | (cp >>>= 6) & 0x3F;
console.log(Array.prototype.map.call(bytes, hex).join("")) // "\xF0\x9F\x98\x84"
(tested in Chrome)

How do I encrypt Crypto-JS keys with JSBN?

I'm using JSBN to encrypt/decrypt data using public/private keypairs. It works great for text data, including hex strings.
My problem is that I now have binary data, specifically Crypto-JS WordArrays, that I need to encrypt with a public key and send along to another platform.
So consider this:
var key = CryptoJS.lib.WordArray.random(256/8);
var rsa = new RSAKey();
rsa.setPublic(modulus, exponent);
var encrypted_key = rsa.encrypt(key.toString());
This works, but it means 'encrypted_key' is in fact the encryption of a hex string, not of the actual key bytes. I need to encrypt the actual key.
So I see two challenges here:
1) I'm not 100% sure how to get the actual bytes out of a CryptoJS.lib.WordArray -- though that doesn't seem totally insurmountable.
2) I have no idea if it's even possible to encrypt binary data using JSBN. I'd love pointers to figure out how to do it.
Any thoughts?
The JSBN library contains a function, pkcs1pad2(), which converts the text to numeric values using JavaScript's charCodeAt() function. You'll see that conversion code in the first while() loop:
function pkcs1pad2(s,n) {
  if(n < s.length + 11) { // TODO: fix for utf-8
    alert("Message too long for RSA");
    return null;
  }
  var ba = new Array();
  var i = s.length - 1;
  while(i >= 0 && n > 0) {
    var c = s.charCodeAt(i--);
    if(c < 128) { // encode using utf-8
      ba[--n] = c;
    }
    else if((c > 127) && (c < 2048)) {
      ba[--n] = (c & 63) | 128;
      ba[--n] = (c >> 6) | 192;
    }
    else {
      ba[--n] = (c & 63) | 128;
      ba[--n] = ((c >> 6) & 63) | 128;
      ba[--n] = (c >> 12) | 224;
    }
  }
  ba[--n] = 0;
  var rng = new SecureRandom();
  var x = new Array();
  while(n > 2) { // random non-zero pad
    x[0] = 0;
    while(x[0] == 0) rng.nextBytes(x);
    ba[--n] = x[0];
  }
  ba[--n] = 2;
  ba[--n] = 0;
  return new BigInteger(ba);
}
If you wish to encrypt binary data, then you'll likely have to modify this function so it converts the input in the way you want.
Below is an example of pkcs1pad2() modified to accept binary data in the form of a hex string. If you use this version of pkcs1pad2() then you can convert your CryptoJS.lib.WordArray into hex and pass that hex string to rsa.encrypt().
function pkcs1pad2(hexPlaintext,n) {
  if(n < hexPlaintext.length/2 + 11) {
    alert("Message too long for RSA");
    return null;
  }
  var ba = new Array();
  var i = hexPlaintext.length;
  while(i >= 2 && n > 0) {
    ba[--n] = parseInt(hexPlaintext.slice(i-2,i),16);
    i-=2;
  }
  ba[--n] = 0;
  var rng = new SecureRandom();
  var x = new Array();
  while(n > 2) { // random non-zero pad
    x[0] = 0;
    while(x[0] == 0) rng.nextBytes(x);
    ba[--n] = x[0];
  }
  ba[--n] = 2;
  ba[--n] = 0;
  return new BigInteger(ba);
}
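Assuming this modified pkcs1pad2() replaces the original one inside the JSBN rsa.js, the calling code from the question could then pass the key's hex form directly (a sketch; modulus and exponent set up as in the question):
var key = CryptoJS.lib.WordArray.random(256/8);
var rsa = new RSAKey();
rsa.setPublic(modulus, exponent);
var encrypted_key = rsa.encrypt(key.toString(CryptoJS.enc.Hex)); // hex of the raw key bytes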
Alternatively, you could modify it to take the WordArray directly and convert that to the array format that is used by JSBN, but I'll leave that as an exercise for the reader.
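For the sake of completeness, here is a rough sketch (mine, untested against JSBN) of pulling the raw bytes out of a CryptoJS WordArray via its words and sigBytes properties; those bytes could then be fed to a byte-array variant of the padding function:
// Sketch: extract raw bytes from a CryptoJS.lib.WordArray.
// CryptoJS stores data as big-endian 32-bit words plus a significant-byte count.
function wordArrayToBytes(wordArray) {
  var words = wordArray.words;
  var sigBytes = wordArray.sigBytes;
  var bytes = new Array(sigBytes);
  for (var i = 0; i < sigBytes; i++) {
    bytes[i] = (words[i >>> 2] >>> (24 - (i % 4) * 8)) & 0xff;
  }
  return bytes;
}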
The pkcs1pad2 function converted from JavaScript to Java:
public BigInteger pkcs1pad2(String data, int keysize) {
    byte[] buffer = new byte[keysize];
    Random rg = new Random();
    if (keysize < data.length() + 11)
        return null;
    int i = data.length() - 1;
    while (i >= 0 && keysize > 0) {
        --keysize;
        buffer[keysize] = (byte) data.charAt(i);
        i--;
    }
    --keysize;
    buffer[keysize] = 0;
    while (keysize > 2) {
        --keysize;
        buffer[keysize] = (byte) (rg.nextInt(254) + 1);
    }
    --keysize;
    buffer[keysize] = 2;
    --keysize;
    buffer[keysize] = 0;
    return new BigInteger(buffer);
}
The RSA encryption:
http://hc.apache.org/downloads.cgi
//you need httpcomponents-client-4.3.1-bin.zip from apache.org
//this contains working Base64 encoder!
import org.apache.commons.codec.binary.Base64;
public String encrypt(String data, String modulus, String exponent) throws UnsupportedEncodingException {
    byte[] exp = Helper.hexToBytes(exponent.toCharArray());
    byte[] mod = Helper.hexToBytes(modulus.toCharArray());
    BigInteger expB = new BigInteger(exp);
    BigInteger modB = new BigInteger(mod);
    BigInteger data2 = this.pkcs1pad2(data, (modB.bitLength() + 7) >> 3);
    BigInteger data3 = data2.modPow(expB, modB);
    byte[] encoding = (new Base64()).encode(Helper.hexToBytes(data3.toString(16).toCharArray()));
    return new String(encoding, "US-ASCII");
}
and the Helper.hexToBytes:
public static byte[] hexToBytes(char[] hex) throws IllegalArgumentException {
    byte[] data = new byte[hex.length / 2];
    for (int i = 0, j = 0; j < data.length; ++j) {
        int hi = Character.digit(hex[i++], 16);
        int lo = Character.digit(hex[i++], 16);
        if ((hi < 0) || (lo < 0))
            throw new IllegalArgumentException();
        data[j] = (byte) (hi << 4 | lo);
    }
    return data;
}
