CryptoJS - Decrypt an encrypted file - javascript

I'm trying to write an application to do end-to-end encryption for files with JS in the browser. However, I can't seem to get all files to decrypt correctly.
TL;DR As it's impractical to encrypt files bigger than 1MB as a whole, I'm trying to encrypt them chunk by chunk. After doing so, I try to write the encrypted words (from CryptoJS's WordArray) into a blob. For decryption, I read the file and split it into chunks according to a map generated while encrypting, and try to decrypt them. The problem is that the decrypted result is empty (sigBytes: 0)!
I guess I'm not reading the chunks correctly while decrypting. Please take a look at the code below for the getBlob function (writing data to the blob) and the last part of decryptFile (reading the chunks).
More explanation
I'm using CryptoJS AES with default settings.
Right now my code looks like this:
function encryptFile (file, options, resolve, reject) {
  if (!options.encrypt) {
    return resolve(file)
  }
  if (!options.processor || !options.context) {
    return reject('No encryption method.')
  }

  function encryptBlob (file, optStart, optEnd) {
    const start = optStart || 0
    let stop = optEnd || CHUNK_SIZE
    if (stop > file.size - 1) {
      stop = file.size
    }
    const blob = file.slice(start, stop)
    const fileReader = new FileReader()
    fileReader.onloadend = function () {
      if (this.readyState !== FileReader.DONE) return
      const index = Math.ceil(optStart / CHUNK_SIZE)
      const result = CryptoJS.lib.WordArray.create(this.result)
      encryptedFile[index] = encrypt(result)
      chunksResolved++
      if (chunksResolved === count) {
        const {sigBytes, sigBytesMap, words} = getCipherInfo(encryptedFile)
        const blob = getBlob(sigBytes, words)
        resolve(blob, Object.keys(sigBytesMap))
      }
    }
    fileReader.readAsArrayBuffer(blob)
  }

  let chunksResolved = 0
  const encryptedFile = []
  const CHUNK_SIZE = 1024 * 1024
  const count = Math.ceil(file.size / CHUNK_SIZE)
  const encrypt = value => options.processor.call(
    options.context, value, 'file',
    (v, k) => CryptoJS.AES.encrypt(v, k))
  for (let start = 0; (start + CHUNK_SIZE) / CHUNK_SIZE <= count; start += CHUNK_SIZE) {
    encryptBlob(file, start, start + CHUNK_SIZE - 1)
  }
}
As you can see I'm trying to read the file chunk by chunk (each chunk is 1MB or fileSize % 1MB) as ArrayBuffer, converting it to WordArray for CryptoJS to understand and encrypt it.
After encrypting all the chunks, I try to write each of their words to a blob (using code I found in CryptoJS's issues on Google Code, shown below), and I guess this is where things go wrong. I also generated a map of where each encrypted chunk ends so I can later use it to extract the chunks from the binary file for decryption.
And here's how I decrypt the files:
function decryptFile (file, sigBytesMap, filename, options, resolve, reject) {
  if (!options.decrypt) {
    return resolve(file)
  }
  if (!options.processor || !options.context) {
    return reject('No decryption method.')
  }

  function decryptBlob (file, index, start, stop) {
    const blob = file.slice(start, stop)
    const fileReader = new FileReader()
    fileReader.onloadend = function () {
      if (this.readyState !== FileReader.DONE) return
      const result = CryptoJS.lib.WordArray.create(this.result)
      decryptedFile[index] = decrypt(result)
      chunksResolved++
      if (chunksResolved === count) {
        const {sigBytes, words} = getCipherInfo(decryptedFile)
        const finalFile = getBlob(sigBytes, words)
        resolve(finalFile, filename)
      }
    }
    fileReader.readAsArrayBuffer(blob)
  }

  let chunksResolved = 0
  const count = sigBytesMap.length
  const decryptedFile = []
  const decrypt = value => options.processor.call(
    options.context, value, 'file',
    (v, k) => CryptoJS.AES.decrypt(v, k))
  for (let i = 0; i < count; i++) {
    decryptBlob(file, i, parseInt(sigBytesMap[i - 1]) || 0, parseInt(sigBytesMap[i]) - 1)
  }
}
Decryption is exactly like the encryption but doesn't work. Although chunks are not 1MB anymore, they are limited to sigBytes mentioned in the map. There is no result for the decryption! sigBytes: 0.
Here's the code for generating a blob and getting sigbytesMap:
function getCipherInfo (ciphers) {
  const sigBytesMap = []
  const sigBytes = ciphers.reduce((tmp, cipher) => {
    tmp += cipher.sigBytes || cipher.ciphertext.sigBytes
    sigBytesMap.push(tmp)
    return tmp
  }, 0)
  const words = ciphers.reduce((tmp, cipher) => {
    return tmp.concat(cipher.words || cipher.ciphertext.words)
  }, [])
  return {sigBytes, sigBytesMap, words}
}

function getBlob (sigBytes, words) {
  const bytes = new Uint8Array(sigBytes)
  for (var i = 0; i < sigBytes; i++) {
    const byte = (words[i >>> 2] >>> (24 - (i % 4) * 8)) & 0xff
    bytes[i] = byte
  }
  return new Blob([ new Uint8Array(bytes) ])
}
I'm guessing the issue is the method I'm using to read the encrypted chunks. Or maybe writing them!
I should also mention that previously I did something different for encryption: I stringified each WordArray I got from CryptoJS.AES.encrypt using the toString method with the default encoding (which I believe is CryptoJS.enc.Hex), but some files didn't decrypt correctly. It didn't seem related to the size of the original files, but rather to their types. Again, I'm guessing!

Turns out the problem was that the WordArray returned by CryptoJS.AES.decrypt(value, key) has 4 extra words of padding which should not be included in the final result. CryptoJS tries to unpad the result, but it only adjusts sigBytes accordingly and doesn't change words. So when decrypting, pop those extra words before writing the chunks to the file: 4 words for full chunks and 3 for smaller ones (the last chunk).
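In other words, after CryptoJS.AES.decrypt you can truncate words so it matches the already-correct sigBytes before concatenating chunks. A minimal sketch (trimWordArray is an illustrative helper, not part of CryptoJS):

```javascript
// Drop the padding words that CryptoJS leaves behind after unpadding:
// sigBytes is already correct, so keep only the words it actually covers.
function trimWordArray(wordArray) {
  wordArray.words.length = Math.ceil(wordArray.sigBytes / 4);
  return wordArray;
}
```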

check this issue
import CryptoJS from "crypto-js";

async function encryptBlobToBlob(blob: Blob, secret: string): Promise<Blob> {
  const wordArray = CryptoJS.lib.WordArray.create(await blob.arrayBuffer());
  const result = CryptoJS.AES.encrypt(wordArray, secret);
  return new Blob([result.toString()]);
}

export async function decryptBlobToBlob(blob: Blob, secret: string): Promise<Blob> {
  const decryptedRaw = CryptoJS.AES.decrypt(await blob.text(), secret);
  return new Blob([wordArrayToByteArray(decryptedRaw)]);
}

function wordToByteArray(word, length) {
  const ba = [];
  const xFF = 0xff;
  if (length > 0) ba.push(word >>> 24);
  if (length > 1) ba.push((word >>> 16) & xFF);
  if (length > 2) ba.push((word >>> 8) & xFF);
  if (length > 3) ba.push(word & xFF);
  return ba;
}

function wordArrayToByteArray({ words, sigBytes }: { sigBytes: number; words: number[] }) {
  const result = [];
  let bytes;
  let i = 0;
  while (sigBytes > 0) {
    bytes = wordToByteArray(words[i], Math.min(4, sigBytes));
    sigBytes -= bytes.length;
    result.push(bytes);
    i++;
  }
  return new Uint8Array(result.flat());
}

async function main() {
  const secret = "bbbb";
  const blob = new Blob(["1".repeat(1e3)]);
  const encryptedBlob = await encryptBlobToBlob(blob, secret);
  console.log("encrypted blob size", encryptedBlob.size);
  const decryptedBlob = await decryptBlobToBlob(encryptedBlob, secret);
  console.log("decryptedBlob", decryptedBlob);
  console.log(await decryptedBlob.text());
}

main();
main();

Related

reading a big file in chunks and adding to object

I am trying to read a big file in chunks instead of loading it directly into memory, using Node.js. My goal is to read the file (which is too big to load into memory at once), group the anagrams in it, and then output them.
I started following the article described here
It basically involves creating a shared buffer at the beginning of the program and passing it down.
Essentially it involves the following functions
function readBytes(fd, sharedBuffer) {
  return new Promise((resolve, reject) => {
    fs.read(fd, sharedBuffer, 0, sharedBuffer.length, null, (err) => {
      if (err) {
        return reject(err);
      }
      resolve();
    });
  });
}

async function* generateChunks(filePath, size) {
  const sharedBuffer = Buffer.alloc(size);
  const stats = fs.statSync(filePath); // file details
  const fd = fs.openSync(filePath); // file descriptor
  let bytesRead = 0; // how many bytes were read
  let end = size;
  for (let i = 0; i < Math.ceil(stats.size / size); i++) {
    await readBytes(fd, sharedBuffer);
    bytesRead = (i + 1) * size;
    if (bytesRead > stats.size) {
      // When we reach the end of file,
      // we have to calculate how many bytes were actually read
      end = size - (bytesRead - stats.size);
    }
    yield sharedBuffer.slice(0, end);
  }
}
I then call it in main like the following. My goal is to group all the anagrams and then output them. However, the issue I am having is that when I run the program, the first 99,000 items are accessible via console.log(Object.values(result)[99000]); but after that I am getting undefined. Any ideas what I am doing wrong?
const CHUNK_SIZE = 10000000; // 10MB

async function main() {
  let result = {};
  for await (const chunk of generateChunks("Data/example2.txt", CHUNK_SIZE)) {
    let words = chunk.toString("utf8").split("\n");
    for (let word of words) {
      let cleansed = word.split("").sort().join("");
      if (result[cleansed]) {
        result[cleansed].push(word);
      } else {
        result[cleansed] = [word];
      }
    }
  }
  console.log(Object.values(result)[99000]);
  return Object.values(result);
}
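One likely culprit (an assumption, since the dataset isn't shown): a chunk boundary can split a word in half, producing bogus keys and truncated words. The trailing partial line of each chunk should be carried over and prepended to the next one. A minimal sketch of that carry-over logic:

```javascript
// Carry the trailing partial line of each chunk into the next one, so words
// are never split at chunk boundaries. `chunks` is any iterable of strings
// (e.g. the decoded buffers yielded by generateChunks above).
function* completeLines(chunks) {
  let tail = "";
  for (const chunk of chunks) {
    const lines = (tail + chunk).split("\n");
    tail = lines.pop(); // the last piece may be incomplete; keep it for later
    yield* lines;
  }
  if (tail) yield tail; // flush whatever remains after the final chunk
}
```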

Encryption javascript to-from c++, qt

Yes, there are 100 topics about this.
Yes, most of them are snippets/parts of code that answer a specific problem and don't actually help.
So perhaps this topic can provide a "complete" solution for symmetric encryption, and maybe, if someone is willing to help, an asymmetric private/public key example too.
So here are pre-reqs
javascript:
npm install crypto
c++
https://github.com/QuasarApp/Qt-AES/tree/master
and Qt
Now In order to do encryption the tutorial on this page works quite well >
Example 2 > https://www.geeksforgeeks.org/node-js-crypto-createdecipheriv-method/?ref=lbp
Now, as far as I can tell, say we create our key from a password:
const password = 'mySuperFancyPassword';
// Defining key
export const key = crypto.scryptSync(password, 'salt', 32);
This password is not the same as the one we would make in C++ using >
QAESEncryption encryption(QAESEncryption::AES_256, QAESEncryption::CBC,QAESEncryption::PKCS7);
QString key("mySuperFancyPassword");
QByteArray hashKey = QCryptographicHash::hash(key.toLocal8Bit(), QCryptographicHash::Sha256);
QByteArray decodeText = encryption.decode(jsByteArray, hashKey , jsIv);
Because Qt-AES takes Hash rather than whatever crypto.scryptSync() produces.
I suppose the question is... how can I match these 2 passwords?
If I were to pass javascript key-hex to C++ and convert it to byte array (auto key = QByteArray::fromHex(hexByte)) C++ library will decompile the string properly and with PKCS7 padding it will match javascript.
Now I know that I should use OpenSSL as that is standard, but every time I look at it I want to cry.
So this library here seems to be very dummy friendly so far...
However, if anyone is interested in OpenSSL, there is this interesting "file" > https://github.com/soroush/qtz-security/blob/dev/lib/src/crypto.cpp
That shows how to do it with OpenSSL, but I get error 0 in
error_code = EVP_DecryptFinal_ex(ctx, plaintext + len, &len);
in decryptRawData(const QByteArray& input, const QByteArray& rawKey, const QByteArray& rawIV)
So same issue, black magic! I did match my EVP_aes_256_cbc settings between JS and C++ in the second library.
Bottom line, can anyone help me convert the KEY to properly match between C++ and JavaScript?
Or help with the second lib, OpenSSL? But I take it it's the same issue of salt/key generation...
UPDATE!
Big thanks to @absolute.madness for his solution!
Also, I found another way of... "partially" solving the problem.
I found out that crypto has PKCS5_PBKDF2_HMAC support too! So here is a proposed workflow for that one. However, even though I can send from JavaScript > C++, I can't send C++ > JavaScript using the QAESEncryption library, due to (I think) incorrect padding? It crashes at the decipher.final() call in
decrypted = Buffer.concat([decrypted, decipher.final()]);
Here is the JavaScript & C++ code that I got working up to 50%.
JS:
// Defining password
const password: string = process.env.KEY_LICENSE_GENERIC! as string
// Defining key
var key: Buffer
crypto.pbkdf2(password, 'salt_', 10000, 32, 'sha256', (err, derivedKey) => {
  if (err) {
    throw new Error();
  }
  key = derivedKey
})
const iv = crypto.randomBytes(16);
export function encrypt2(text: string) {
  // Creating Cipheriv with its parameter
  let cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(key), iv);
  // Updating text
  let encrypted = cipher.update(text);
  // Concatenating iv + encrypted data + final block (padding)
  encrypted = Buffer.concat([iv, encrypted, cipher.final()]);
  return encrypted.toString('hex')
}

// A decrypt function
export function decrypt2(text: string) {
  let rawData = Buffer.from(text, 'hex');
  if (rawData.length > 16) {
    let iv = rawData.subarray(0, 16) // We put the IV in the first 16 bytes.
    let encr = rawData.subarray(16, rawData.length)
    // Creating Decipher
    let decipher = crypto.createDecipheriv('aes-256-cbc', Buffer.from(key), iv);
    // Updating encrypted text
    let decrypted = decipher.update(encr);
    decrypted = Buffer.concat([decrypted, decipher.final()]);
    return decrypted.toString()
  }
  return ""
}
c++
#include <openssl/rand.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>

QByteArray generateKey(const QByteArray &phrase, bool encode, const int iterations) {
    const int length = 32;
    QByteArray salt("salt_");
    unsigned char key[length];
    PKCS5_PBKDF2_HMAC(
        phrase.data(), phrase.size(),
        (const unsigned char *) (salt.data()), salt.size(),
        iterations, EVP_sha256(),
        length, key
    );
    return encode ? QByteArray((const char *) (key), length).toBase64(QByteArray::Base64UrlEncoding)
                  : QByteArray((const char *) (key), length);
}

QByteArray randomBytes(int size) {
    QByteArray bytes(size, char(0));
    if (RAND_bytes((unsigned char *) (bytes.data()), bytes.size()) != 1) {
        QRandomGenerator::securelySeeded().fillRange((quint32 *) (bytes.data()), bytes.size() / sizeof(quint32));
    }
    return bytes;
}
void decrypt() {
    QByteArray hexEnc = reply.readAll(); // QNetworkReply*
    QByteArray enc = QByteArray::fromHex(hexEnc.toUtf8());
    auto iv = enc.mid(0, 16);
    enc = enc.mid(16, enc.size());
    QAESEncryption encryption(QAESEncryption::AES_256,
                              QAESEncryption::CBC, QAESEncryption::PKCS7);
    QByteArray decodeText = encryption.decode(enc, generateKey("Fancy password", false, 10000), iv);
    /// Remove padding, I think this is missing when we encrypt.
    QString decodedString = QString(encryption.removePadding(decodeText));
}

void encrypt() {
    auto iv = randomBytes(16);
    auto encrypted = encryption.encode("Hello test code",
                                       generateKey("Fancy password", false, 10000), iv); // bad encrypt, js will crash.
}
You cannot just use SHA-256 to match the scrypt key derivation algorithm, obviously. Scrypt is defined in RFC 7914 and is not (as of yet) implemented in Qt via its interfaces; OpenSSL (used by Qt) supports it, on the other hand. I added 2 implementations of the Node.js example 1 which you reference: the first one uses OpenSSL & Qt-AES, the second uses pure OpenSSL. Initially, I got an error from EVP_DecryptFinal_ex similar to what you described. When I started to debug, it turned out that EVP_DecodeBlock was returning an incorrect size when decoding from base64. After using EVP_DecodeInit/EVP_DecodeUpdate/EVP_DecodeFinal to handle base64 instead of EVP_DecodeBlock, as was suggested here, the error was gone.
I include the C++ code, which roughly translates the JS code from example 1 to C++ (I used OpenSSL 1.1.1q for testing):
#include <QDebug>
#include <openssl/aes.h>
#include <openssl/evp.h>
#include <openssl/kdf.h>
#include "qaesencryption.h"
void error(const char *msg)
{
    qCritical(msg);
}

#define ERROR(msg) \
{ \
    qCritical(msg); \
    return; \
}
// scrypt key derivation function/algorithm, see also
// https://www.openssl.org/docs/man1.1.1/man7/scrypt.html
// returns -1 on error and 1 on success
int scrypt_kdf(unsigned char *key, size_t *keylen,
               const unsigned char *pass, size_t passlen,
               const unsigned char *salt, size_t saltlen,
               uint64_t N = 16384, uint64_t r = 8, uint64_t p = 1)
{
    // Note, default values for N, r, p are taken from
    // https://nodejs.org/api/crypto.html#cryptoscryptsyncpassword-salt-keylen-options
    EVP_PKEY_CTX *kctx;
    int ret = 1;
    kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_SCRYPT, NULL);
    if (EVP_PKEY_derive_init(kctx) <= 0)
    {
        error("EVP_PKEY_derive_init failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_CTX_set1_pbe_pass(kctx, pass, passlen) <= 0)
    {
        error("EVP_PKEY_CTX_set1_pbe_pass failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_CTX_set1_scrypt_salt(kctx, salt, saltlen) <= 0)
    {
        error("EVP_PKEY_CTX_set1_scrypt_salt failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_CTX_set_scrypt_N(kctx, N) <= 0)
    {
        error("EVP_PKEY_CTX_set_scrypt_N failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_CTX_set_scrypt_r(kctx, r) <= 0)
    {
        error("EVP_PKEY_CTX_set_scrypt_r failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_CTX_set_scrypt_p(kctx, p) <= 0)
    {
        error("EVP_PKEY_CTX_set_scrypt_p failed");
        ret = -1;
    }
    if (1 == ret && EVP_PKEY_derive(kctx, key, keylen) <= 0)
    {
        error("EVP_PKEY_derive failed");
        ret = -1;
    }
    EVP_PKEY_CTX_free(kctx);
    return ret;
}
// we use OpenSSL for scrypt key derivation algorithm and Qt/Qt-AES for decryption
void example1_openssl_and_qt(void)
{
    unsigned char key[24];
    size_t sz_key = sizeof(key);
    const char password[] = "bncaskdbvasbvlaslslasfhj";
    const char salt[] = "GfG";
    QByteArray iv(16, char(0));
    QByteArray encrypted = QByteArray::fromBase64("MfHwhG/WPv+TIbG/qM78qA==");
    // you can also try
    // encrypted = QByteArray::fromBase64(
    //     "j9QsjAFxuIAK0zvi5Iq2Z2+mo44RRpR2VMnJTNS7Ey0IkPjsGSJ+A+OPuvAqGO77Ww"
    //     "S2rI0dnJVREkFz0v8hug==");
    if (scrypt_kdf(
            key, &sz_key, reinterpret_cast<const unsigned char*>(password),
            sizeof(password)-1, reinterpret_cast<const unsigned char*>(salt),
            sizeof(salt)-1) <= 0)
    {
        ERROR("Key derivation failed");
    }
    OPENSSL_assert(sz_key == sizeof(key));
    QAESEncryption encryption(QAESEncryption::AES_192, QAESEncryption::CBC,
                              QAESEncryption::PKCS7);
    QByteArray decrypted = encryption.decode(
        encrypted, QByteArray(reinterpret_cast<char*>(key), sizeof(key)), iv);
    qDebug() << decrypted;
}
// we use qt only for base64 decoding
void example1_pure_openssl(void)
{
    int len; // general purpose length variable, used in EVP_*Update/EVP_*Final
    EVP_ENCODE_CTX *b64ctx;
    unsigned char key[24];
    size_t sz_key = sizeof(key);
    EVP_CIPHER_CTX *dctx;
    const char password[] = "bncaskdbvasbvlaslslasfhj";
    const char salt[] = "GfG";
    unsigned char iv[16] = { 0 }; // 16 zero bytes
    char encrypted_b64[] = "MfHwhG/WPv+TIbG/qM78qA==";
    // you can also try
    // char encrypted_b64[] = "j9QsjAFxuIAK0zvi5Iq2Z2+mo44RRpR2VMnJTNS7Ey0IkPjsG"
    //                        "SJ+A+OPuvAqGO77WwS2rI0dnJVREkFz0v8hug==";
    // Note, base64 encoding is supposed to be b64size = (size + 2) / 3 * 4
    // characters long, where size is the size of the encoded string; therefore
    // the following assert checks that the size is correct, and the maximum
    // decoded string size can be calculated as max_size = 3 * b64size / 4
    // https://stackoverflow.com/questions/13378815/base64-length-calculation
    OPENSSL_assert((sizeof(encrypted_b64) - 1) % 4 == 0);
    unsigned char encrypted[3 * (sizeof(encrypted_b64) - 1) / 4];
    unsigned char decrypted[sizeof(encrypted) + 1]; // +1 for terminating 0
    int sz_decoded, sz_decrypted;
    // Note, do not use EVP_DecodeBlock for decoding from base64 as it returns
    // the wrong decoded length and ignores padding, see
    // https://github.com/openssl/openssl/issues/17197
    b64ctx = EVP_ENCODE_CTX_new();
    EVP_DecodeInit(b64ctx);
    if (EVP_DecodeUpdate(b64ctx, encrypted, &sz_decoded,
                         (const unsigned char*)encrypted_b64,
                         sizeof(encrypted_b64) - 1) < 0)
    {
        EVP_ENCODE_CTX_free(b64ctx);
        ERROR("EVP_DecodeUpdate failed");
    }
    if (EVP_DecodeFinal(b64ctx, encrypted + sz_decoded, &len) <= 0)
    {
        EVP_ENCODE_CTX_free(b64ctx);
        ERROR("EVP_DecodeFinal failed");
    }
    sz_decoded += len;
    EVP_ENCODE_CTX_free(b64ctx);
    OPENSSL_assert(sz_decoded <= sizeof(encrypted));
    if (scrypt_kdf(
            key, &sz_key, (const unsigned char*)password, sizeof(password)-1,
            (const unsigned char*)salt, sizeof(salt)-1) <= 0)
    {
        ERROR("Key derivation failed");
    }
    OPENSSL_assert(sz_key == sizeof(key));
    dctx = EVP_CIPHER_CTX_new();
    if (EVP_DecryptInit_ex(dctx, EVP_aes_192_cbc(), NULL, key, iv) <= 0)
    {
        EVP_CIPHER_CTX_free(dctx);
        ERROR("EVP_DecryptInit_ex failed");
    }
    if (EVP_CIPHER_CTX_set_key_length(dctx, 24) <= 0)
    {
        EVP_CIPHER_CTX_free(dctx);
        ERROR("EVP_CIPHER_CTX_set_key_length failed");
    }
    if (EVP_DecryptUpdate(dctx, decrypted, &sz_decrypted,
                          encrypted, sz_decoded) <= 0)
    {
        EVP_CIPHER_CTX_free(dctx);
        ERROR("EVP_DecryptUpdate failed");
    }
    if (EVP_DecryptFinal_ex(dctx, decrypted + sz_decrypted, &len) <= 0)
    {
        EVP_CIPHER_CTX_free(dctx);
        ERROR("EVP_DecryptFinal_ex failed");
    }
    EVP_CIPHER_CTX_free(dctx);
    sz_decrypted += len;
    // do not forget the null terminator
    decrypted[sz_decrypted] = 0;
    qDebug() << (const char*)decrypted;
}
int main(void)
{
    qDebug() << "example1_openssl_and_qt decryption:";
    example1_openssl_and_qt();
    qDebug() << "example1_pure_openssl decryption:";
    example1_pure_openssl();
    return 0;
}
I also attach the code I used to generate the additional encrypted data:
const crypto = require('crypto');
const algorithm = 'aes-192-cbc';
const password = 'bncaskdbvasbvlaslslasfhj';
const plaintext = 'Lorem ipsum dolor sit amet, consectetur adipiscing';
const key = crypto.scryptSync(password, 'GfG', 24);
const iv = Buffer.alloc(16, 0);
const cipher = crypto.createCipheriv(algorithm, key, iv);
const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]);
console.log(encrypted.toString('base64'));
UPD
C++
void pbkdf2withsha256_pure_openssl(void)
{
    int len; // general purpose length variable, used in EVP_*Update/EVP_*Final
    EVP_ENCODE_CTX *b64ctx;
    const int sz_key = 32;
    unsigned char key[sz_key];
    // Note, base64 encoding size is supposed to be b64size = (size + 2) / 3 * 4
    // characters long, where size is the size of the source string
    // https://stackoverflow.com/questions/13378815/base64-length-calculation
    unsigned char key_b64[(sz_key + 2) / 3 * 4 + 1];
    int sz_key_b64;
    const char password[] = "myPassw0rd";
    const unsigned char salt[] = "mySalt";
    if (PKCS5_PBKDF2_HMAC(password, sizeof(password) - 1, salt, sizeof(salt) - 1,
                          10000, EVP_sha256(), sz_key, key) < 1)
    {
        ERROR("PKCS5_PBKDF2_HMAC failed");
    }
    b64ctx = EVP_ENCODE_CTX_new();
    EVP_EncodeInit(b64ctx);
    if (EVP_EncodeUpdate(b64ctx, key_b64, &sz_key_b64, key, sz_key) < 0)
    {
        EVP_ENCODE_CTX_free(b64ctx);
        ERROR("EVP_EncodeUpdate failed");
    }
    EVP_EncodeFinal(b64ctx, key_b64 + sz_key_b64, &len);
    sz_key_b64 += len;
    EVP_ENCODE_CTX_free(b64ctx);
    qDebug() << (const char*)key_b64;
}
JS
crypto = require('crypto');
crypto.pbkdf2('myPassw0rd', 'mySalt', 10000, 32, 'sha256', (err, key) => {
  if (err) throw new Error();
  console.log(key.toString('base64'))
})

Can't use Uniswap V3 SwapRouter for multihop swaps, SwapRouter.exactInput(params) throws 'UNPREDICTABLE_GAS_LIMIT'

I'm trying to implement swap with new Uniswap V3 contracts.
I'm using Quoter contract for getting the quotes out and SwapRouter for making the swaps.
If I'm using the methods for a direct swap (when the tokens have a pool), for example:
ethersProvider = new ethers.providers.Web3Provider(web3.currentProvider, 137);
uniSwapQuoter = new ethers.Contract(uniSwapQuoterAddress, QuoterAbi.abi, ethersProvider);
uniSwapRouterV3 = new ethers.Contract(uniSwapRouterAddress, RouterAbi.abi,
ethersProvider.getSigner());
uniSwapQuoter.callStatic.quoteExactInputSingle(.....)
uniSwapQuoter.callStatic.quoteExactOutputSingle(.....)
uniSwapRouterV3.exactInputSingle(params)
everything works fine, but when I try to use the multihop quotes and multihop swaps, it fails with
"reason": "cannot estimate gas; transaction may fail or may require manual gas limit",
"code": "UNPREDICTABLE_GAS_LIMIT",
"error": {
"code": -32000,
"message": "execution reverted"
},
"method": "estimateGas",
"transaction": {
"from": "0x532d647481c20f4422A8331339D76b25cA569959",
"to": "0xE592427A0AEce92De3Edee1F18E0157C05861564",
"data": "0xc04b8d59000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000002a6b82b6dd3f38eeb63a35f2f503b9398f02d9bb0000000000000000000000000000000000000000000000000000000861c468000000000000000000000000000000000000000000000000000000000000002710000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000422791bca1f2de4661ed88a30c99a7a9449aa841740005007ceb23fd6bc0add59e62ac25578270cff1b9f619003000c26d47d5c33ac71ac5cf9f776d63ba292a4f7842000000000000000000000000000000000000000000000000000000000000",
"accessList": null
}
for encoding the params I'm using the uniswap example from tests:
function encodePath(path, fees) {
  const FEE_SIZE = 3
  if (path.length != fees.length + 1) {
    throw new Error('path/fee lengths do not match')
  }
  let encoded = '0x'
  for (let i = 0; i < fees.length; i++) {
    // 20 byte encoding of the address
    encoded += path[i].slice(2)
    // 3 byte encoding of the fee
    encoded += fees[i].toString(16).padStart(2 * FEE_SIZE, '0')
  }
  // encode the final token
  encoded += path[path.length - 1].slice(2)
  return encoded.toLowerCase()
}
and finally my example code I'm doing for quotes:
const routeAndFees = await getAddressPath(path);
const encodedPath = await encodePath(routeAndFees.path, routeAndFees.fees);
const usdcWithDecimals = parseFloat(usdcAmount) * 1000000
const tokenDecimals = path[path.length - 1].tokenOut.decimals;
try {
  const amountOut = await uniSwapQuoter.callStatic.quoteExactInput(encodedPath, usdcWithDecimals.toString());
  console.log("Token amount out:", parseFloat(amountOut) / (10 ** tokenDecimals));
  return {
    tokenOut: parseFloat(amountOut) / (10 ** tokenDecimals),
    usdcIn: parseFloat(usdcAmount)
  };
} catch (e) {
  console.log(e);
  return e;
}
}
and swapping:
async function multiSwap(path, userAddress, usdcAmount) {
  const usdcWithDecimals = parseFloat(usdcAmount) * 1000000
  const routeAndFees = await getAddressPath(path);
  const encodedPath = await encodePath(routeAndFees.path, routeAndFees.fees);
  const params = {
    path: encodedPath,
    recipient: userAddress,
    deadline: Math.floor(Date.now() / 1000) + 900,
    amountIn: usdcWithDecimals.toString(),
    amountOutMinimum: 0,
  }
  try {
    return await uniSwapRouterV3.exactInput(params);
  } catch (e) {
    console.log(e);
    return e;
  }
}
The path is [address, fee, address, fee, address] like it should be. I'm not sure about the encoding of it, but I didn't find any other example. Actually, I didn't find any example of doing Uniswap V3 multihop swaps at all; even in the UniDocs there is only a Trade example and a single-pool swap...
Can someone point what could I have done wrong here?
The same error is in quoting and when swapping :/
I'm testing on Polygon Mainnet and I can make the same path swap directly on uniswap but it fails when I trigger the script...
You should hex-encode the fee value and pad it to 6 hex characters (the same 3 bytes), so FEE_SIZE becomes 6 instead of 3. This should work for you:
async function encodePath(path, fees, exactInput) {
  const FEE_SIZE = 6
  if (path.length !== fees.length + 1) {
    throw new Error('path/fee lengths do not match')
  }
  if (!exactInput) {
    path = path.reverse();
    fees = fees.reverse();
  }
  let encoded = '0x'
  for (let i = 0; i < fees.length; i++) {
    encoded += path[i].slice(2)
    let fee = web3.utils.toHex(parseFloat(fees[i])).slice(2).toString();
    encoded += fee.padStart(FEE_SIZE, '0');
  }
  encoded += path[path.length - 1].slice(2)
  return encoded
}
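For reference, the packed path is simply address (20 bytes) + fee (3 bytes) + address + ..., so the same forward encoding can be written without web3 (a dependency-free sketch using Number.prototype.toString(16); addresses are assumed to be 0x-prefixed hex strings):

```javascript
// Pack a Uniswap V3 multihop path: token (20 bytes) | fee (3 bytes) | token...
function encodePathPlain(tokens, fees) {
  if (tokens.length !== fees.length + 1) {
    throw new Error("path/fee lengths do not match");
  }
  let encoded = "0x";
  for (let i = 0; i < fees.length; i++) {
    encoded += tokens[i].slice(2);                     // 20-byte address
    encoded += fees[i].toString(16).padStart(6, "0");  // 3-byte fee = 6 hex chars
  }
  encoded += tokens[tokens.length - 1].slice(2);       // final token
  return encoded.toLowerCase();
}
```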

blob.slice() results in an empty blob - javascript

I am trying to send a large file to a server, so I am using the chunking technique in order to do it in a robust way.
private readonly sendChunk = (file: File, progressModel: ProgressResponseModel): void => {
  const offset = progressModel.Offset;
  if (offset > file.size)
    throw new Error("Offset cannot be greater than the file size");
  const expectedSize = progressModel.ExpectedChunkSize;
  const blobChunk = file.slice(offset, expectedSize);
  const xhr = new XMLHttpRequest();
  xhr.onload = (ev: Event): void => {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      const progress = this.progressFromText(xhr.responseText);
      if (progress.Offset >= 0) {
        this.sendChunk(file, progress);
      }
      console.log(`${progress.Progress} %`);
    }
  }
  xhr.open("POST", this._uploadChunkUrl, true);
  xhr.send(blobChunk);
}
The server sends back where to start the new chunk from and how big it should be. The above function is executed in a recursive manner as you can see.
However, if the file requires more than 1 chunk to be sent, the second time I call const blobChunk = file.slice(offset, expectedSize); I get an empty chunk (length 0).
I can guarantee that the file arg is always valid (checked with console.log).
I've seen this question, but I am sure my file is not removed or renamed.
I've also seen this issue. I get the same behavior for both Chrome and Firefox (latest versions), also Edge.
Thanks!
UPDATE
Okay, so I did a dummy method in order to isolate this issue:
readonly chunkIt = (file: Blob): void => {
  var offset = 0;
  var length = 512 * 1024;
  while (offset >= 0) {
    const blobChunk = file.slice(offset, length);
    console.log(blobChunk);
    offset += length;
    if (offset > file.size) offset = -1;
  }
}
And using it:
$("input[name='fileUpload']").on("change", (e) => {
  const files = e.target.files;
  if (typeof files === "undefined" || files === null || files.length === 0)
    e.preventDefault();
  const file = files[0];
  this._client.chunkIt(file);
});
It logs a correct Blob only the first time; the following ones are all empty.
SOLVED
From this issue - Splitting a File into Chunks with Javascript - it turned out that I'd forgotten to offset my end index.
Blob.prototype.slice() takes an absolute end index as its second argument, not a chunk length.
Replace
const blobChunk = file.slice(offset, expectedSize);
with
const blobChunk = file.slice(offset, offset + expectedSize);
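A corrected sketch of the chunkIt helper from the update above, passing an absolute end index to slice (illustrative only, not the exact production code):

```javascript
// Blob.slice(start, end) takes absolute byte offsets, so the end index
// must be offset + length rather than a fixed chunk size.
function chunkIt(file) {
  const length = 512 * 1024;
  const chunks = [];
  for (let offset = 0; offset < file.size; offset += length) {
    chunks.push(file.slice(offset, offset + length));
  }
  return chunks;
}
```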

Generated nonce length is getting changed

I am trying to generate a fixed-length nonce (length 9).
But my code sometimes prints a nonce of length 8 and sometimes of length 9.
This is what I am trying to do, but with a different approach (I have modified it for a fixed nonce length).
I am not able to understand why it is printing a nonce of length 8 when I am passing 9 as the length argument.
It would be great if someone can tell why this is happening.
Below is complete Nodejs code
var last_nonce = null;
var nonce_incr = null;

// if you call new Date too fast it will generate
// the same ms; helper to make sure the nonce is
// truly unique (supports up to 999 calls per ms).
module.exports = {
  getNonce: function (length) {
    if (length === undefined || !length) {
      length = 8;
    }
    var MOD = Math.pow(10, length);
    var now = (+new Date());
    if (now !== last_nonce) {
      nonce_incr = -1;
    }
    nonce_incr++;
    last_nonce = now;
    var nonce_multiplier = ((nonce_incr < 10) ? 10 : ((nonce_incr < 100) ? 100 : 1000));
    var s = (((now % MOD) * nonce_multiplier) + nonce_incr) % MOD;
    return s;
  }
}
//test code
if (require.main === module) {
  console.time("run time");
  //importing async module
  var async = require('async');
  var arr = [];
  //generating a 1000-length array to use it in making 1000 async calls
  //to the getNonce function
  for (var i = 0; i < 1000; i++) arr.push(i);
  //this will call the getNonce function 1000 times in parallel
  async.eachLimit(arr, 1000, function (item, cb) {
    console.log(module.exports.getNonce(9));
    cb();
  }, function (err) { console.timeEnd("run time"); });
}
Sample output:
708201864 --> nonce length 9
708201865
708201866
70820190 --> nonce length 8 (why it is coming 8?? when passed length is 9)
70820191
70820192
70820193
70820194
70820195
70820196
70820197
70820198
70820199
708201910
708201911
708201912
708201913
708201914
708201915
708201916
708201917
708201918
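The shorter values come from the final % MOD: (now % MOD) * nonce_multiplier can exceed nine digits, the modulo then discards the high digits, and when the remaining value is below 10^8 the Number prints with only 8 digits (a Number cannot keep leading zeros). Returning a zero-padded string fixes the length. A sketch with an illustrative timestamp:

```javascript
// Same formula as getNonce, with length = 9 and nonce_multiplier = 10.
const length = 9;
const MOD = Math.pow(10, length);
const now = 1650708201910;              // illustrative Date.now() value (ms)
const s = ((now % MOD) * 10 + 5) % MOD; // wraps: high digit is discarded
console.log(s, String(s).length);       // 82019105 — only 8 digits

// Fix: return a string padded to the requested length instead of a Number.
console.log(String(s).padStart(length, "0")); // "082019105"
```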
In case someone needs it, here is a nonce generator free from convoluted logic, allowing you to control both character sample and nonce size:
const generateNonce = (options) => {
  const {
    length = 32,
    sample = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
  } = options || {};
  const getRand = () => Math.floor(Math.random() * sample.length);
  return Array.from({ length }, () => sample.charAt(getRand())).join('');
};
If you prefer Typescript:
const generateNonce = (options?: { sample?: string, length?: number }) => {
  const {
    length = 32,
    sample = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
  } = options || {};
  const getRand = () => Math.floor(Math.random() * sample.length);
  return Array.from({ length }, () => sample.charAt(getRand())).join('');
};
