How to implement a cryptosystem based on a synchronous stream cipher using Vue.js

I have a pseudo-random number generator which generates binary numbers based on a user-supplied polynomial; the method I used is an LFSR. Now, if I understand correctly, I should load the file and convert it to binary form so that I can take every bit of the read data and XOR it with every bit of the generated key. The problem is that I have no idea how to convert the loaded file to binary in such a way that I can later XOR every bit of the file with the key bits. The only thing I know is that I should use <input type="file" @change="onFileSelected"/> to load the file. I'd appreciate any help from the community.

Assuming you have a getKeyBit() function that returns a bit of the key:
const getKeyByte = () => {
  const byte = []
  // Get 8 key bits
  for (let i = 0; i < 8; ++i) {
    byte.push(getKeyBit())
  }
  // Parse the bit string as base 2
  return parseInt(byte.join(''), 2)
}

const encryptFile = (file) => {
  const fileReader = new FileReader()
  fileReader.readAsArrayBuffer(file)
  return new Promise(resolve => {
    fileReader.onload = () => {
      const buffer = new Uint8Array(fileReader.result)
      // Resolve the promise with the mapped Uint8Array
      resolve(buffer.map(byte => {
        // XOR each byte with a byte from the key
        return byte ^ getKeyByte()
      }))
    }
  })
}
Be sure to await the result:
const encrypted = await encryptFile(file)
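Not part of the original answer, but for completeness, here is a minimal sketch of what getKeyBit() and the Vue change handler could look like. The 16-bit LFSR taps and the onFileSelected name are illustrative assumptions, not the asker's actual polynomial or code:

let lfsr = 0xACE1 // any non-zero seed
// Example 16-bit Fibonacci LFSR (taps 16, 14, 13, 11); substitute your own polynomial
const getKeyBit = () => {
  const bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1
  lfsr = ((lfsr >> 1) | (bit << 15)) & 0xFFFF
  return bit
}

// Hypothetical Vue method wired to <input type="file" @change="onFileSelected"/>
async function onFileSelected (event) {
  const file = event.target.files[0]
  const encrypted = await encryptFile(file)
  // encrypted is a Uint8Array, ready to download or send to a server
}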

Related

Receiving and decoding data from a USB device via navigator.hid

Browser side, I'm connecting to the USB device and getting the buffer data as follows:
const device = (await navigator.hid.requestDevice({ filters: [] }))?.[0]
// at this point the device popup showed up, I selected the device from the list and clicked Connect
await device.open()
device.oninputreport = (report) => {
  const buffer = report.data.buffer
  console.log(buffer) // output: ArrayBuffer(5)
  console.log(new TextDecoder('utf-8').decode(buffer)) // output: �j
  console.log(String.fromCharCode.apply(null, new Uint8Array(buffer))) // output: ÿj
  console.log(buffer.toString()) // output: �j
  console.log(buffer.toString('hex')) // output: �j
}
Instead of utf-8 I tried all encodings mentioned here, but I always get something like �j.
Note: when I try to access this weight scale from NodeJS (with the 'usb' module), the buffer is encoded in hexadecimal, and it works with just buffer.toString('hex') (and the result is a string like "030402005005030402005005030402005005"). But in the browser it seems to work differently.
Edit: I found the solution after all:
Instead of using the buffer, it's the DataView object containing it that needs to be used.
Here's all the code to read the data from a DYMO M25 USB weight scale:
const device = (await navigator.hid.requestDevice({ filters: [] }))?.[0]
await device.open()
device.oninputreport = (report) => {
  const { value, unit } = parseScaleData(report.data)
  console.log(value, unit)
}

function parseScaleData (data: DataView) {
  const sign = Number(data.getUint8(0)) == 4 ? 1 : -1 // 4 = positive, 5 = negative, 2 = zero
  const unit = Number(data.getUint8(1)) == 2 ? 'g' : 'oz' // 2 = g, 11 = oz
  const value = Number(data.getUint16(3, true)) // this one needs little endian
  return { value: sign * (unit == 'oz' ? value / 10 : value), unit }
}
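As a side note (my addition, not from the original post): if you want the same hex string the Node 'usb' module produced, the report's bytes can be converted manually, e.g.:

// Sketch: hex-encode the bytes behind the report's DataView
function toHexString (data) {
  return Array.from(new Uint8Array(data.buffer, data.byteOffset, data.byteLength))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
}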

Update attributes in .env file in Node JS

I am writing a plain .env file as follows:
VAR1=VAL1
VAR2=VAL2
I wonder if there's some module I can use in Node.js to achieve something like:
somefunction(envfile.VAR1) = VAL3
and the resulted .env file would be
VAR1=VAL3
VAR2=VAL2
i.e., with other variables unchanged, just update the selected variable.
You can use the fs and os modules and some basic array/string operations.
const fs = require("fs");
const os = require("os");

function setEnvValue(key, value) {
  // read the file from disk and split it on line breaks into an array
  const ENV_VARS = fs.readFileSync("./.env", "utf8").split(os.EOL);
  // find the line we want based on the key
  const target = ENV_VARS.indexOf(ENV_VARS.find((line) => {
    return line.match(new RegExp(key));
  }));
  // replace the key/value pair with the new value
  ENV_VARS.splice(target, 1, `${key}=${value}`);
  // write everything back to the file system
  fs.writeFileSync("./.env", ENV_VARS.join(os.EOL));
}

setEnvValue("VAR1", "ENV_1_VAL");
.env:
VAR1=VAL1
VAR2=VAL2
VAR3=VAL3
After the execution, VAR1 will be ENV_1_VAL.
No external modules, no magic ;)
I think the accepted solution will suffice for most use cases, but I encountered a few problems while using it personally:
It will match keys that are prefixed with your target key if they are found first (e.g. if ENV_VAR is the key, ENV_VAR_FOO is also a valid match).
If the key does not exist in your .env file, it will replace the last line of your .env file. In my case, I wanted to do an upsert instead of just updating existing env var.
It will match commented lines and update them.
I modified a few things from Marc's answer to solve the above problems:
const fs = require("fs");
const os = require("os");

function setEnvValue(key, value) {
  // read the file from disk and split it on line breaks into an array
  const ENV_VARS = fs.readFileSync(".env", "utf8").split(os.EOL);
  // find the line we want based on the key
  const target = ENV_VARS.indexOf(ENV_VARS.find((line) => {
    // (?<!#\s*) Negative lookbehind to avoid matching comments (lines that start with #).
    // The backslash is doubled in the RegExp constructor to escape it.
    // (?==) Positive lookahead to check that there is an equals sign right after the key.
    // This prevents matching keys prefixed with the key of the env var to update.
    const keyValRegex = new RegExp(`(?<!#\\s*)${key}(?==)`);
    return line.match(keyValRegex);
  }));
  // if the key-value pair exists in the .env file,
  if (target !== -1) {
    // replace the key/value pair with the new value
    ENV_VARS.splice(target, 1, `${key}=${value}`);
  } else {
    // if it doesn't exist, add it instead
    ENV_VARS.push(`${key}=${value}`);
  }
  // write everything back to the file system
  fs.writeFileSync(".env", ENV_VARS.join(os.EOL));
}
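Usage is the same as before; with this version a missing key is appended instead of clobbering the last line (NEW_VAR is just an illustrative name):

setEnvValue("VAR1", "VAL3");    // existing key: the line becomes VAR1=VAL3
setEnvValue("NEW_VAR", "VAL4"); // missing key: NEW_VAR=VAL4 is appended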
It looks like you want to read your current .env file, change some values, and save it.
You should use the fs module from standard Node.js module library: https://nodejs.org/api/fs.html
var updateAttributeEnv = function (envPath, attrName, newVal) {
  var dataArray = fs.readFileSync(envPath, 'utf8').split('\n');
  var replacedArray = dataArray.map((line) => {
    if (line.split('=')[0] == attrName) {
      return attrName + "=" + String(newVal);
    } else {
      return line;
    }
  });
  fs.writeFileSync(envPath, "");
  for (let i = 0; i < replacedArray.length; i++) {
    fs.appendFileSync(envPath, replacedArray[i] + "\n");
  }
}
I wrote this function to solve my issue.
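A hypothetical call, matching the example from the question:

updateAttributeEnv('./.env', 'VAR1', 'VAL3');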
Simple and it works, in TypeScript:
import fs from 'fs'
import os from 'os'
import path from 'path'

function setEnvValue(key: string, value: string): void {
  const environment_path = path.resolve('config/environments/.env.test')
  const ENV_VARS = fs.readFileSync(environment_path, 'utf8').split(os.EOL)
  const line = ENV_VARS.find((line: string) => {
    return line.match(`(?<!#\\s*)${key}(?==)`)
  })
  if (line) {
    // replace the existing key/value pair
    ENV_VARS.splice(ENV_VARS.indexOf(line), 1, `${key}=${value}`)
  } else {
    // the key doesn't exist yet, append it instead
    ENV_VARS.push(`${key}=${value}`)
  }
  fs.writeFileSync(environment_path, ENV_VARS.join(os.EOL))
}

How do I loop over a VERY LARGE 2D array without causing a major performance hit?

I am attempting to iterate over a very large 2D array in JavaScript within an Ionic application, but it is majorly bogging down my app.
A little background: I created a custom search component with StencilJS that provides suggestions on each keyup. You feed the component an array of strings (the search suggestions). Each individual string is tokenized word by word, split into an array, and lowercased.
For example, "Red-Winged Blackbird" becomes
['red','winged','blackbird']
So, tokenizing an array of strings looks like this:
[['red','winged','blackbird'],['bald','eagle'], ...]
Now, I have 10,000+ of these smaller arrays within one large array.
Then, I tokenize the search terms the user inputs upon each keyup.
Afterwards, I am comparing each tokenized search term array to each tokenized suggestion array within the larger array.
Therefore, I have 2 nested for-of loops.
In addition, I am using Levenshtein distance to compare each search term to each element of each suggestion array.
I had a couple of drinks, so please be patient while I stumble through this.
To start, I'd do something like a reverse index (not a very informative name). It's pretty close to what you are already doing, but with a couple of extra steps.
First, go through all your results and tokenize, stem, remove stop words, lowercase, coalesce, etc. It looks like you've already done this, but I'm adding an example for completeness.
const tokenize = (string) => {
  const tokens = string
    .split(' ') // just split on words, but maybe replace with a smarter tokenizer
    .filter((v) => v.trim() !== '');
  return new Set(tokens);
};
Next, what we want to do is generate a map that takes a word as a key and returns a list of the document indexes the word appears in.
const documents = ['12312 taco', 'taco mmm'];
const index = {
  '12312': [0],
  'taco': [0, 1],
  'mmm': [1]
};
I think you can see where this is taking us... We can tokenize our search term, find all the documents each token belongs to, work some ranking magic, take the top 5, blah blah blah, and have our results. This is typically the way Google and other search giants do their searches. They spend a ton of time on precomputation so that their search engines can slice down candidates by orders of magnitude and work their magic.
Below is an example snippet. It needs a ton of work (please remember, I've been drinking), but it's running through a million records in around 0.3 ms. I'm cheating a bit by generating 2-letter words and phrases, only so that I can demonstrate queries that sometimes achieve collisions. This really doesn't matter, since the query time is on average proportionate to the number of records. Please be aware that this solution gives you back records that contain all search terms. It doesn't care about context or whatever. You will have to figure out the ranking (if you care at this point) to achieve the results you want.
const tokenize = (string) => {
  const tokens = string.split(' ')
    .filter((v) => v.trim() !== '');
  return new Set(tokens);
};

const ri = (documents) => {
  const index = new Map();
  for (let i = 0; i < documents.length; i++) {
    const document = documents[i];
    const tokens = tokenize(document);
    for (let token of tokens) {
      if (!index.has(token)) {
        index.set(token, new Set());
      }
      index.get(token).add(i);
    }
  }
  return index;
};

const intersect = (sets) => {
  const [head, ...rest] = sets;
  return rest.reduce((r, set) => {
    return new Set([...r].filter((n) => set.has(n)));
  }, new Set(head));
};

const search = (index, query) => {
  const tokens = tokenize(query);
  const candidates = [];
  for (let token of tokens) {
    const keys = index.get(token);
    if (keys != null) {
      candidates.push(keys);
    }
  }
  return intersect(candidates);
};
const word = () => Math.random().toString(36).substring(2, 4);
const terms = Array.from({ length: 255 }, () => word());
const documents = Array.from({ length: 1000000 }, () => {
  const sb = [];
  for (let i = 0; i < 2; i++) {
    sb.push(word());
  }
  return sb.join(' ');
});

const index = ri(documents);
const st = performance.now();
const query = 'bb iz';
const results = search(index, query);
const et = performance.now();
console.log(query, Array.from(results).slice(0, 10).map((i) => documents[i]));
console.log(et - st);
There are some improvements you can make if you want. Like... ranking! The whole purpose of this example is to show how we can cut down 1M results to maybe a hundred or so candidates. The search function has some post-filtering via intersection, which probably isn't what you want, but at this point it doesn't really matter what you do, since the result set is so small.
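A minimal ranking sketch along those lines (my addition, reusing tokenize and the index from above): score each candidate by how many query tokens it contains and keep the top N, instead of requiring a strict intersection.

const rank = (index, query, topN = 5) => {
  const counts = new Map();
  for (let token of tokenize(query)) {
    const keys = index.get(token);
    if (keys == null) continue;
    // credit every document that contains this token
    for (let doc of keys) {
      counts.set(doc, (counts.get(doc) || 0) + 1);
    }
  }
  // sort by number of matched tokens, descending, and keep the top N document indexes
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([doc]) => doc);
};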

Loop through subfiles and get the sum of their contents in JavaScript

I'm struggling to get the sum of the subfiles. The code below currently returns the sum of a.txt and all its subfiles, supposing that the contents of a.txt are
1
b.txt
the contents of b.txt are
2
c.txt
and the contents of c.txt are
3
I'd like to also get the sum of b.txt and all of its subfiles, the sum of c.txt and all of its subfiles, and so on for all the files that exist. So the output would report the sum of a.txt and its subfiles, the sum of b.txt and its subfiles, the sum of c.txt and its subfiles, and so on...
My code below:
const fs = require('fs')

const file = 'a.txt'

let output = (file) => {
  let data = fs.readFileSync(file, 'utf8')
    .split('\n')
    .reduce((array, i) => {
      if (i.match(/.txt$/)) {
        let intArr = array.concat(output(i))
        return intArr
      } else if (i.match(/^\d+$/)) {
        array.push(parseInt(i, 10));
      }
      return array;
    }, [])
  return data
}

console.log(output(file))
const sum = output(file)
console.log(sum.reduce((a, b) => a + b, 0))
Also, any suggestions for improving this code are welcome.
This can be viewed as a pretty standard graph search. Your code starts to do that, but there are a few places where it can be changed to make it a little easier.
Below is a depth-first search starting with a particular file and keeping track of a counts object. The function parses the file just like yours and adds the numbers to the counts object. Then it recurses. When the recursion unwinds, it adds the resulting child's counts to the parent's. In the end it returns the counts object, which should have the total + subfiles for all files. It doesn't do any error checking, for simplicity, and it's not clear what should happen if two children both reference the same grandchild - should it be counted twice? Either way, it should be easy to adjust.
I made a mocked version of fs.readFileSync so the code would run in the snippet and be easier to see:
// fake fs for readFileSync
let fs = {
  files: {
    "a.txt": "1\nb.txt",
    "b.txt": "2\nc.txt",
    "c.txt": "3",
    "d.txt": "2\n10\ne.txt\nf.txt",
    "e.txt": "1",
    "f.txt": "5\n7\ng.txt",
    "g.txt": "1\na.txt"
  },
  readFileSync(file) { return this.files[file] }
}

function dfs(file, counts = {}) {
  // parse a single file into an object:
  // {total: sum_of_all_the_numbers, files: array_of_files}
  let data = fs.readFileSync(file, 'utf8').split('\n')
  let { total, files } = data.reduce((a, c) => {
    if (c.match(/^\d+$/)) a.total += parseInt(c)
    else if (c.match(/.txt$/)) a.files.push(c)
    return a
  }, { total: 0, files: [] })
  // add the total counts for this file
  counts[file] = total
  // recurse on children files
  for (let f of files) {
    if (counts.hasOwnProperty(f)) continue // don't look at files twice if there are cycles
    let c = dfs(f, counts)
    counts[file] += c[f] // children return the counts object, add the child's count to the parent
  }
  // return the counts object
  return counts
}

console.log("original files starting with a.txt")
console.log(dfs('a.txt'))
console.log("more involved graph starting with d.txt")
console.log(dfs('d.txt'))

Index extraction in tensorflow-js

I am using tensorflow-js to do some image manipulation in the browser.
I have a tensor of type bool, and I want to extract the indexes of true/1 values from it.
Is there a way I can do it without getting the whole tensor as an array via Tensor.data()?
At the moment I am doing something like this:
let array = await tensor.data()
for (let i = 0; i < array.length; i++) {
  if (array[i]) {
    // do something
  }
}
but it takes too long on large tensors: 600 ms plus on CPU.
This still uses tensor.data(), but it gives you a tensor which contains all the indices you want. (+1 because otherwise the multiplication always results in 0, whether the first value is 0 or not.)
tf.tidy(() => {
  const boolTensor = tf.randomUniform([10], 0, 2, "bool");
  boolTensor.print();
  // starting at 1 to prevent 0 x 0
  const indices = tf.range(1, boolTensor.shape[0] + 1);
  indices.print();
  const ones = boolTensor.cast("float32").mul(indices);
  ones.print();
  // map to go back to 0-indexed
  const ind = Array.from(ones.dataSync()).filter(i => i > 0).map(i => i - 1);
  console.log(ind);
});

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.0"></script>
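As an aside (my addition; it assumes a much newer tfjs release than the 0.10.0 pinned above): recent versions provide tf.whereAsync, which returns the coordinates of the true elements directly, avoiding the multiplication trick:

// inside an async function
const boolTensor = tf.randomUniform([10], 0, 2, "bool");
// coords is a 2D int32 tensor of shape [numTrue, rank];
// each row holds the index of one true element
const coords = await tf.whereAsync(boolTensor);
coords.print();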
