How to prepare user passwords for Firebase importUsers? - javascript

I have a large JSON file with all my previous users. I need to prepare them to be imported. I keep getting this error: Error 4 failed to import: FirebaseAuthError: The password hash must be a valid byte buffer.
What is the proper way to store a hashed password as byte buffer in a json?
var jsonFile = require('./users.json');
var fs = require('fs')
let newArr = []
jsonFile.file.slice(0, 5).map( val => {
newArr.push({
uid: val.id,
email: val.email,
passwordHash: Buffer.from(val.password) // val.password is hashed
})
})
let data = JSON.stringify(newArr);
fs.writeFileSync('newArr.json', data);
In my main import file
var jsonFile = require('./newArr.json');
// I was testing it like that and everything is working fine.
const userImportRecords = [
{
uid: '555555555555',
email: 'user@example.com',
passwordHash: Buffer.from('$2a$10$P6TOqRVAXL2FLRzq9Ii6AeGqzV4mX8UNdpHvlLr.4DPxq2Xsd54KK')
}
];
admin.auth().importUsers(jsonFile, {
hash: {
algorithm: 'BCRYPT',
rounds: 10
}
})

Your first code snippet writes Buffer values to the file system. This doesn't work the way you expect. For instance, try running the following example:
const val = {uid: 'test', passwordHash: Buffer.from('test')};
fs.writeFileSync('newArr.json', JSON.stringify(val));
The resulting file will contain the following text:
{"uid":"test","passwordHash":{"type":"Buffer","data":[116,101,115,116]}}
When you require() this file, the passwordHash gets assigned to the object { type: 'Buffer', data: [ 116, 101, 115, 116 ] }. That's not the Buffer type expected by the importUsers() API.
I believe your newArr variable contains the right kind of array that can be passed into importUsers(). But writing it to the file system, and then reloading it changes the type of all Buffer fields.
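If you do need to round-trip the users through a JSON file, one option is to rebuild the Buffer fields when reading the file back in, for example with a JSON.parse reviver. This is a minimal sketch, not part of the original code:
const fs = require('fs');
// Read the file back and turn serialized Buffers ({ type: 'Buffer', data: [...] })
// into real Buffer instances again before calling importUsers().
const raw = fs.readFileSync('newArr.json', 'utf8');
const users = JSON.parse(raw, (key, value) =>
  value && value.type === 'Buffer' && Array.isArray(value.data)
    ? Buffer.from(value.data)
    : value
);
// users[i].passwordHash is now a Buffer again.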

I found a workaround for this problem. I'm parsing users.json directly inside the file that calls importUsers(). Since I don't have to store the Buffer inside a JSON file again, the passwordHash stays a Buffer.
This is the right way to do it:
let newArr = []
jsonFile.file.slice(0, 5).map( val => {
newArr.push({
uid: val.id,
email: val.email,
passwordHash: Buffer.from(val.password)
})
})
admin.auth().importUsers(newArr, {
hash: {
algorithm: 'BCRYPT',
rounds: 10
}
})

Related

Sending multipart form to backend service with multiple rows of data

I'm trying out a bulk upload functionality where I can get user details like name, email, and profile pic (file) as fields. Upon submitting, they are sent directly to backend models and stored.
I was able to successfully send just one row of data by using a FormData object and appending all the fields, including the image file field. The issue comes when the uploaded data is more than one row. I'm not able to figure out how to send FormData as an array of objects to my backend.
I tried appending FormData objects to an array, so something like:
// Convert this
// [
// { name: 'xyz', email: 'xyz@gmail.com', img_name: 'xyz' },
// { name: 'abc', email: 'abc@gmail.com', img_name: 'abc' }
// ]
//
// to
//
// [
// { name: 'xyz', email: 'xyz@gmail.com', image: BinaryFile },
// { name: 'abc', email: 'abc@gmail.com', image: BinaryFile }
// ]
const newDataToSend = []
// append image file to appropriate row according to name
if (imageFiles && imageFiles.length > 0) {
dataToSend.forEach(row => {
const formData = new FormData();
Object.keys(row).forEach(column => {
if (column === "img_name") {
let fileObj;
for (let i = 0; i < imageFiles.length; i++) {
const file = imageFiles.item(i);
if (file && file.name.indexOf(row[column]) > -1) {
fileObj = file;
}
}
if (fileObj) {
formData.append("photo", fileObj);
}
} else {
formData.append(column, row[column]);
}
});
newDataToSend.push(formData);
});
}
This does not work. It throws a 415 error code and the payload is empty as well.
Are there any other solutions I can try?
Could you please add the other code you have written? Possibly you don't have a submit button.
You could use JSON.stringify on the array data, as shown in the code snippet:
let formdata = new FormData();
let dataArr = [data1, data2, data3];
formdata.append('dataArr', JSON.stringify(dataArr));
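Building on that, here is a minimal sketch that sends all rows as one JSON string plus the image files in a single FormData request. The field names ('data', 'photo') and the endpoint are assumptions, so adjust them to whatever your backend expects (dataToSend and imageFiles are the variables from the question):
const formData = new FormData();
// all row objects as a single JSON string under one field
formData.append('data', JSON.stringify(dataToSend));
// one entry per image file under the same field name
if (imageFiles && imageFiles.length > 0) {
  for (let i = 0; i < imageFiles.length; i++) {
    formData.append('photo', imageFiles.item(i));
  }
}
fetch('/api/bulk-upload', {
  method: 'POST',
  body: formData, // let the browser set the multipart Content-Type and boundary
});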
If you have the data in the form of an array of objects, you probably want to do something like this.
const dataToSend = [
{ firstName: 'Robin', lastName: 'Hood'},
{ firstName: 'Kavita', lastName: 'Gurav'},
{ firstName: 'Albert', lastName: 'Einstein'}
];
/** sending the data by the name 'data' */
const formData = new FormData();
formData.append('data', JSON.stringify(dataToSend)); // stringify, otherwise the array is coerced to "[object Object],..."
fetch(<api endpoint>, {
method: 'POST',
body: formData
});
The backend should now be able to read the JSON string submitted under data and parse it.
I tried this in my browser and inspected the request payload being sent. (Screenshot of the payload omitted.)
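For completeness, a minimal sketch of how such a request could be read on the backend, assuming an Express server with multer (the original posts don't say which backend is used; the field names 'data' and 'photo' match the snippets above):
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer(); // no storage option: uploaded files are kept in memory

app.post('/bulk-upload', upload.array('photo'), (req, res) => {
  const rows = JSON.parse(req.body.data); // the stringified array of user rows
  const files = req.files;                // the uploaded images, one per appended 'photo'
  // ... match files to rows and store them ...
  res.sendStatus(201);
});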

Writing to a XLSX template and then sending it as a response in a different function, always returns undefined

What I'm trying to do
Requests come into my server to download a file containing data. The downloading part is in the front-end and works. I grab the data on my backend and then I want to write it into an existing template and return the data.
This is the handler for the request.
async handle(request: Request, response: Response) {
try {
const fileName = 'test.xlsx'
const binary = objectsToTemplateWorkBook()
response.setHeader(
'Content-Disposition',
'attachment; filename=' + fileName
)
response.setHeader('Content-Type', 'application/vnd.openxmlformats')
response.end(binary, 'binary')
} catch (error) {
console.log(error)
response.send(error)
}
}
This is the function that is supposed to write the data into the template.
export const objectsToTemplateWorkBook = ():
Promise<any> => {
var XlsxTemplate = require('xlsx-template')
var dataBlob
// Load an XLSX file into memory
const blob = fs.readFile(
path.join(__dirname, 'test_template.xlsx'),
function (err, data) {
console.log(__dirname)
// Create a template
var template = new XlsxTemplate(data)
// Replacements take place on first sheet
var sheetNumber = 1
// Set up some placeholder values matching the placeholders in the template
var values = {
people: [
{ name: 'John Smith', age: 20 },
{ name: 'Bob Johnson', age: 22 },
],
}
// Perform substitution
template.substitute(sheetNumber, values)
// Get binary data
dataBlob = template.generate()
// ...
}
)
return dataBlob
}
The function seems to write the data to the template, because if I log the dataBlob inside the fs.readFile callback it shows me the file. However, the function always returns undefined. I know this is due to the async nature, but I have no idea how to fix it, quite honestly. So my question to you is: how can I get the dataBlob to my handler to send it as a response?
You can't get the return value from a callback function the way you're doing it here. Since callbacks run asynchronously, their return value is never accessible: the outer return executes before the inner code runs.
To solve this specific problem you can use the fs.readFileSync function, which executes synchronously and returns a value, namely the buffer you need to pass into your XlsxTemplate constructor. This way, the code turns into:
export const objectsToTemplateWorkBook = (): any => {
var XlsxTemplate = require('xlsx-template')
var dataBlob
// Load an XLSX file into memory
const data = fs.readFileSync(path.join(__dirname, 'test_template.xlsx'))
console.log(__dirname)
// Create a template
var template = new XlsxTemplate(data)
// Replacements take place on first sheet
var sheetNumber = 1
// Set up some placeholder values matching the placeholders in the template
var values = {
people: [
{ name: 'John Smith', age: 20 },
{ name: 'Bob Johnson', age: 22 },
],
}
// Perform substitution
template.substitute(sheetNumber, values)
// Get binary data
dataBlob = template.generate()
// ...
return dataBlob
}
With this you get access to the file buffer returned from the synchronous read, and you are able to perform the rest of your operations. Hope it helps :D
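If you'd rather keep the read asynchronous instead of using readFileSync, here is a sketch of the same function using fs.promises and async/await (same xlsx-template calls as above):
import * as fs from 'fs'
import * as path from 'path'

export const objectsToTemplateWorkBook = async (): Promise<any> => {
  const XlsxTemplate = require('xlsx-template')
  // Load the XLSX template into memory without blocking the event loop
  const data = await fs.promises.readFile(path.join(__dirname, 'test_template.xlsx'))
  const template = new XlsxTemplate(data)
  const sheetNumber = 1
  const values = {
    people: [
      { name: 'John Smith', age: 20 },
      { name: 'Bob Johnson', age: 22 },
    ],
  }
  template.substitute(sheetNumber, values)
  return template.generate()
}
The handler then needs to await it, i.e. const binary = await objectsToTemplateWorkBook(), before calling response.end(binary, 'binary').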

Can't read, update and save CSV data to an object (typescript/js)

This is TypeScript.
Can anybody help with the following:
I read data from a CSV file
Transform this data on the fly (remove some extra columns)
Then I want the updated CSV from the stream to end up back in a variable in the code.
console.log(updatedCsv) // in the stream - displays what I need
BUT!
When I try to push it into an array nothing happens, and the variable (into which I pushed the data from the stream) ends up undefined:
import * as fs from "fs";
import * as csv from "csv";
updateCsv(){
fs.createReadStream('allure-report/data/suites.csv')
.pipe(csv.parse({ delimiter: ',', columns: true }))
.pipe(csv.transform((input) => {
console.log(input) // <----- it shows in console data I needed
/* like this:
{
Status: 'passed',
'Start Time': 'Wed Nov 11 17:37:33 EET 2020',
'Stop Time': 'Wed Nov 11 17:37:33 EET 2020',
'Duration in ms': '1',
'Parent Suite': '',
Suite: 'The Internet Guinea Pig Website: As a user, I can log into the secure area',
'Sub Suite': '',
'Test Class': 'The Internet Guinea Pig Website: As a user, I can log into the secure area',
'Test Method': 'Hook',
Name: 'Hook',
Description: ''
}
*/
skipHeaders.forEach((header) => delete input[header]);
this.rowsArray = input // NOTHING HAPPENS; rowsArray is declared as string[] = new Array(). I don't know what type input is or whether I should use push instead - I can't get this data out of the pipe
return input;
}))
.pipe(csv.stringify({ header: true }))
.pipe(fs.createWriteStream(this.path))
}
AND ALSO
As a workaround I wanted to read the newly generated CSV, but that was also unsuccessful; it looks like I need to use promises. I tried some examples from the internet but failed. PLEASE HELP
For those who are wondering - I was able to achieve my goal using the following approach.
BUT!! I still wonder how to handle this problem via a Promises / async-await approach.
class CsvFormatter{
pathToNotUpdatedCsv: string
readline: any
readStream: any
headers: any
fieldSchema: string[] = new Array()
rowsArray: string[] = new Array()
constructor(pathToCsv: string, encoding: string) {
this.pathToNotUpdatedCsv = pathToCsv
this.readStream = fs.createReadStream(this.pathToNotUpdatedCsv, encoding = 'utf8');
}
async updateCsv(){
//read all csv lines of not updated file
this.readline = readline.createInterface({
input: this.readStream,
crlfDelay: Infinity
});
//save them to array
for await (const line of this.readline) {
this.rowsArray.push(line)
}
//remove columns in csv and return updated csv array
this.rowsArray = this.getUpdatedRows()
//separating headers and other rows in csv
this.headers = this.rowsArray.shift()
}
getUpdatedRows(){
let headersBeforeUpdate = this.removeExtraQuotes(this.rowsArray[0])
let rowsAfterUpdate = []
let indexesOfColumnToDelete = []
let partOfUpdatedArray = []
//get indexes which will be used for deletion of headers and content rows
skipHeaders.forEach((header) => {
indexesOfColumnToDelete.push(headersBeforeUpdate.indexOf(header))
})
//delete rows by index
this.rowsArray.forEach(row => {
partOfUpdatedArray = this.removeExtraQuotes(row)
indexesOfColumnToDelete.forEach(index=>{
partOfUpdatedArray.splice(index, 1) // remove just that one column
})
rowsAfterUpdate.push(partOfUpdatedArray)
})
return rowsAfterUpdate
}
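For the Promises / async-await part: one way is to keep the original stream pipeline, collect the transformed rows in the transform callback, and wrap the whole thing in a Promise that resolves once the output file has been written. A minimal sketch, assuming the same csv package and skipHeaders array as above (error handling kept simple):
import * as fs from "fs";
import * as csv from "csv";

function updateCsv(inputPath: string, outputPath: string, skipHeaders: string[]): Promise<any[]> {
  const rows: any[] = [];
  return new Promise<any[]>((resolve, reject) => {
    fs.createReadStream(inputPath)
      .pipe(csv.parse({ delimiter: ',', columns: true }))
      .pipe(csv.transform((input: any) => {
        skipHeaders.forEach((header) => delete input[header]);
        rows.push(input); // keep the transformed row for use outside the stream
        return input;
      }))
      .pipe(csv.stringify({ header: true }))
      .pipe(fs.createWriteStream(outputPath))
      .on('finish', () => resolve(rows)) // all data flushed to the output file
      .on('error', reject);
  });
}

// usage:
// const updatedRows = await updateCsv('allure-report/data/suites.csv', 'updated.csv', skipHeaders);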

NeutralinoJS storage

This is the NeutralinoJS storage API for writing JSON. Is it possible to update the JSON file (push data), not just overwrite the data with a new JS object? How can I do that?
// Javascript Object to be stored as JSON
let data = {
bucket : 'test',
content : {
item : 10
}
}
// stores the data into JSON based data store.
Neutralino.storage.putData(data,
// executes on successful storage of data
function () {
console.log('Data saved to storage/test.json');
},
// executes if an error occurs
function () {
console.log('An error occured while saving the Data');
}
);
The Neutralino.storage API takes a string instead of a JSON object to save into local storage.
And you can convert your JavaScript objects to strings very easily, for example:
const myUser = {
name: "John Doe",
age: 19,
married: false
}
const myUserString = JSON.stringify(myUser);
console.log(myUserString); // {"name":"John Doe","age":19,"married":false}
Here you can see how we used the JSON.stringify method to convert our JavaScript object into a string.
We can also convert the generated string back into our JavaScript object, for example:
const myUserString = '{"name":"John Doe","age":19,"married":false}';
const myUser = JSON.parse(myUserString);
console.log(myUser);
So now we can easily store our objects and arrays in local storage and easily modify them, for example:
async function saveToStorage(myUser) {
let myUserString = JSON.stringify(myUser);
await Neutralino.storage.setData('myUser', myUserString);
}
async function loadFromStorage() {
let myUserString = await Neutralino.storage.getData('myUser');
let myUser = JSON.parse(myUserString);
return myUser;
}
saveToStorage({
name: "John Doe",
age: 19,
married: false
}).then(async () => {
let myUser = await loadFromStorage();
myUser.name = "Jane Doe"
await saveToStorage(myUser);
});
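To answer the original question's "push data instead of overwrite" part, here is a minimal sketch that loads the stored JSON, appends to it, and saves it back, using the same setData/getData calls as above (the key name 'items' and the empty-array fallback are assumptions):
async function pushToStorage(newItem) {
  let items = [];
  try {
    // load whatever is already stored under this key
    const stored = await Neutralino.storage.getData('items');
    items = JSON.parse(stored);
  } catch (e) {
    // nothing stored yet (or unreadable) - start with an empty array
  }
  items.push(newItem);
  await Neutralino.storage.setData('items', JSON.stringify(items));
}

// usage:
// await pushToStorage({ bucket: 'test', content: { item: 10 } });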

Map multiple objects to single object in stream

I have some very large (> 500MB) JSON files that I need to map to a new format and upload to a new DB.
The old format:
{
id: '001',
timestamp: 2016-06-02T14:10:53Z,
contentLength: 123456,
filepath: 'original/...',
size: 'original'
},
{
id: '001',
timestamp: 2016-06-02T14:10:53Z,
contentLength: 24565,
filepath: 'medium/...',
size: 'medium'
},
{
id: '001',
timestamp: 2016-06-02T14:10:53Z,
contentLength: 5464,
filepath: 'small/...',
size: 'small'
}
The new format:
{
Id: '001',
Timestamp: 2016-06-02T14:10:53Z,
OriginalSize: {
ContentLength: 123456,
FilePath: 'original/...'
},
MediumSize: {
ContentLength: 24565,
FilePath: 'medium/...'
},
SmallSize: {
ContentLength: 5464,
FilePath: 'small/...'
}
}
I was achieving this with small datasets like this, processing the 'original' size first:
let out = data.filter(o => o.size === 'original').map(o => {
return {
Id: o.id,
Timestamp: o.timestamp,
OriginalSize: {
ContentLength: o.contentLength,
FilePath: o.filepath
}
};
});
data.filter(o => o.size !== 'original').forEach(o => {
let orig = out.find(function (og) {
return og.Timestamp === o.timestamp;
});
orig[o.size + 'Size'] = {
ContentLength: o.contentLength,
FilePath: o.filepath
};
});
// out now contains the correctly-formatted objects
The problem comes with the very large datasets, where I can't load the hundreds of megabytes of JSON into memory all at once. This seems like a great time to use streams, but of course if I read the file in chunks, running .find() on a small array to find the 'original' size won't work. If I scan through the whole file to find originals and then scan through again to add the other sizes to what I've found, I end up with the whole dataset in memory anyway.
I know of JSONStream, which would be great if I was doing a simple 1-1 remapping of my objects.
Surely I can't be the first one to run into this kind of problem. What solutions have been used in the past? How can I approach this?
I think the trick is to update the database on the fly. If the JSON file is too big for memory, then I expect the resulting set of objects (out in your example) is too big for memory too.
In the comments you state the JSON file has one object per line. Therefore, use Node.js's built-in fs.createReadStream and readline to get each line of the text file. Next, process the line (a string) into a JSON object, and finally update the database.
parse.js
var readline = require('readline');
var fs = require('fs');
var jsonfile = 'text.json';
var linereader = readline.createInterface({
input: fs.createReadStream(jsonfile)
});
linereader.on('line', function (line) {
var obj = parseJSON(line); // convert line (string) to JSON object
// check DB for existing id/timestamp
if ( existsInDB({id:obj.id, timestamp:obj.timestamp}) ) {
updateInDB(obj); // already exists, so UPDATE
}
else { insertInDB(obj); } // does not exist, so INSERT
});
// DUMMY functions below, implement according to your needs
function parseJSON (str) {
str = str.replace(/,\s*$/, ""); // lose trailing comma
return eval('(' + str + ')'); // insecure! so no unknown sources
}
function existsInDB (obj) { return true; }
function updateInDB (obj) { console.log(obj); }
function insertInDB (obj) { console.log(obj); }
text.json
{ id: '001', timestamp: '2016-06-02T14:10:53Z', contentLength: 123456, filepath: 'original/...', size: 'original' },
{ id: '001', timestamp: '2016-06-02T14:10:53Z', contentLength: 24565, filepath: 'medium/...', size: 'medium' },
{ id: '001', timestamp: '2016-06-02T14:10:53Z', contentLength: 5464, filepath: 'small/...', size: 'small' }
NOTE: I needed to quote the timestamp value to avoid a syntax error. From your question and example script I expect you either don't have this problem or already have this solved, maybe another way.
Also, my implementation of parseJSON may be different from how you are parsing the JSON. Plain old JSON.parse failed for me due to the properties not being quoted.
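For the remapping itself, here is a small sketch of how each parsed line could be turned into a partial document in the new format before the updateInDB/insertInDB step (those remain the dummy placeholders from above):
// Map a single old-format row to a partial new-format document:
// size 'original' -> OriginalSize, 'medium' -> MediumSize, 'small' -> SmallSize.
function toPartialDoc(obj) {
  var sizeKey = obj.size.charAt(0).toUpperCase() + obj.size.slice(1) + 'Size';
  var doc = { Id: obj.id, Timestamp: obj.timestamp };
  doc[sizeKey] = { ContentLength: obj.contentLength, FilePath: obj.filepath };
  return doc;
}
// In the 'line' handler, merge toPartialDoc(obj) into the existing record for that
// id/timestamp (UPDATE), or insert it as a new record if none exists yet (INSERT).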
Set up some DB instance that can store JSON documents: MongoDB, or PostgreSQL (they recently introduced the jsonb data type for storing JSON documents). Iterate through the old JSON documents and combine them into the new structure, using the DB as the storage, so that you overcome the memory problem.
I'm quite sure that there is no way to achieve your goal without either a) drastically compromising the speed of the process or b) creating a poor man's DB from scratch (which seems like a bad thing to do :) )
