I am building an online code editor. Currently, I am storing the source file and the input file on my server. Is there any way to store the source files on the client side? I am using a UUID to generate a random filename, but I would like to keep constant names like main.c or main.cpp, just like OnlineGDB does. However, if I use a constant file name, files will collide when multiple users are using the editor at the same time.
The code to generate the file:
const fs = require("fs");
const path = require("path");
const { v4: uuid } = require("uuid");
const dirCodes = path.join(__dirname, "codes");
const inputPath = path.join(__dirname, "inputs");
if (!fs.existsSync(dirCodes)) {
fs.mkdirSync(dirCodes, { recursive: true });
}
if (!fs.existsSync(inputPath)) {
fs.mkdirSync(inputPath, { recursive: true });
}
const generateFile = async (jobId, format, content, input) => {
const filename = `${jobId}.${format}`;
const inputFileName = `${jobId}.txt`;
const inputFilePath = path.join(inputPath, inputFileName);
const filepath = path.join(dirCodes, filename);
// writeFileSync is synchronous, so await has no effect; use the promise-based API instead
await fs.promises.writeFile(filepath, content);
await fs.promises.writeFile(inputFilePath, input);
return [filepath, inputFileName];
};
module.exports = {
generateFile,
};
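A variant worth noting (a sketch added for illustration, not part of the original snippet - the per-job directory layout is an assumption): giving every job its own UUID-named folder lets the file inside keep a constant name such as main.c or Main.java, which also solves the Java classname problem, since nothing collides across users.
const fs = require("fs");
const path = require("path");
const { v4: uuid } = require("uuid");
const jobsRoot = path.join(__dirname, "jobs");
const generateJob = async (format, content, input) => {
// every job lives in its own directory, so the base name never collides
const jobId = uuid();
const jobDir = path.join(jobsRoot, jobId);
await fs.promises.mkdir(jobDir, { recursive: true });
// Java requires the file name to match the public class (assumed to be Main here)
const baseName = format === "java" ? "Main.java" : `main.${format}`;
const filepath = path.join(jobDir, baseName);
const inputFilePath = path.join(jobDir, "input.txt");
await fs.promises.writeFile(filepath, content);
await fs.promises.writeFile(inputFilePath, input);
return { jobId, filepath, inputFilePath };
};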
I want to keep the file name constant. Also, random file names will not work for Java, since we run the classname.class file.
Please let me know if I can store the file on the client side or in local storage.
Unfortunately, browsers don't allow client-side code to access the filesystem, as it is a security risk.
If you want to avoid saving files on the server, you can use localStorage/sessionStorage to persist data across sessions at the client level.
Then in your case, you'll have a button to "download" the code directory.
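For illustration, a minimal browser-side sketch (untested; the editorContent variable and element id are assumptions) of saving the code to localStorage and serving it back through a download button:
// save the current editor contents under the job id
localStorage.setItem(jobId, editorContent);
// "download" button: wrap the stored code in a Blob and trigger a save
document.getElementById("download-btn").addEventListener("click", () => {
const code = localStorage.getItem(jobId) || "";
const blob = new Blob([code], { type: "text/plain" });
const a = document.createElement("a");
a.href = URL.createObjectURL(blob);
a.download = "main.c"; // a constant name is safe on the client side
a.click();
URL.revokeObjectURL(a.href);
});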
We have two ways to save some data:
1. localStorage [ persists permanently ]
2. sessionStorage [ lasts only for the session ]
For both, the API stores a string representation of your data.
To save some data: localStorage.setItem('data', JSON.stringify(data))
To retrieve that data: JSON.parse(localStorage.getItem('data'))
The reason we parse is to convert the string back to its original format [ object/array ].
To remove data from storage: localStorage.removeItem('data')
You can use
window.localStorage.setItem(jobId, content);
If the content is a JSON object you can use
window.localStorage.setItem(jobId, JSON.stringify(content));
And retrieve it like this:
let json_object = JSON.parse(localStorage.getItem(jobId));
Apologies if this is straightforward, I'm very much not a software developer!
I have a web app using Node (Node 16.17.0, npm 8.13.2). I have HTML files that I upload to an Azure Blob Storage container.
I would like to serve the files directly from the storage container to a user.
In the past, I've typically used something like this:
app.get(
'/analysis/example', async (req, res) => {
const a = path.join(__dirname, 'app', 'analysis', 'example_file.html');
res.sendFile(a);
}
);
However, I'm struggling a bit to find appropriate documentation or examples for serving a file from blob storage.
I have managed to find a way to print the list of files and the file itself to the console - so I know for sure that I've managed to gain access properly (Azure documentation is quite good) - but I'm just not sure how to make sure the file is in an appropriate state to be served back.
I've tried this:
// GAIN ACCESS TO THE APPROPRIATE STORAGE ACCOUNT
const blobServiceClient = BlobServiceClient.fromConnectionString(
process.env.AZURE_STORAGE_CONNECTION_STRING
);
const containerClient = blobServiceClient.getContainerClient(
process.env.AZURE_STORAGE_CONTAINER
);
const blobClient = containerClient.getBlobClient(
process.env.AZURE_STORAGE_BLOB
);
// A FUNCTION TO HELP TURN THE STREAM INTO TEXT (TURN THIS INTO SOMETHING ELSE?)
async function streamToText(readable) {
readable.setEncoding('utf8');
let data = '';
for await (const chunk of readable) {
data += chunk;
}
return data;
};
// SERVE THE HTML FILE
app.get(
'/analytics/example', async (req, res) => {
// THIS CHUNK SUCCESSFULLY LISTS AVAILABLE FILES IN THE CONSOLE
// for await (const blob of containerClient.listBlobsFlat()) {
// console.log("\t", blob.name);
// };
const blobDownload = await blobClient.download(0);
const blob = await streamToText(blobDownload.readableStreamBody);
res.sendFile(blob);
}
);
I've also tried the final chunk below (I found an online resource that mentioned that DOMParser wouldn't work with Node):
// SERVE THE HTML FILE
app.get(
'/analytics/example', async (req, res) => {
// THIS CHUNK SUCCESSFULLY LISTS AVAILABLE FILES IN THE CONSOLE
// for await (const blob of containerClient.listBlobsFlat()) {
// console.log("\t", blob.name);
// };
const blobDownload = await blobClient.download(0);
const blob = await streamToText(blobDownload.readableStreamBody);
var DOMParser = require('xmldom').DOMParser;
let parser = new DOMParser();
let doc = parser.parseFromString(blob, 'text/html');
res.sendFile(doc.body);
}
);
Any help much appreciated.
I've just worked it out - it was simply the "res.sendFile" part; it should have been "res.send".
The below is the correct working code to read the file from Azure Storage and serve it back to the app.
// GAIN ACCESS TO THE APPROPRIATE STORAGE ACCOUNT
const blobServiceClient = BlobServiceClient.fromConnectionString(
process.env.AZURE_STORAGE_CONNECTION_STRING
);
const containerClient = blobServiceClient.getContainerClient(
process.env.AZURE_STORAGE_CONTAINER
);
const blobClient = containerClient.getBlobClient(
process.env.AZURE_STORAGE_BLOB
);
// A FUNCTION TO HELP TURN THE STREAM INTO TEXT (TURN THIS INTO SOMETHING ELSE?)
async function streamToText(readable) {
readable.setEncoding('utf8');
let data = '';
for await (const chunk of readable) {
data += chunk;
}
return data;
};
// SERVE THE HTML FILE
app.get(
'/analytics/example', async (req, res) => {
const blobDownload = await blobClient.download(0);
const blob = await streamToText(blobDownload.readableStreamBody);
res.send(blob);
}
);
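One possible refinement (an untested sketch of my own - the answer above buffers the whole blob into a string first): the blob's stream can be piped straight to the response, with the content type set explicitly:
app.get('/analytics/example', async (req, res) => {
const blobDownload = await blobClient.download(0);
res.set('Content-Type', 'text/html');
// stream the blob body directly instead of buffering it in memory
blobDownload.readableStreamBody.pipe(res);
});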
I am using the Google Text-to-Speech API in Node.js to generate speech from text. I was able to get an output file with the same name as the text that is generated for the speech. However, I need to tweak this a bit: I would like to generate multiple files at the same time. The point is that I have, for example, 5 words (or sentences) to generate, e.g. cat, dog, house, sky, sun, and I would like to generate each into a separate file: cat.wav, dog.wav, etc.
I also want the application to be able to read these words from a *.txt file (each word/sentence on a separate line of the *.txt file).
Is there such a possibility? Below I am pasting the *.js file code and the *.json file code that I am using.
*.js
const textToSpeech = require('@google-cloud/text-to-speech');
const fs = require('fs');
const util = require('util');
const projectId = 'forward-dream-295509'
const keyFilename = 'myauth.json'
const client = new textToSpeech.TextToSpeechClient({ projectId, keyFilename });
const YourSetting = fs.readFileSync('setting.json');
async function Text2Speech(YourSetting) {
const [response] = await client.synthesizeSpeech(JSON.parse(YourSetting));
const writeFile = util.promisify(fs.writeFile);
await writeFile(JSON.parse(YourSetting).input.text + '.wav', response.audioContent, 'binary');
console.log(`Audio content written to file: ${JSON.parse(YourSetting).input.text}`);
}
Text2Speech(YourSetting);
*.json
{
"audioConfig": {
"audioEncoding": "LINEAR16",
"pitch": -2,
"speakingRate": 1
},
"input": {
"text": "Text to Speech"
},
"voice": {
"languageCode": "en-US",
"name": "en-US-Wavenet-D"
}
}
I'm not very good at programming. I found a tutorial on Google on how to do this and modified it slightly so that the name of the saved file is the same as the generated text.
I would be very grateful for your help.
Arek
Here ya go - I haven't tested it, but this should show how to read a text file, split it into lines, then run TTS over each line with a set concurrency. It uses the p-map and filenamify npm packages, which you'll need to add to your project. Note that Google may have API throttling or rate limits that I didn't take into account here - consider using the p-throttle library if that's a concern.
// https://www.npmjs.com/package/p-map
const pMap = require('p-map');
// https://github.com/sindresorhus/filenamify
const filenamify = require('filenamify');
const textToSpeech = require('@google-cloud/text-to-speech');
const fs = require('fs');
const path = require('path');
const projectId = 'forward-dream-295509'
const keyFilename = 'myauth.json'
const client = new textToSpeech.TextToSpeechClient({ projectId, keyFilename });
const rawSettings = fs.readFileSync('setting.json', { encoding: 'utf8'});
// base data for all requests (voice, etc)
const yourSetting = JSON.parse(rawSettings);
// where wav files will be put
const outputDirectory = '.';
async function Text2Speech(text, outputPath) {
// include the settings in settings.json, but change text input
const request = {
...yourSetting,
input: { text }
};
const [response] = await client.synthesizeSpeech(request);
await fs.promises.writeFile(outputPath, response.audioContent, 'binary');
console.log(`Audio content written to file: ${text} = ${outputPath}`);
// not really necessary, but you could return something if you wanted to
return response;
}
// process a line of text - write to file and report result (success/error)
async function processLine(text, index) {
// create output path based on text input (use library to ensure it's filename safe)
const outputPath = path.join(outputDirectory, filenamify(text) + '.wav');
const result = {
text,
lineNumber: index,
path: outputPath,
isSuccess: null,
error: null
};
try {
const response = await Text2Speech(text, outputPath);
result.isSuccess = true;
} catch (error) {
console.warn(`Failed: ${text}`, error);
result.isSuccess = false;
result.error = error;
}
return result;
}
async function processInputFile(filepath, concurrency = 3) {
const rawText = fs.readFileSync(filepath, { encoding: 'utf8'});
const lines = rawText
// split into one item per line
.split(/[\r\n]+/)
// remove surrounding whitespace
.map(s => s.trim())
// remove empty lines
.filter(Boolean);
const results = await pMap(lines, processLine, { concurrency });
console.log('Done!');
console.table(results);
}
// create sample text file
const sampleText = `Hello World
cat
dog
another line of text`;
fs.writeFileSync('./my-text-lines.txt', sampleText);
// process each line in the text file, 3 at a time
processInputFile('./my-text-lines.txt', 3);
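If rate limits turn out to matter, the p-throttle wrapper mentioned above could look roughly like this (an untested sketch; the limit and interval values are made up, and recent p-throttle releases are ESM-only, so the require form assumes an older version):
// https://www.npmjs.com/package/p-throttle
const pThrottle = require('p-throttle');
// allow at most 5 synthesis calls per second (illustrative numbers)
const throttle = pThrottle({ limit: 5, interval: 1000 });
const throttledText2Speech = throttle(Text2Speech);
// processLine would then call throttledText2Speech(text, outputPath)
// instead of Text2Speech directly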
I am trying to read a CSV file inside a Firebase function so that I can send mail to all the records. I am planning to go with the following procedure:
upload the csv
fire an onFinalize function
read the file and send emails
Below is the function
import * as functions from "firebase-functions";
import * as mkdirp from "mkdirp-promise";
import * as os from "os";
import * as path from "path";
import csv = require('csvtojson');
const gcs = require('@google-cloud/storage')({ keyFilename: 'service-account-credentials.json' });
const csvDirectory = "csv";
export = functions.storage.object().onFinalize(async (object) => {
const filePath = object.name;
const contentType = object.contentType;
const fileDir = path.dirname(filePath);
if(fileDir.startsWith(csvDirectory) && contentType.startsWith("text/csv")) {
const bucket = gcs.bucket(object.bucket);
const file = bucket.file(filePath);
const fileName = path.basename(filePath);
const tempLocalFile = path.join(os.tmpdir(), filePath);
const tempLocalDir = path.dirname(tempLocalFile);
console.log("values", bucket, file, fileName, tempLocalDir, tempLocalFile);
console.log("csv file uploadedeeeed");
await mkdirp(tempLocalDir);
await bucket.file(filePath).download({
destination: tempLocalFile
});
console.log('The file has been downloaded to', tempLocalFile);
csv()
.fromFile(tempLocalFile)
.then((jsonObj) => {
console.log(jsonObj);
})
}
});
While running the code I only get the "csv file uploadedeeeed" message that I log to the console, and then I get a timeout after 1 minute. I am also not getting the "The file has been downloaded to" log. Can anybody look at the code and help me get out of this?
You are mixing the use of async/await with a call to the then() method. You should also use await for the fromFile() method.
The following should do the trick (untested):
export = functions.storage.object().onFinalize(async (object) => {
const filePath = object.name;
const contentType = object.contentType;
const fileDir = path.dirname(filePath);
try {
if (fileDir.startsWith(csvDirectory) && contentType.startsWith("text/csv")) {
//.....
await mkdirp(tempLocalDir);
await bucket.file(filePath).download({
destination: tempLocalFile
});
console.log('The file has been downloaded to', tempLocalFile);
const jsonObj = await csv().fromFile(tempLocalFile);
console.log(jsonObj);
return null;
} else {
//E.g. throw an error
}
} catch (error) {
//.....
}
});
Also note that (independently of the mixed use of async/await and then()), with the following line in your code
csv().fromFile(tempLocalFile).then(...)
you were not returning the Promise returned by the fromFile() method. This is a key point in Cloud Functions.
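In other words, the difference is just the return (a minimal illustration of the same point):
// not returned: Cloud Functions may terminate before the work finishes
csv().fromFile(tempLocalFile).then((jsonObj) => console.log(jsonObj));
// returned (or awaited): the platform waits for the Promise to settle
return csv().fromFile(tempLocalFile).then((jsonObj) => console.log(jsonObj));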
I would suggest you watch the official Video Series on Cloud Functions (https://firebase.google.com/docs/functions/video-series/) and in particular the videos on Promises titled "Learn JavaScript Promises".
I need to create a zip file with PDFs that I receive from AWS storage, and I am trying to do this with ADM-zip in Node.js, but I can't read the final file.zip.
Here is the code.
var zip = new AdmZip();
// add file directly
var content = data.Body.buffer;
zip.addFile("test.pdf", content, "entry comment goes here");
// console.log(content)
// add local file
zip.addLocalFile(`./tmp/boletos/doc.pdf`);
// // get everything as a buffer
var willSendthis = zip.toBuffer();
console.log(willSendthis)
// // or write everything to disk
zip.writeZip("test.zip", `../../../tmp/boletos/${datastring}.zip`);
As it is, this only creates a .zip for each file.
I was also facing this issue. I looked through a lot of SO posts. This is how I was able to create a zip with multiple files from download URLs. Please keep in mind, I'm not sure whether this is best practice, or whether it will blow up memory.
Create a zip folder from a list of id's of requested resources via the client.
// these modules need to be required at the top of the file
const https = require('https');
const AdmZip = require('adm-zip');
const zip = new AdmZip();
await Promise.all(sheetIds.map(async (sheetId) => {
const downloadUrl = await this.downloadSds({ sheetId, userId, memberId });
if (downloadUrl) {
await new Promise((resolve) => https.get(downloadUrl, (res) => {
const data = [];
res.on('data', (chunk) => {
data.push(chunk);
}).on('end', () => {
const buffer = Buffer.concat(data);
resolve(zip.addFile(`${sheetId}.pdf`, buffer));
});
}));
} else {
console.log('could not download');
}
}));
const zipFile = zip.toBuffer();
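For completeness (this part is my assumption - the original answer doesn't show it), the buffer could be sent back from an Express handler with explicit zip headers:
// send the in-memory zip back to the client
res.set({
'Content-Type': 'application/zip',
'Content-Disposition': 'attachment; filename="sheets.zip"',
});
res.send(zipFile);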
I then used downloadjs in my React.js client to download.
const result = await sds.bulkDownloadSds(payload);
if (result.status > 399) return store.rejectWithValue({ errorMessage: result?.message || 'Error', redirect: result.redirect });
const filename = 'test1.zip';
const document = await result.blob();
download(document, filename, 'zip');
When trying to access an image in my home directory of Firebase Storage with Node.js functions, I'm getting [object Object] as a response. I guess I initialized the bucket incorrectly, but I'm not sure where I'm going wrong.
This is the debug info in Firebase Functions:
ChildProcessError: `composite -compose Dst_Out [object Object] [object Object] /tmp/output_final2.png` failed with code 1
Here's my code:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const storage = admin.storage();
const os = require('os');
const path = require('path');
const spawn = require('child-process-promise').spawn;
exports.onFileChange= functions.storage.object().onFinalize(async object => {
const bucket = storage.bucket('myID.appspot.com/');
const contentType = object.contentType;
const filePath = object.name;
console.log('File change detected, function execution started');
if (object.resourceState === 'not_exists') {
console.log('We deleted a file, exit...');
return;
}
if (path.basename(filePath).startsWith('changed-')) {
console.log('We already changed that file!');
return;
}
const destBucket = bucket;
const tmpFilePath = path.join(os.tmpdir(), path.basename(filePath));
const border = bucket.file("border.png");
const mask1 = bucket.file("mask1.png");
const metadata = { contentType: contentType };
return destBucket.file(filePath).download({
destination: tmpFilePath
}).then(() => {
return spawn('composite', ['-compose', 'Dst_Out', mask1, border, tmpFilePath]);
}).then(() => {
return destBucket.upload(tmpFilePath, {
destination: 'changed-' + path.basename(filePath),
metadata: metadata
})
});
});
If, with
const bucket = storage.bucket('myID.appspot.com/');
your goal is to initialize the default bucket, you should just do
const bucket = storage.bucket();
since you have declared storage as admin.storage()
UPDATE (following your comment about const border = bucket.file("border.png");)
In addition, by looking at the code of a similar Cloud Function (from the official samples, using ImageMagick and spawn) it appears that you should not pass to the spawn() method some File objects created through the file() method of the Cloud Storage Node.js Client API (i.e. const border = bucket.file("border.png");) but some files that you have previously saved to a temp directory.
Look at the following excerpt from the Cloud Function example referred to above. They define some temporary directory and file paths (using the path module), download the files to this directory and use them to call the spawn() method.
//....
const filePath = object.name;
const contentType = object.contentType; // This is the image MIME type
const fileDir = path.dirname(filePath);
const fileName = path.basename(filePath);
const thumbFilePath = path.normalize(path.join(fileDir, `${THUMB_PREFIX}${fileName}`)); // <---------
const tempLocalFile = path.join(os.tmpdir(), filePath); // <---------
const tempLocalDir = path.dirname(tempLocalFile); // <---------
const tempLocalThumbFile = path.join(os.tmpdir(), thumbFilePath); // <---------
//....
// Cloud Storage files.
const bucket = admin.storage().bucket(object.bucket);
const file = bucket.file(filePath);
const thumbFile = bucket.file(thumbFilePath);
const metadata = {
contentType: contentType,
// To enable Client-side caching you can set the Cache-Control headers here. Uncomment below.
// 'Cache-Control': 'public,max-age=3600',
};
// Create the temp directory where the storage file will be downloaded.
await mkdirp(tempLocalDir) // <---------
// Download file from bucket.
await file.download({destination: tempLocalFile}); // <---------
console.log('The file has been downloaded to', tempLocalFile);
// Generate a thumbnail using ImageMagick.
await spawn('convert', [tempLocalFile, '-thumbnail', `${THUMB_MAX_WIDTH}x${THUMB_MAX_HEIGHT}>`, tempLocalThumbFile], {capture: ['stdout', 'stderr']});
//.....
You can't pass Cloud Storage File objects to spawn. You need to pass strings that will be used to build the command line. This means you need to download those files locally to /tmp before you can work with them - ImageMagick doesn't know how to work with files in Cloud Storage.
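Applied to the code in the question, that could look roughly like this (an untested sketch to go inside the onFinalize handler; paths and argument order are taken from the snippets above):
const tmpBorder = path.join(os.tmpdir(), 'border.png');
const tmpMask1 = path.join(os.tmpdir(), 'mask1.png');
// download the overlay images to local disk first...
await bucket.file('border.png').download({ destination: tmpBorder });
await bucket.file('mask1.png').download({ destination: tmpMask1 });
// ...then pass plain string paths to ImageMagick
await spawn('composite', ['-compose', 'Dst_Out', tmpMask1, tmpBorder, tmpFilePath]);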