Create a file in the Cloud Functions / temp directory - javascript

I'm trying to create a file that stores room information so I can use it in other functions. I have the following code:
const fs = require('fs')
const { tmpdir } = require('os');
fs.writeFileSync(`${tmpdir}/room-keys.tmp`, `${room.key}\n`, (err) => {
  if (err) throw err
  functions.logger.info(`New room registered, ${JSON.stringify(room)}`)
})
Every time I try to create a file, it returns an error saying that the directory does not exist

Firstly, you're not using the os.tmpdir import correctly: tmpdir is a function, not a string, so ${tmpdir} interpolates the function's source code into your path, which is why the resulting directory does not exist. (Note also that fs.writeFileSync is synchronous and does not accept a callback; use fs.writeFile if you want the callback form.) If you want to generate a path to the temp directory, you would do it like this:
const os = require('os');
const path = require('path');
const tmp = os.tmpdir();
const file = path.join(tmp, "file.ext");
Secondly, you should not expect the temp directory to persist across function invocations, and it is not shared at all between different deployed functions. It should only be used while processing a single request, and cleaned up before the function ends. It is a memory-backed filesystem, so writing files there consumes memory.
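A minimal sketch of that pattern, assuming a firebase-functions HTTPS trigger and a room object arriving in the request body (the trigger, payload shape, and file name are illustrative, not from the question):
const fs = require('fs');
const os = require('os');
const path = require('path');
const functions = require('firebase-functions');

exports.registerRoom = functions.https.onRequest((req, res) => {
  // os.tmpdir() must be called as a function; path.join() builds the path safely
  const file = path.join(os.tmpdir(), 'room-keys.tmp');
  const room = req.body; // hypothetical payload shape

  fs.writeFileSync(file, `${room.key}\n`); // synchronous write, no callback
  functions.logger.info(`New room registered, ${JSON.stringify(room)}`);

  try {
    // ... use the file while handling this single request ...
  } finally {
    fs.unlinkSync(file); // free the memory-backed storage before returning
  }
  res.sendStatus(200);
});
Anything that needs to outlive a single invocation belongs in durable storage such as Cloud Storage or Firestore rather than the temp directory.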

Related

How can I process a file only if criteria are met in a nodejs express like middleware style?

PROBLEM :
Packages (multer, formidable, busboy...) usually store the file directly on disk or in memory, with custom validation available through options. My problem is that when I send fields & files at the same time, I don't want to waste resources processing the entire file onto the disk, into memory, or even into external storage like AWS S3; I want to stop everything and return an error to the client.
I would like the following, illustrated in the express route below:
1°) Get the file and fields metadata in a multipart request and attach them to the request object
file : size + mimetype
fields : values
2°) Some validation on the properties created from previous step (validation with express-validator)
3°) technologyController.createTechnology which does the following :
If ok : process the stream of the file & store it directly into AWS S3 (without it being stored in memory or on the local disk)
If error : don't process the file & don't create a record in the database, but return a list of errors to the client
I hope my requirements are clear enough; if not, don't hesitate to ask.
I can't figure out how to do it properly, it's been days.
Code samples :
multipart.js
const formidable = require('formidable');

exports.getMetadata = (req, res, next) => {
  const form = formidable({
    uploadDir: `${__dirname}/../controllers/uploads`,
    keepExtensions: true,
  });
  const filesInfos = [];

  form.onPart = (part) => {
    // I can filter here based on the mimetype but I don't have the information about the file size
    // The formidable package doesn't include it
    if (!part.mimetype) return form._handlePart(part);
    filesInfos.push({ name: part.name, mimetype: part.mimetype });
  };

  form.parse(req, (err, fields, files) => {
    // when parse() is called, files & fields are being processed already
  });

  return next();
};
route.js
const express = require('express');
const technologyController = require('../controllers/technologyController');
const multipart = require('../utils/multipart');
const validation = require('../utils/validation');

const router = express.Router();

router.route('/')
  .post(
    multipart.getMetadata, // get metadata from files & fields on multipart header
    validation.technology, // validation with express-validator module
    technologyController.createTechnology, // validate fields + files (size, mimetype) && create a record in database if ok
  );

module.exports = router;
The code samples above are available in the public gist I made here : https://gist.github.com/bottom-up-ai/e411f3f7551261d15a93dbe1f459ed80
Thank you guys for your time and answers :)
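One possible direction, sketched under stated assumptions rather than as a verified solution: busboy hands you each file as a stream together with its metadata before the body is consumed, so you can reject on mimetype or size limit first, and otherwise pipe the bytes straight to S3 without touching disk or buffering the whole file. The bucket name, size limit, and mimetype check below are illustrative:
const Busboy = require('busboy');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

exports.streamIfValid = (req, res, next) => {
  const bb = Busboy({
    headers: req.headers,
    limits: { fileSize: 5 * 1024 * 1024 }, // enforced while streaming, not after
  });

  bb.on('file', (name, file, info) => {
    // Metadata is available here before a single byte has been stored anywhere
    if (info.mimeType !== 'image/png') {
      file.resume(); // drain the stream without writing it to disk or memory
      return next(new Error('invalid mimetype'));
    }
    // Pipe the incoming bytes directly to S3 as they arrive
    s3.upload({ Bucket: 'my-bucket', Key: info.filename, Body: file }, (err) => {
      if (err) return next(err);
      next();
    });
    // busboy emits 'limit' on the file stream when fileSize is exceeded
    file.on('limit', () => next(new Error('file too large')));
  });

  bb.on('field', (name, value) => {
    req.body = req.body || {};
    req.body[name] = value; // expose fields to express-validator downstream
  });

  req.pipe(bb);
};
The error paths in this sketch are simplified; in practice you would need to guard against next() being called twice (for example when the S3 upload is already in flight and the size limit then fires).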

Downloading a zip file from a given path in express api + react

So I'm completely lost at this point. I have had a mixture of success and failure, but I can't for the life of me get this working. I'm building up a zip file and storing it in a folder structure based on uploadRequestIds, and that all works fine. I'm fairly new to Node, but all I want is to take the file that was built up (which is completely valid and opens fine once it's been constructed on the backend) and send it on to the client.
const prepareRequestForDownload = (dirToStoreRequestData, requestId) => {
  const output = fs.createWriteStream(dirToStoreRequestData + `/Content-${requestId}.zip`);
  const zip = archiver('zip', { zlib: { level: 9 } });

  output.on('close', () => { console.log('archiver has been finalized.'); });
  zip.on('error', (err) => { throw err; });

  zip.pipe(output);
  zip.directory(dirToStoreRequestData, false);
  zip.finalize();
}
This is my function that builds up a zip file from all the files in a given directory and then stores it in said directory.
All I thought I would need to do is set some headers with an attachment disposition type and pipe a read stream of the zip file into the response, and then React would be able to save the content. But that just doesn't seem to be the case. How should this be handled, both on the API side (reading the zip and sending it) and on the React side (receiving the response and auto-downloading or prompting the user to save the file)?
There are some strategies to resolve this. When you redirect to a URL ending in .zip, nearly all browsers will start downloading it automatically. What you can do is return the download path to your client, something like:
http://api.site.com.br/my-file.zip
and then you can use:
window.open('URL here','_blank')
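If the zip is served by the same Express API, a minimal sketch of the route behind such a URL could look like the following; res.download sets the Content-Disposition: attachment header for you, and the directory lookup helper here is hypothetical:
const path = require('path');

app.get('/download/:requestId', (req, res) => {
  const dir = getDirForRequest(req.params.requestId); // hypothetical lookup into your folder structure
  const zipPath = path.join(dir, `Content-${req.params.requestId}.zip`);
  res.download(zipPath, (err) => {
    if (err && !res.headersSent) res.status(404).end();
  });
});
One caveat against the question's code: zip.finalize() completes asynchronously, so only serve the file after the output stream's 'close' event has fired.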

Stream array of remote files to amazon S3 in Node.js

I have an array of URLs to files that I want to upload to an Amazon S3 bucket. There are 2916 URLs in the array and the files have a combined size of 361MB.
I'm trying to accomplish this using streams, to avoid using too much memory. My solution works in the sense that all 2916 files get uploaded, but at least some of the uploads seem to be incomplete, as the total size of the uploaded files varies between 200MB and 361MB from run to run.
// Relevant code below (part of a larger function)

/* Used dependencies and setup:
const request = require('request');
const AWS = require('aws-sdk');
const stream = require('stream');
AWS.config.loadFromPath('config.json');
const s3 = new AWS.S3();
*/

function uploadStream(path, resolve) {
  const pass = new stream.PassThrough();
  const params = { Bucket: 'xxx', Key: path, Body: pass };
  s3.upload(params, (err, data) => resolve());
  return pass;
}

function saveAssets(basePath, assets) {
  const promises = [];
  assets.map(a => {
    const url = a.$.url;
    const key = a.$.path.substr(1);
    const localPromise = new Promise(
      (res, rej) => request.get(url).pipe(uploadStream(key, res))
    );
    promises.push(localPromise);
  });
  return Promise.all(promises);
}

saveAssets(basePath, assets).then(() => console.log("Done!"));
It's a bit messy with the promises, but I need to be able to tell when all files have been uploaded, and this part seems to work well at least (it writes "Done!" after ~25 secs when all promises are resolved).
I am new to streams so feel free to bash me if I approach this the wrong way ;-) Really hope I can get some pointers!
It seems I was trying to complete too many requests at once. Using async.eachLimit I now limit my code to a maximum of 50 concurrent requests, which is the sweet spot for me in the trade-off between execution time, memory consumption and stability (all of the downloads complete every time!).
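A rough sketch of that change, reusing the uploadStream() helper from the question (the limit of 50 is the value mentioned above; error handling is minimal):
const async = require('async');

function saveAssetsLimited(assets, done) {
  // Run at most 50 download-and-upload pipelines at any one time
  async.eachLimit(assets, 50, (a, cb) => {
    const key = a.$.path.substr(1);
    request.get(a.$.url).pipe(uploadStream(key, cb)); // uploadStream invokes cb when s3.upload completes
  }, done);
}

saveAssetsLimited(assets, (err) => console.log(err || 'Done!'));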

NodeJS sending an image(which may be modified) to the client with middleware

I want to mention that the image file is changing continuously.
I'm using the following middleware in NodeJS:
app.use("/image.jpg", express.static(__dirname + "/image.jpg"))
The problem is that Node serves image.jpg without ever indicating that the file has been modified.
Pressing a button triggers this part:
var image = new Image();
image.onload = function () { /* rendering to canvas */ };
image.src = "/image.jpg";
Somehow there is a problem: the server's picture file gets modified, and the server then tells the client to draw the image, but the client renders the first image it loaded all over again, even though the image at the URL has changed.
I think the client is caching the image, assuming it is unmodified, and keeps reusing it.
Is there a way to draw the current image at the URL?
Are there even better methods?
You can use the fetch() API to implement cache-busting without cluttering your client's browser cache with a bunch of /image.jpg?bust=... resources.
After deciding on a folder in your server that you want to allow static access to changing files (this is preferable to the pattern where you allowed static access to a single file), you can implement your real-time updates using fs.watch() like so:
Node app.js (with express 3/4):
const fs = require('fs')
const path = require('path')
const express = require('express')
const app = express()
const server = require('http').Server(app)
const io = require('socket.io')(server)

server.listen(process.env.PORT || 8080)
app.use('/', express.static(path.resolve(__dirname, './watched-directory')))
//...

io.on('connection', (socket) => {
  //...
});

fs.watch('watched-directory', {
  // don't want the watch process hanging if the server is closed
  persistent: false,
  // recursive watching is supported on Windows and macOS
  recursive: true,
}, (eventType, filename) => {
  if (eventType === 'change') {
    io.emit('filechange', filename)
  }
})
Browser index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
  let socket = io.connect()

  socket.on('filechange', async (filename) => {
    console.log(filename)
    // cache: 'no-store' bypasses the browser cache for this request
    let response = await fetch(filename, { cache: 'no-store' })
    let blob = await response.blob()
    let url = URL.createObjectURL(blob)

    // modified from your question
    let image = new Image()
    image.addEventListener('load', () => {
      // your canvas rendering code here
    })
    image.src = url
  })
</script>

Assigning an html file to a variable [duplicate]

I'm pretty new to NodeJS, and I am trying to read a file into a variable.
Here is my code:
var fs = require("fs"),
    path = require("path"),
    util = require("util");

var content;
console.log(content);

fs.readFile(path.join(__dirname, "helpers", "test.txt"), 'utf8', function (err, data) {
  if (err) {
    console.log(err);
    process.exit(1);
  }
  content = util.format(data, "test", "test", "test");
});
console.log(content);
But every time I run the script I get
undefined and undefined
What am I missing? Help please!
As stated in the comments under your question, Node is asynchronous, meaning that your callback has not been executed yet by the time your second console.log call runs.
If you move the log statement inside the callback, after reading the file, you should see the contents outputted:
var fs = require("fs"),
    path = require("path"),
    util = require("util");

var content;
console.log(content);

fs.readFile(path.join(__dirname, "helpers", "test.txt"), 'utf8', function (err, data) {
  if (err) {
    console.log(err);
    process.exit(1);
  }
  content = util.format(data, "test", "test", "test");
  console.log(content);
});
Even though this will solve your immediate problem, without an understanding of the async nature of Node you're going to encounter a lot of issues.
This similar stackoverflow answer goes into more detail about what other alternatives are available.
The following code snippet uses a ReadStream. It reads your data in separate chunks; if your data file is small, it will arrive in a single chunk. However, this is an asynchronous task, so any work on the data needs to happen inside the stream's event handlers.
var fs = require('fs');

/* include the file directory and file name instead of <__dirname + '/readMe.txt'> */
var readStream = fs.createReadStream(__dirname + '/readMe.txt', 'utf8');

var content = '';

readStream.on('data', function (chunk) {
  content += chunk; // accumulate chunks; a small file arrives as a single chunk
});

readStream.on('end', function () {
  performTask(); // all of the data has arrived at this point
});

function performTask() {
  console.log(content);
}
There is also another easy way, using a synchronous call. As it is synchronous, you do not need to worry about execution order: the program only moves to the next line once the current line has finished, unlike with an asynchronous task.
A more clear and detailed answer is provided in the following link:
Get data from fs.readFile
var fs = require('fs');
var content = fs.readFileSync('readMe.txt','utf8');
/* include your file name instead of <'readMe.txt'> and make sure the file is in the same directory. */
or just as easily with the doasync package:
const fs = require('fs');
const doAsync = require('doasync');
doAsync(fs).readFile('./file.txt')
.then((data) => console.log(data));
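For reference, the same promise-based flow works without any extra dependency via Node's built-in fs.promises API (available since Node 10):
const fs = require('fs').promises;

fs.readFile('./file.txt', 'utf8') // pass an encoding to get a string instead of a Buffer
  .then((data) => console.log(data));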
